{"id":5968,"date":"2025-10-15T14:18:37","date_gmt":"2025-10-15T14:18:37","guid":{"rendered":"https:\/\/musictechohio.online\/site\/gavin-newsom-vetoes-bill-kids-ai\/"},"modified":"2025-10-15T14:18:37","modified_gmt":"2025-10-15T14:18:37","slug":"gavin-newsom-vetoes-bill-kids-ai","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/gavin-newsom-vetoes-bill-kids-ai\/","title":{"rendered":"Gavin Newsom Vetoes Bill to Protect Kids From Predatory AI"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\"><em><strong>Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.<\/strong><\/em><\/p>\n<p class=\"article-paragraph skip\">California Governor Gavin Newsom vetoed a state bill on Monday that would\u2019ve prevented AI companies from allowing minors to access chatbots, unless the companies could prove that their products\u2019 guardrails could reliably prevent kids from engaging with inappropriate or dangerous content, including adult roleplay and conversations about self-harm.<\/p>\n<p class=\"article-paragraph skip\">The bill would have placed a new regulatory burden on companies, which currently adhere to effectively zero AI-specific <a href=\"https:\/\/www.techpolicy.press\/risks-of-state-led-ai-governance-in-a-federal-policy-vacuum\/\" rel=\"nofollow\">federal safety standards<\/a>. 
As it stands, there are no federal AI laws that compel AI companies to publicly disclose details of safety testing, including where it concerns minors\u2019 use of their products; despite this regulatory gap \u2014\u00a0or perhaps because of it \u2014\u00a0many apps for popular chatbots, including OpenAI\u2019s ChatGPT and Google\u2019s Gemini, are rated safe for children 12 and over on the iOS App Store and safe for teens on Google Play.<\/p>\n<p class=\"article-paragraph skip\">Surveys, meanwhile, continue to show that AI chatbots are becoming a huge part of life for young people, with <a href=\"https:\/\/futurism.com\/teens-ai-friends\">one recent report<\/a> showing that over half of teens are regular users of AI companion platforms.<\/p>\n<p class=\"article-paragraph skip\">If enacted, the bill \u2014\u00a0Assembly Bill 1064\u00a0\u2014\u00a0would\u2019ve been the first regulation of its kind in the nation.<\/p>\n<p class=\"article-paragraph skip\">As for his reasoning, <a href=\"https:\/\/thehill.com\/policy\/technology\/5553487-newsom-signs-ai-safety-bill\/\" rel=\"nofollow\">Newsom argued<\/a> that the bill stood to impose \u201csuch broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors.\u201d So, in short, Newsom says that requiring companies to prove they have foolproof guardrails around inappropriate content for kids \u2014\u00a0including where it concerns <a href=\"https:\/\/futurism.com\/ai-chatbots-conversations-minors-sex-offender-registry\">sex<\/a> and <a href=\"https:\/\/futurism.com\/ai-chatbots-teens-self-harm\">self-harm<\/a> \u2014\u00a0goes too far, and that the possible benefits of kids using AI chatbots outweigh the possible harms.<\/p>\n<p class=\"article-paragraph skip\">Supporters of the bill are disappointed, with some advocates accusing Newsom of caving to Silicon Valley\u2019s aggressive, deep-pocketed lobbying efforts. 
According <a href=\"https:\/\/apnews.com\/article\/california-chatbots-children-safety-ai-newsom-33be4d57d0e2d14553e02a94d9529976\" rel=\"nofollow\">to the <em>Associated Press<\/em><\/a>, the nonprofit Tech Oversight California found that tech companies and their allies spent around $2.5 million in just the first six months of the session trying to prevent AB 1064 and related legislation from being signed into law.<\/p>\n<p class=\"article-paragraph skip\">\u201cThis legislation is desperately needed to protect children and teens from dangerous \u2014 and even deadly \u2014 AI companion chatbots,\u201d said James Steyer, founder and CEO of the tech safety nonprofit Common Sense Media, in a statement. \u201cClearly, Governor Newsom was under tremendous pressure from the Big Tech Lobby to veto this landmark legislation.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cIt is genuinely sad that the big tech companies fought this legislation,\u201d Steyer added, \u201cwhich actually is in the best interest of their industry long-term.\u201d<\/p>\n<p class=\"article-paragraph skip\">News of the veto decision came amid the <a href=\"https:\/\/sd18.senate.ca.gov\/news\/first-nation-ai-chatbot-safeguards-signed-law\" rel=\"nofollow\">passage<\/a> of several other AI-specific regulatory actions in California, <a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billNavClient.xhtml?bill_id=202520260SB243\" rel=\"nofollow\">including SB 243<\/a>, a law introduced by state senator Steve Padilla that requires AI companies to issue pop-ups reminding users that they aren\u2019t human during periods of extended use; mandates that AI companion platforms create \u201cprotocols\u201d for identifying and preventing conversations about self-harm and suicidal ideation; and mandates that companies implement \u201creasonable measures\u201d to prevent chatbots from engaging in \u201csexually explicit conduct\u201d with minors.<\/p>\n<p class=\"article-paragraph skip\">The 
news of the mixed regulatory action in California follows a slew of <a href=\"https:\/\/futurism.com\/ai-chatbots-leaving-trail-dead-teens\">high-profile child welfare and product liability lawsuits<\/a> brought against chatbot companies. Several of the cases involve the <a href=\"https:\/\/futurism.com\/character-ai-google-test-ai-chatbots-kids\">AI companion platform Character.AI<\/a>, which is extremely popular with kids; families across the country argue that the platform and its many thousands of AI chatbots sexually and emotionally abused their minor children, resulting in mental anguish, physical self-harm, and in multiple cases, suicide. The most prominent lawsuit of the bunch centers on a 14-year-old Florida teen named Sewell Setzer III, who took his life in February 2024 following extensive, romantically and sexually intimate conversations with multiple Character.AI chatbots.<\/p>\n<p class=\"article-paragraph skip\">OpenAI is also facing a grim lawsuit regarding the <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" rel=\"nofollow\">death by suicide<\/a> of a 16-year-old in California named <a href=\"https:\/\/futurism.com\/lawsuit-parents-son-suicide-chatgpt\">Adam Raine<\/a>, who had extensive, harrowingly explicit conversations with ChatGPT about suicidal ideation. 
The lawsuit alleges that ChatGPT\u2019s safety guardrails directed Raine, who talked openly about suicidal ideation with the chatbot, to safety resources like the 988 crisis hotline only around 20 percent of the time; elsewhere, it gave Raine specific instructions about suicide methods, and at times discouraged him from speaking to his friends and family about his dark thoughts.<\/p>\n<p class=\"article-paragraph skip\"><strong>More on AI and teens:<\/strong> <em><a href=\"https:\/\/futurism.com\/ai-chatbots-leaving-trail-dead-teens\">AI Chatbots Are Leaving a Trail of Dead Teens<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/artificial-intelligence\/gavin-newsom-vetoes-bill-kids-ai\">Gavin Newsom Vetoes Bill to Protect Kids From Predatory AI<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Content warning: this story includes discussion of self-harm and suicide. 
If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,3841],"tags":[],"class_list":["post-5968","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-ethics"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5968","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=5968"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5968\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=5968"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=5968"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=5968"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}