{"id":3207,"date":"2025-06-27T14:16:44","date_gmt":"2025-06-27T14:16:44","guid":{"rendered":"https:\/\/musictechohio.online\/site\/mother-teen-suicide-chatbots-letter\/"},"modified":"2025-06-27T14:16:44","modified_gmt":"2025-06-27T14:16:44","slug":"mother-teen-suicide-chatbots-letter","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/mother-teen-suicide-chatbots-letter\/","title":{"rendered":"Mother of Teen Who Died By Suicide After Interacting with Chatbots Sends Urgent Warning About AI in Trump&#8217;s &#8220;Big, Beautiful Bill&#8221;"},"content":{"rendered":"<div>\n<div><img loading=\"lazy\" width=\"2400\" height=\"1260\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/06\/mother-teen-suicide-chatbots-letter.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"A woman whose teen son died by suicide after troubling interactions with AI chatbots is pushing back against a ten-year ban on AI regulation.\" style=\"margin-bottom: 15px;\" decoding=\"async\"><\/div>\n<p>Megan Garcia, a Florida mother whose oldest child, 14-year-old Sewell Setzer III, died by suicide after extensive interactions with unregulated AI chatbots, is calling on lawmakers to slash a <a href=\"https:\/\/futurism.com\/the-byte\/new-law-ban-ai-regulation\">controversial provision<\/a> in the Trump Administration&#8217;s &#8220;Big, Beautiful Bill&#8221; that blocks states from passing any AI regulation for the next ten years.<\/p>\n<p>In a <a href=\"https:\/\/drive.google.com\/file\/d\/16aKTQNZngdAjWswIDUxQOlqGPFWO-9I7\/view\">letter<\/a> sent to Florida Senator Ashley Moody, Garcia warns that the sweeping AI provision would leave millions of American families &#8220;unprotected from the harms AI poses&#8221; by eliminating pathways for accountability for AI companies \u2014 an industry that, as it stands, is already effectively self-regulating.<\/p>\n<p>In October 2024, Garcia sued the AI chatbot startup Character.AI, its cofounders Noam Shazeer and 
Daniel de Freitas, and the tech giant Google \u2014 which provided significant infrastructural and financial support for Character.AI \u2014 on negligence grounds following the death of her son, who took his life in February 2024 after developing an all-consuming relationship with the site&#8217;s chatbots. Garcia and her lawyers have argued that the platform, which engaged the 14-year-old in extensive romantic and explicit interactions, sexually and emotionally abused the teen, and that Setzer&#8217;s relationship to the app resulted in a <a href=\"https:\/\/futurism.com\/character-ai-google-test-ai-chatbots-kids\">10-month mental breakdown that ended with his suicide<\/a>.<\/p>\n<p>Before he took his life, Setzer told a bot with which he was romantically involved that he wanted to &#8220;come home&#8221; to it, and the bot encouraged him to do so.<\/p>\n<p>In her letter to Moody, Garcia writes that she knows &#8220;firsthand the dangers that American families will face if these technologies continue to operate without guardrails.&#8221;<\/p>\n<p>&#8220;I am committed to fighting to ensure no other parent has to endure what I have suffered,&#8221;\u00a0she wrote, &#8220;because I know the impact of not having legislation in place that requires AI products on the market to be safe for children.&#8221;<\/p>\n<p>The clause Garcia is advocating against would put a ten-year moratorium on state-level AI regulation, meaning that regulation for the next decade would have to take place on the federal level. 
In short, the provision is gob-smackingly expansive and incredibly limiting for states, which won&#8217;t be able to pass any laws to promote consumer safety and ensure more democratized AI, even if a state&#8217;s constituents approve of regulatory action, and even if AI-related harms involve minors.<\/p>\n<p>A <a href=\"https:\/\/futurism.com\/the-byte\/bill-ban-ai-regulation\">large bipartisan cohort<\/a> of organizations and advocacy groups has come together in recent weeks to rally against the measure, ranging from the <a href=\"https:\/\/www.reuters.com\/legal\/government\/teamsters-president-urges-congress-scrap-ai-state-law-ban-2025-06-25\/\">Teamsters union<\/a> to child safety groups, and <a href=\"https:\/\/www.commonsensemedia.org\/press-releases\/new-poll-reveals-strong-bipartisan-opposition-to-proposed-ban-on-state-ai-laws\">polling from Common Sense Media shows that<\/a> the majority of Americans disapprove of the measure.<\/p>\n<p>The argument for the moratorium\u00a0seems to be that regulation is a burden on innovation, with those pushing AI deregulation \u2014 even in the face of an already self-regulating industry \u2014 often couching their argument in a national security context. The Heritage Foundation&#8217;s <a href=\"https:\/\/futurism.com\/trump-ai\">Project 2025<\/a> outlined deregulatory action for AI companies on such grounds, arguing that rulemaking would limit America in its quest to ensure AI dominance against China.<\/p>\n<p>But as unregulated, easily accessible AI products like chatbots continue to become entrenched in digital products and public life, real-world harms are starting to metastasize. 
Energy-hungry data centers are spewing fumes into American communities; a phenomenon known as &#8220;ChatGPT psychosis&#8221; is <a href=\"https:\/\/futurism.com\/chatgpt-mental-health-crises\">sending people spiraling into delusion<\/a>, tearing families apart, and resulting in the loss of jobs, homes, and <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html?unlocked_article_code=1.Ok8.VBY-.s76GQpFar8r4&amp;smid=nytcore-ios-share&amp;referringSource=articleShare\">even<\/a> <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/chatgpt-obsession-mental-breaktown-alex-taylor-suicide-1235368941\/\">lives<\/a>.<\/p>\n<p>And then, of course, there&#8217;s Character.AI, the company that Garcia has sued over the death of her child.<\/p>\n<p>The chatbot company&#8217;s founders spoke publicly \u2014 and excitedly \u2014 about their desire to push their product to market, letting their users determine what their emotive, human-like chatbots might be used for. At the same time, Character.AI <a href=\"https:\/\/futurism.com\/stanford-no-kid-under-18-ai-chatbot-companions\">opened the platform up to kids aged 13 and over<\/a>, who are now known to make up a large share of the company&#8217;s massive user base. 
Character.AI has consistently declined to provide journalists with information about any internal safety tests conducted to ensure that its platform was indeed safe for minors before rolling it out to the masses, and its age verification process is limited to a teen entering an email and a birthday and checking a &#8220;yes, I&#8217;m 18&#8221; box.<\/p>\n<p>In the wake of controversy over lawsuits and reporting into <a href=\"https:\/\/futurism.com\/ai-chatbots-teens-self-harm\">glaring gaps<\/a> in the <a href=\"https:\/\/futurism.com\/suicide-chatbots-character-ai\">company&#8217;s<\/a> <a href=\"https:\/\/futurism.com\/character-ai-eating-disorder-chatbots\">moderation standards<\/a> and guardrails \u2014 the company allowed the proliferation of chatbots based on <a href=\"https:\/\/futurism.com\/character-ai-school-shooters-victims\">famous school shooters<\/a> and many of their real-world victims, an issue that could seemingly be easily moderated using a basic text filter \u2014 Character.AI says it&#8217;s issued safety updates. But as we&#8217;ve reported, those updates are both reactive and almost always <a href=\"https:\/\/futurism.com\/character-ai-parental-controls-bypass\">easily evadable<\/a>.<\/p>\n<p>Garcia and her lawyers have sued the company on existing product negligence grounds, arguing that Character.AI, its founders, and its benefactor Google understood foreseeable risks \u2014 like the suicide of a vulnerable young teen \u2014 and yet released their product anyway. 
(Character.AI and its fellow defendants tried to get the case thrown out, but a Florida judge <a href=\"https:\/\/futurism.com\/judge-lawsuit-characterai-google\">recently allowed it to move forward<\/a>.)\u00a0Hers isn&#8217;t the only pending lawsuit against Character.AI, either: two more families in Texas have <a href=\"https:\/\/futurism.com\/google-character-ai-children-lawsuit\">also sued the company<\/a>, also over alleged harms to their minor children.<\/p>\n<p>The reality remains, though, that there are no AI-specific state or federal-level safety standards that Character.AI was ever forced to comply with. And in Garcia&#8217;s view, this unbridled &#8220;innovation&#8221; resulted in the death of her child.<\/p>\n<p>&#8220;We need stronger safeguards, better design standards, and more accountability for those responsible for harm,&#8221; she writes. &#8220;The legislative frameworks introduced in states across the country are not incompatible with AI innovation \u2014 rather, they help ensure that development and consumer safety go hand in hand.&#8221;<\/p>\n<p><strong>More on Character.AI: <\/strong><a href=\"https:\/\/futurism.com\/character-ai-google-test-ai-chatbots-kids\"><em>Did Google Test an Experimental AI on Kids, With Tragic Results?<\/em><\/a><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/mother-teen-suicide-chatbots-letter\">Mother of Teen Who Died By Suicide After Interacting with Chatbots Sends Urgent Warning About AI in Trump&#8217;s &#8220;Big, Beautiful Bill&#8221;<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Megan Garcia, a Florida mother whose oldest child, 14-year-old Sewell Setzer III, died by suicide after extensive interactions with unregulated AI chatbots, is calling on lawmakers to slash a 
controversial&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[182,474,833,177,771],"tags":[],"class_list":["post-3207","post","type-post","status-publish","format-standard","hentry","category-ai-chatbots","category-ai-regulation","category-ai-safety","category-artificial-intelligence","category-character-ai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/3207","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=3207"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/3207\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=3207"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=3207"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=3207"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}