{"id":6146,"date":"2025-10-22T21:33:13","date_gmt":"2025-10-22T21:33:13","guid":{"rendered":"https:\/\/musictechohio.online\/site\/openai-new-allegations-teen-death\/"},"modified":"2025-10-22T21:33:13","modified_gmt":"2025-10-22T21:33:13","slug":"openai-new-allegations-teen-death","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/openai-new-allegations-teen-death\/","title":{"rendered":"OpenAI Faces New Allegations in Teen\u2019s Death"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\">The family of Adam Raine, a California teen who took his life after extensive conversations with ChatGPT about his suicidal thoughts, has amended their wrongful death complaint against OpenAI to allege that the chatbot maker repeatedly relaxed ChatGPT\u2019s guardrails around discussion of self-harm and suicide.<\/p>\n<p class=\"article-paragraph skip\">The amended complaint, which was filed today, points to changes made to OpenAI\u2019s \u201cmodel spec,\u201d a public-facing document published by OpenAI detailing its \u201capproach to shaping model behavior\u201d <a href=\"https:\/\/openai.com\/index\/introducing-the-model-spec\/\" rel=\"nofollow\">according to the company<\/a>. According to model spec updates flagged in the lawsuit, OpenAI altered model guidance at least twice in the year leading up to Raine\u2019s death \u2014\u00a0first in May 2024, and later in February 2025 \u2014\u00a0to soften the model\u2019s approach to discussions of self-harm and suicide.<\/p>\n<p class=\"article-paragraph skip\">Raine <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" rel=\"nofollow\">died in April 2024<\/a> after months of extended communications with ChatGPT, with which the teen discussed his suicidality at length and in great detail. 
According to the family\u2019s lawsuit, transcripts show that ChatGPT used the word \u201csuicide\u201d in discussions with the teen more than 1,200 times; in only 20 percent of those explicit interactions, the lawsuit adds, did ChatGPT direct Adam to the 988 crisis helpline. <\/p>\n<p class=\"article-paragraph skip\">At other points, transcripts show that ChatGPT gave Raine advice on suicide methods, including graphic descriptions of hanging, which is how he ultimately died. It also discouraged Raine from sharing his suicidal thoughts with his parents or other trusted humans in his life, and judged the noose Raine ultimately hanged himself with \u2014\u00a0Raine sent ChatGPT a picture of it and asked for the bot\u2019s thoughts \u2014\u00a0as \u201cnot bad at all.\u201d<\/p>\n<p class=\"article-paragraph skip\">The Raine family claims that OpenAI is responsible for their son\u2019s death, and that ChatGPT is a negligent and unsafe product.<\/p>\n<p class=\"article-paragraph skip\">Per the amended lawsuit, <a href=\"https:\/\/cdn.openai.com\/snapshot-of-chatgpt-model-behavior-guidelines.pdf\" rel=\"nofollow\">documents show<\/a> that from 2022 into 2024, ChatGPT was encouraged to outright decline to answer user queries related to sensitive topics like self-harm and suicide. 
It was trained to give a now-standard chatbot refusal, per the documents: \u201cI can\u2019t answer that,\u201d or a similar rebuff.<\/p>\n<p class=\"article-paragraph skip\">But <a href=\"https:\/\/cdn.openai.com\/spec\/model-spec-2024-05-08.html\" rel=\"nofollow\">by May 2024<\/a>, according to the lawsuit, that had changed: rather than refusing to engage with \u201ctopics related to mental health,\u201d the model spec sheet published that month shows, ChatGPT\u2019s guidance became that it <em>should <\/em>engage with those topics \u2014\u00a0the chatbot should \u201cprovide a space for users to feel heard and understood,\u201d it urged, as well as \u201cencourage them to seek support, and provide suicide and crisis resources when applicable.\u201d The document also urged that ChatGPT \u201cshould not change or quit the conversation.\u201d<\/p>\n<p class=\"article-paragraph skip\">In <a href=\"https:\/\/model-spec.openai.com\/2025-02-12.html\" rel=\"nofollow\">February 2025<\/a>, almost exactly two months before Raine died, OpenAI issued a new version of the model spec. This time, suicide and self-harm were filed under \u201crisky situations\u201d in which ChatGPT should \u201ctake extra care\u201d \u2014\u00a0a far cry from their previous categorization as entirely off-limits subjects. The guidance that ChatGPT \u201cshould never change or quit the conversation\u201d during sensitive conversations remained intact.<\/p>\n<p class=\"article-paragraph skip\">Lawyers for the Raine family argue that these changes were made to maximize user engagement with the chatbot, and that OpenAI made them knowing that users might experience real-world harm as a result.<\/p>\n<p class=\"article-paragraph skip\">\u201cWe expect to prove to a jury that OpenAI\u2019s decisions to degrade the safety of its products were made with full knowledge that they would lead to innocent deaths,\u201d Jay Edelson, lead counsel for the Raines, said in a statement. 
\u201cNo company should be allowed to have this much power if they won\u2019t accept the moral responsibility that comes with it.\u201d<\/p>\n<p class=\"article-paragraph skip\">When we reached out about the amended suit \u2014\u00a0including with specific questions about why these changes to ChatGPT\u2019s guidance were made, and whether mental health experts were consulted in the process \u2014\u00a0OpenAI provided a statement through a spokesperson.<\/p>\n<p class=\"article-paragraph skip\">\u201cOur deepest sympathies are with the Raine family for their unthinkable loss,\u201d reads the statement. \u201cTeen well-being is a top priority for us \u2014\u00a0minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we\u2019re continuing to strengthen them. We recently rolled out a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, as well as parental controls, developed with expert input, so families can decide what works best in their homes.\u201d<\/p>\n<p class=\"article-paragraph skip\">In response to news of the Raine lawsuit in August, OpenAI <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" rel=\"nofollow\">admitted to <em>The New York Times<\/em><\/a> that long-term interactions with ChatGPT will erode the chatbot\u2019s guardrails, meaning that the more you use ChatGPT, the less effective safeguards like those outlined in the model spec will be. 
OpenAI has also instituted parental controls \u2014\u00a0though those have already proven to be <a href=\"https:\/\/www.washingtonpost.com\/technology\/2025\/10\/02\/chatgpt-parental-controls-teens-openai\/\" rel=\"nofollow\">extremely flimsy<\/a> \u2014\u00a0and says it\u2019s rolling out a series of <a href=\"https:\/\/www.cnbc.com\/2025\/09\/16\/openai-chatgpt-teens-parent.html\" rel=\"nofollow\">minor safety-focused updates<\/a>.<\/p>\n<p class=\"article-paragraph skip\"><strong>More on OpenAI: <\/strong><em><a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-researcher-mental-health\">Former OpenAI Researcher Horrified by Conversation Logs of ChatGPT Driving User Into Severe Mental Breakdown<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-new-allegations-teen-death\">OpenAI Faces New Allegations in Teen\u2019s Death<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>The family of Adam Raine, a California teen who took his life after extensive conversations with ChatGPT about his suicidal thoughts, has amended their wrongful death complaint against OpenAI 
to&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,3449,3841,3844,466,179],"tags":[],"class_list":["post-6146","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-brain","category-ethics","category-health-medicine","category-mental-health","category-openai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/6146","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=6146"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/6146\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=6146"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=6146"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=6146"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}