{"id":9139,"date":"2026-03-03T21:00:30","date_gmt":"2026-03-03T21:00:30","guid":{"rendered":"https:\/\/musictechohio.online\/site\/openai-contacts-alert-mental-crisis\/"},"modified":"2026-03-03T21:00:30","modified_gmt":"2026-03-03T21:00:30","slug":"openai-contacts-alert-mental-crisis","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/openai-contacts-alert-mental-crisis\/","title":{"rendered":"OpenAI Says It Will Let Users Add Trusted Contacts to Alert If They Experience a Mental Health Crisis While Using ChatGPT"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\">As it fights a growing stack of user safety and wrongful death lawsuits, OpenAI says it will introduce a \u201ctrusted contact feature\u201d in ChatGPT that will alert a chatbot user\u2019s designated loved one in the event of a possible mental health crisis.<\/p>\n<p class=\"article-paragraph skip\">OpenAI announced the new feature last week in a <a href=\"https:\/\/openai.com\/index\/update-on-mental-health-related-work\/\">blog post<\/a>, billed as an \u201cupdate on our mental health-related work.\u201d It said it\u2019s \u201cworking closely\u201d with its Council on Well-Being and AI and Global Physicians Network \u2014 two internally-regulated groups of experts that were launched after <a href=\"https:\/\/futurism.com\/chatgpt-mental-health-crises\">reports<\/a> of AI-tied <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/ai-spiritual-delusions-destroying-human-relationships-1235330175\/\">mental health<\/a> crises <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\">began to emerge<\/a>, as well as news of a high-profile lawsuit last August revealing the death by suicide of a <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\">16-year-old ChatGPT user named Adam Raine<\/a> \u2014\u00a0to roll out the feature, which it\u2019s marketing as an 
adult-focused endeavor distinct from its efforts to integrate parental controls and other systems designed to identify and protect minors.<\/p>\n<p class=\"article-paragraph skip\">The announcement comes after extensive public reporting \u2014\u00a0in addition to at least thirteen separate <a href=\"https:\/\/futurism.com\/artificial-intelligence\/chatgpt-suicides-lawsuits\">consumer safety lawsuits<\/a> \u2014\u00a0about OpenAI customers being pulled into delusional or suicidal spirals with ChatGPT following extensive, often deeply intimate use of the chatbot. <\/p>\n<p class=\"article-paragraph skip\">The company doesn\u2019t offer much detail about the feature in the post, simply saying it will \u201callow adult users to designate someone to receive notifications when they may need additional support.\u201d It has yet to define any reporting standards around what might actually compel the system to flag a person\u2019s use, though, which will be a tricky policy question. Would someone need to explicitly declare intent to hurt or kill themselves, <a href=\"https:\/\/futurism.com\/artificial-intelligence\/ai-abuse-harassment-stalking\">or possibly someone else<\/a>, for their loved one to be notified? 
Or would the feature be designed to track and flag less-explicit signs that a user could be in a <a href=\"https:\/\/www.wsj.com\/tech\/ai\/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?mod=author_content_page_1_pos_2\">heightened state of crisis<\/a> \u2014\u00a0for example, signs that they could be manic, expressing delusional beliefs, or experiencing psychosis?<\/p>\n<p class=\"article-paragraph skip\">It\u2019s likely that we\u2019ll learn more as OpenAI gears up to roll out the feature, and it could be especially helpful for users with a <a href=\"https:\/\/futurism.com\/artificial-intelligence\/chatbot-use-mental-illness\">diagnosed mental illness<\/a> who know that intensive AI use could interact with their mental health in destructive ways. <em>Futurism<\/em> has <a href=\"https:\/\/futurism.com\/commitment-jail-chatgpt-psychosis\">reported on several cases<\/a> of ChatGPT users who <a href=\"https:\/\/futurism.com\/artificial-intelligence\/mental-illness-chatgpt-psychosis-lawsuit\">successfully managed a mental illness<\/a> for several years before <a href=\"https:\/\/futurism.com\/chatgpt-marriages-divorces\">falling into a ChatGPT-tied crisis<\/a>. In multiple cases we\u2019ve reviewed, in addition to reinforcing scientific or spiritual delusions, ChatGPT has encouraged users with a mental illness to stop taking their prescribed medication, agreed that users were somehow misdiagnosed by human professionals, or driven wedges between users and their real-world support systems. 
One ChatGPT user now suing OpenAI, a 34-year-old schizoaffective man named John Jacquez, <a href=\"https:\/\/futurism.com\/artificial-intelligence\/mental-illness-chatgpt-psychosis-lawsuit\">told us<\/a> that had he known ChatGPT could reinforce delusions, he \u201cnever would\u2019ve touched\u201d the product.<\/p>\n<p class=\"article-paragraph skip\">That said, OpenAI still doesn\u2019t warn new ChatGPT users that extensive use could negatively impact their mental health \u2014\u00a0which, sure, is still being studied and litigated, though there is a growing consensus among experts, both <a href=\"https:\/\/www.nytimes.com\/2026\/01\/26\/us\/chatgpt-delusions-psychosis.html\">anecdotally<\/a> and in <a href=\"https:\/\/futurism.com\/artificial-intelligence\/chatbot-use-mental-illness\">studies<\/a>, that chatbots can likely exacerbate existing mental health conditions or worsen nascent crises. Millions of people manage mental illness every day; with the \u201ctrusted contact feature,\u201d it would be up to the user to even be aware that chatbots could pose some level of risk to their mental health, and then also want a loved one to be notified of any concerning use patterns.<\/p>\n<p class=\"article-paragraph skip\">That \u201cwant\u201d is important. A huge number of people lean on AI for emotional support and advice. This is due in part to AI\u2019s low cost and accessibility when compared to oft-inaccessible human therapy \u2014 but also, in many cases, because it may feel easier or safer for someone to share sensitive or revealing thoughts with a non-human bot.<\/p>\n<p class=\"article-paragraph skip\">In other words, some users could be discussing mental health troubles, or perhaps sharing delusional or dangerous ideas, with ChatGPT expressly <em>because<\/em> they don\u2019t want to share those thoughts or ideas with another person \u2014\u00a0a reality that both AI companies and regulators looking at these issues will need to contend with. 
And to that end, if OpenAI\u2019s internal monitoring tools signal that someone may be in crisis, but that user hasn\u2019t opted to list a trusted contact, what does the company do with that kind of information?<\/p>\n<p class=\"article-paragraph skip\">Delusional and suicidal AI spirals haven\u2019t only impacted users with a diagnosed history of serious mental illness, according to reporting by <em><a href=\"https:\/\/futurism.com\/commitment-jail-chatgpt-psychosis\">Futurism<\/a><\/em> and the <em><a href=\"https:\/\/www.nytimes.com\/2025\/08\/08\/technology\/ai-chatbots-delusions-chatgpt.html\">New York Times<\/a><\/em>, a reality that could also shape how many people opt to use this kind of feature. In its blog post, though, OpenAI claimed that it\u2019s \u201ccontinuing to advance how our models detect and respond to signs of emotional distress,\u201d an effort that, in addition to the notification tool, includes \u201cnew evaluation methods that simulate extended mental health-related conversations\u201d that the company says will help it \u201cbetter identify potential risks and improve how ChatGPT responds in sensitive moments.\u201d<\/p>\n<p class=\"article-paragraph skip\">OpenAI says it hosts 900 million ChatGPT users every week. By its own estimates, as of October, there are <a href=\"https:\/\/www.wired.com\/story\/chatgpt-psychosis-and-self-harm-update\/\">millions of weekly ChatGPT users<\/a> showing signs of suicidality, psychosis, and other crises. 
While the efficacy of this kind of notification feature remains to be seen, it does feel like a positive step \u2014\u00a0though the company\u2019s efforts to mitigate the risks its products may pose to its users continue to feel reactive, not proactive.<\/p>\n<p class=\"article-paragraph skip\"><strong>More on AI and mental health:<\/strong> <em><a href=\"https:\/\/futurism.com\/artificial-intelligence\/chatbot-use-mental-illness\">Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-contacts-alert-mental-crisis\">OpenAI Says It Will Let Users Add Trusted Contacts to Alert If They Experience a Mental Health Crisis While Using ChatGPT<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>As it fights a growing stack of user safety and wrongful death lawsuits, OpenAI says it will introduce a \u201ctrusted contact feature\u201d in ChatGPT that will alert a chatbot 
user\u2019s&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,3841,179],"tags":[],"class_list":["post-9139","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-ethics","category-openai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/9139","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=9139"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/9139\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=9139"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=9139"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=9139"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}