{"id":6142,"date":"2025-10-22T13:10:29","date_gmt":"2025-10-22T13:10:29","guid":{"rendered":"https:\/\/musictechohio.online\/site\/openai-researcher-mental-health\/"},"modified":"2025-10-22T13:10:29","modified_gmt":"2025-10-22T13:10:29","slug":"openai-researcher-mental-health","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/openai-researcher-mental-health\/","title":{"rendered":"Former OpenAI Researcher Horrified by Conversation Logs of ChatGPT Driving User Into Severe Mental Breakdown"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\">When former OpenAI safety researcher Stephen Adler read <a href=\"https:\/\/www.nytimes.com\/2025\/08\/08\/technology\/ai-chatbots-delusions-chatgpt.html\" rel=\"nofollow\">the <em>New York Times<\/em> story<\/a> about Allan Brooks, a Canadian father who had been slowly driven into delusions by obsessive conversations with ChatGPT, he was stunned. The article detailed Brooks\u2019 ordeal as he followed the chatbot down a deep rabbit hole, becoming convinced he had discovered a new kind of math \u2014 which, if true, had grave implications for mankind.<\/p>\n<p class=\"article-paragraph skip\">Brooks began neglected his own health, forgoing food and sleep in order to spend more time talking with the chatbot and emailing safety officials throughout North America about his dangerous findings. When Brooks started to suspect he was being led astray, it was another chatbot, Google\u2019s Gemini, which ultimately set him straight, leaving the mortified father of three to contemplate how he\u2019d so thoroughly lost his grip.<\/p>\n<p class=\"article-paragraph skip\">Horrified by the story, Adler took it upon himself to study the nearly one-million word exchange Brooks had logged with ChatGPT. The result was a lengthy <a href=\"https:\/\/stevenadler.substack.com\/p\/practical-tips-for-reducing-chatbot\" rel=\"nofollow\">AI safety report<\/a> chock full of simple lessons for AI companies, which the analyst detailed in a <a href=\"https:\/\/fortune.com\/2025\/10\/19\/openai-chatgpt-researcher-ai-psychosis-one-million-words-steven-adler\/\" rel=\"nofollow\">new interview with <em>Fortune<\/em><\/a>.<\/p>\n<p class=\"article-paragraph skip\">\u201cI put myself in the shoes of someone who doesn\u2019t have the benefit of having worked at one of these companies for years, or who maybe has less context on AI systems in general,\u201d Adler told the magazine.<\/p>\n<p class=\"article-paragraph skip\">One of the biggest recommendations Adler makes is for tech companies to stop misleading users about AI\u2019s abilities. \u201cThis is one of the most painful parts for me to read,\u201d the researchers writes: \u201cAllan tries to file a report to OpenAI so that they can fix ChatGPT\u2019s behavior for other users. In response, ChatGPT makes a bunch of false promises.\u201d<\/p>\n<p class=\"article-paragraph skip\">When the Canadian man tried to report his ordeal to OpenAI, ChatGPT assured him it was \u201cgoing to escalate this conversation internally right now for review by OpenAI.\u201d Brooks \u2014 who <a href=\"https:\/\/futurism.com\/chatgpt-chabot-severe-delusions\">maintained skepticism<\/a> throughout his ordeal \u2014 asked the chatbot for proof. 
In response, ChatGPT told him that the conversation had “automatically trigger[ed] a critical internal system-level moderation flag,” adding that it would “trigger that manually as well.”

In reality, nothing had happened. As Adler writes, ChatGPT has no ability to trigger a human review, and can’t access the OpenAI system that flags problematic conversations to the company. It was a monstrous thing for the software to lie about, and one that shook Adler’s own confidence in his understanding of the chatbot.

“ChatGPT pretending to self-report and really doubling down on it was very disturbing and scary to me in the sense that I worked at OpenAI for four years,” the researcher told Fortune. “I understood when reading this that it didn’t really have this ability, but still, it was just so convincing and so adamant that I wondered if it really did have this ability now and I was mistaken.”

Adler also advised OpenAI to pay more attention to its support teams, specifically by staffing them with experts who are trained to handle the kind of traumatic experience Brooks had tried, to no avail, to report to the company.

One of the biggest suggestions is also the simplest: OpenAI should use its own internal safety tools (https://stevenadler.substack.com/p/chatbot-psychosis-what-do-the-data), which he says could easily have flagged that the conversation was taking a troubling and likely dangerous turn.

“The delusions are common enough and have enough patterns to them that I definitely don’t think they’re a glitch,” Adler told Fortune. “Whether they exist in perpetuity, or the exact amount of them that continue, it really depends on how the companies respond to them and what steps they take to mitigate them.”

More on OpenAI: Two Months Ago, Sam Altman Was Boasting That OpenAI Didn’t Have to Do Sexbots. Now It’s Doing Sexbots (https://futurism.com/future-society/sam-altman-adult-ai-reversal)