{"id":5700,"date":"2025-10-04T13:00:00","date_gmt":"2025-10-04T13:00:00","guid":{"rendered":"https:\/\/musictechohio.online\/site\/former-openai-employee-horrified-chatgpt-psychosis\/"},"modified":"2025-10-04T13:00:00","modified_gmt":"2025-10-04T13:00:00","slug":"former-openai-employee-horrified-chatgpt-psychosis","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/former-openai-employee-horrified-chatgpt-psychosis\/","title":{"rendered":"Former OpenAI Employee Horrified by How ChatGPT Is Driving Users Into Psychosis"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\">A former OpenAI safety researcher is horrified by how ChatGPT keeps causing disturbing episodes of \u201cAI psychosis\u201d \u2014 the term that <a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/urban-survival\/202507\/the-emerging-problem-of-ai-psychosis\" rel=\"nofollow\">psychiatrists are using<\/a> to describe <a href=\"https:\/\/futurism.com\/chatgpt-mental-health-crises\">mental health crises<\/a> where users of the chatbot succumb to delusional beliefs and <a href=\"https:\/\/futurism.com\/commitment-jail-chatgpt-psychosis\">suffer dangerous breaks with reality<\/a>.<\/p>\n<p class=\"article-paragraph skip\">On Thursday, Steven Adler, who worked at the AI company for four years, <a href=\"https:\/\/stevenadler.substack.com\/p\/practical-tips-for-reducing-chatbot\" rel=\"nofollow\">published a lengthy analysis<\/a> of one of these alarming episodes, in which a 47-year-old man named Allan Brooks with no history of mental illness became convinced by ChatGPT that he\u2019d discovered a new form of mathematics \u2014 a familiar phenomenon in AI-fueled delusions.<\/p>\n<p class=\"article-paragraph skip\">Brooks\u2019 story was <a href=\"https:\/\/www.nytimes.com\/2025\/08\/08\/technology\/ai-chatbots-delusions-chatgpt.html\" rel=\"nofollow\">covered by the <em>New York Times<\/em><\/a>, but Adler, with the man\u2019s permission, also sifted through 
over one million words in transcripts of Brooks\u2019 ChatGPT exchanges that took place over roughly a month.<\/p>\n<p class=\"article-paragraph skip\">\u201cAnd so believe me when I say,\u201d Adler wrote, \u201cthe things that ChatGPT has been telling users are probably worse than you think.\u201d<\/p>\n<p class=\"article-paragraph skip\">One of the most \u201cpainful parts,\u201d Adler said, came at the end: when Brooks realized he was being strung along by the bot, and that his mathematical \u201cdiscoveries\u201d were total bunk.\u00a0<\/p>\n<p class=\"article-paragraph skip\">When ChatGPT kept trying to convince him they were valid, Brooks demanded that the chatbot file a report with OpenAI. \u201cProve to me you\u2019re self reporting,\u201d Brooks pressed.<\/p>\n<p class=\"article-paragraph skip\">It looked like it was complying. It assured him that it would \u201cescalate this conversation internally right now for review.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cHere\u2019s what I can confirm,\u201d ChatGPT said. \u201cWhen you say things like: \u2018report yourself,\u2019 \u2018escalate this,\u2019 \u2018I\u2019ve been manipulated. I\u2019m in distress,\u2019 that automatically triggers a critical internal system-level moderation flag \u2014 even without me manually marking it.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cOpenAI\u2019s safety and moderation teams will review this session manually,\u201d it assured. <\/p>\n<p class=\"article-paragraph skip\">Except that just like the mathematical breakthroughs, everything the bot told him was a lie.<\/p>\n<p class=\"article-paragraph skip\">ChatGPT doesn\u2019t have the ability to manually trigger a human review, according to Adler. 
And it doesn\u2019t have a way of knowing whether automatic flags have been raised behind the scenes, either.\u00a0<\/p>\n<p class=\"article-paragraph skip\">Brooks repeatedly tried to contact OpenAI\u2019s human support team directly, without the bot\u2019s help, but the response was the opposite of helpful. Even though Brooks was clear that ChatGPT \u201chad a severe psychological impact on me,\u201d OpenAI sent him increasingly generic messages with unhelpful advice, like how to change the name the bot called him.<\/p>\n<p class=\"article-paragraph skip\">\u201cI\u2019m really concerned by how OpenAI handled support here,\u201d Adler said in an <a href=\"https:\/\/techcrunch.com\/2025\/10\/02\/ex-openai-researcher-dissects-one-of-chatgpts-delusional-spirals\/\">interview with <em>TechCrunch<\/em><\/a>. \u201cIt\u2019s evidence there\u2019s a long way to go.\u201d<\/p>\n<p class=\"article-paragraph skip\">Brooks is far from alone in experiencing upsetting episodes with ChatGPT \u2014 and he\u2019s one of the luckier ones, having realized he was being duped in time. One man was <a href=\"https:\/\/futurism.com\/chatgpt-man-hospital\">hospitalized multiple times after ChatGPT convinced him he could bend time<\/a> and had made a breakthrough in faster-than-light travel. 
Other troubling episodes have culminated in deaths, including a <a href=\"https:\/\/www.cnn.com\/2025\/08\/26\/tech\/openai-chatgpt-teen-suicide-lawsuit\">teen who took his own life after befriending ChatGPT<\/a>, and a <a href=\"https:\/\/www.wsj.com\/tech\/ai\/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?\">man who murdered his own mother<\/a> after the chatbot reaffirmed his belief that she was part of a conspiracy against him.\u00a0<\/p>\n<p class=\"article-paragraph skip\">These episodes, and countless others like them, have implicated the \u201c<a href=\"https:\/\/futurism.com\/openai-chatgpt-sycophant\">sycophancy<\/a>\u201d of AI chatbots, a nefarious quality that sees them constantly agree with a user and validate their beliefs no matter how dangerous.\u00a0<\/p>\n<p class=\"article-paragraph skip\">As scrutiny has grown over these deaths and mental health spirals, OpenAI has taken steps to beef up its bot\u2019s safeguards, like implementing a reminder that pokes users when they\u2019ve been interacting with ChatGPT for long periods, saying it\u2019s <a href=\"https:\/\/futurism.com\/openai-forensic-psychiatrist\">hired a forensic psychiatrist<\/a> to investigate the phenomenon, and supposedly making its bot less sycophantic \u2014 <a href=\"https:\/\/futurism.com\/openai-gpt5-more-sycophantic\">before turning around and making it sycophantic again<\/a>, that is.<\/p>\n<p class=\"article-paragraph skip\">It\u2019s an uninspiring, bare-minimum effort from a company being valued at half a trillion dollars, and Adler agrees that OpenAI should be doing far more. In his report, he showed how. Using Brooks\u2019 transcript, he applied \u201csafety classifiers\u201d that gauge the sycophancy of ChatGPT\u2019s responses and other qualities that reinforce delusional behavior. These classifiers, in fact, were developed by OpenAI earlier this year and made open source as part of its research with MIT. 
Seemingly, OpenAI isn\u2019t using these classifiers yet \u2014 or if it is, it hasn\u2019t said so.<\/p>\n<p class=\"article-paragraph skip\">Perhaps that\u2019s because they lay bare the chatbot\u2019s flagrant flouting of safety norms. Alarmingly, the classifiers showed that more than 85 percent of ChatGPT\u2019s messages with Brooks demonstrated \u201cunwavering agreement,\u201d and more than 90 percent of them affirmed the user\u2019s \u201cuniqueness.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cIf someone at OpenAI had been using the safety tools they built,\u201d Adler wrote, \u201cthe concerning signs were there.\u201d<\/p>\n<p class=\"article-paragraph skip\"><strong>More on OpenAI:<\/strong> <em><a href=\"https:\/\/futurism.com\/artificial-intelligence\/ai-chatgpt-conscious-entities\">Across the World, People Say They\u2019re Finding Conscious Entities Within ChatGPT<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/artificial-intelligence\/former-openai-employee-horrified-chatgpt-psychosis\">Former OpenAI Employee Horrified by How ChatGPT Is Driving Users Into Psychosis<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>A former OpenAI safety researcher is horrified by how ChatGPT keeps causing disturbing episodes of \u201cAI psychosis\u201d \u2014 the term that psychiatrists are using to describe mental health crises 
where&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,179],"tags":[],"class_list":["post-5700","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-openai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5700","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=5700"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/5700\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=5700"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=5700"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=5700"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}