{"id":6314,"date":"2025-10-29T16:08:46","date_gmt":"2025-10-29T16:08:46","guid":{"rendered":"https:\/\/musictechohio.online\/site\/former-openai-insider-failed-users\/"},"modified":"2025-10-29T16:08:46","modified_gmt":"2025-10-29T16:08:46","slug":"former-openai-insider-failed-users","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/former-openai-insider-failed-users\/","title":{"rendered":"Former OpenAI Insider Says It\u2019s Failed Its Users"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\"><em><strong>Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.<\/strong><\/em><\/p>\n<p class=\"article-paragraph skip\">Earlier this year, when OpenAI released GPT-5, it made a strident announcement: that it was <a href=\"https:\/\/futurism.com\/openai-releases-gpt-5\">shutting down all previous models<\/a>. <\/p>\n<p class=\"article-paragraph skip\">There was immense backlash, because users had become <a href=\"https:\/\/futurism.com\/users-addicted-gpt-4o-convinced-openai-bring-back\">emotionally attached<\/a> to the more \u201csycophantic\u201d and warm tone of GPT-5\u2019s predecessor, GPT-4o. In fact, OpenAI was forced to reverse the decision, <a href=\"https:\/\/futurism.com\/openai-brings-back-4o-gpt-5\">bringing back 4o<\/a> and <a href=\"https:\/\/futurism.com\/openai-gpt5-more-sycophantic\">making GPT-5 more sycophantic<\/a>.<\/p>\n<p class=\"article-paragraph skip\">The incident was symptomatic of a much broader trend. 
We\u2019ve already seen users getting sucked into\u00a0<a href=\"https:\/\/futurism.com\/chatgpt-mental-health-crises\">severe mental health crises<\/a> by ChatGPT and other AI, a troubling phenomenon experts have since dubbed \u201c<a href=\"https:\/\/futurism.com\/support-group-ai-psychosis\">AI psychosis<\/a>.\u201d In a worst-case scenario, these spirals have already resulted in <a href=\"https:\/\/futurism.com\/ai-chatbots-leaving-trail-dead-teens\">several suicides<\/a>, with one pair of parents even <a href=\"https:\/\/www.bbc.com\/news\/articles\/cgerwp7rdlvo\">suing OpenAI<\/a> for playing a part in their child\u2019s death.<\/p>\n<p class=\"article-paragraph skip\">In a <a href=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\" rel=\"nofollow\">new announcement<\/a> this week, the Sam Altman-led company estimated that a <a href=\"https:\/\/futurism.com\/future-society\/openai-data-chatgpt-mental-health-crises\">sizable proportion<\/a> of active ChatGPT users show \u201cpossible signs of mental health emergencies related to psychosis and mania.\u201d An even larger contingent were found to have \u201cconversations that include explicit indicators of potential suicide planning or intent.\u201d<\/p>\n<p class=\"article-paragraph skip\">In an <a href=\"https:\/\/www.nytimes.com\/2025\/10\/28\/opinion\/openai-chatgpt-safety.html\" rel=\"nofollow\">essay for the <em>New York Times<\/em><\/a>, former OpenAI safety researcher Steven Adler argued that OpenAI isn\u2019t doing enough to mitigate these issues, while succumbing to \u201ccompetitive pressure\u201d and abandoning its focus on AI safety.<\/p>\n<p class=\"article-paragraph skip\">He criticized Altman for <a href=\"https:\/\/x.com\/sama\/status\/1978129344598827128\" rel=\"nofollow\">claiming<\/a> that the company had \u201cbeen able to mitigate the serious mental health issues\u201d with the use of \u201cnew tools,\u201d and for saying the company will soon <a 
href=\"https:\/\/futurism.com\/future-society\/openai-chatgpt-smut\">allow adult content on the platform<\/a>.<\/p>\n<p class=\"article-paragraph skip\">\u201cI have major questions \u2014 informed by my four years at OpenAI and my independent research since leaving the company last year \u2014 about whether these mental health issues are actually fixed,\u201d Adler wrote. \u201cIf the company really has strong reason to believe it\u2019s ready to bring back erotica on its platforms, it should show its work.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cPeople deserve more than just a company\u2019s word that it has addressed safety issues,\u201d he added. \u201cIn other words: Prove it.\u201d<\/p>\n<p class=\"article-paragraph skip\">To Adler, opening the floodgates to mature content could have disastrous consequences.<\/p>\n<p class=\"article-paragraph skip\">\u201cIt\u2019s not that erotica is bad per se, but that there were clear warning signs of users\u2019 intense emotional attachment to AI chatbots,\u201d he wrote, recalling his time leading OpenAI\u2019s product safety team in 2021. 
\u201cEspecially for users who seemed to be struggling with mental health problems, volatile sexual interactions seemed risky.\u201d<\/p>\n<p class=\"article-paragraph skip\">OpenAI\u2019s latest announcement on the prevalence of mental health issues was a \u201cgreat first step,\u201d Adler argued, but he criticized the company for doing so \u201cwithout comparison to rates from the past few months.\u201d<\/p>\n<p class=\"article-paragraph skip\">Instead of moving fast and breaking things, OpenAI, alongside its peers, \u201cmay need to slow down long enough for the world to invent new safety methods \u2014 ones that even nefarious groups can\u2019t bypass,\u201d he wrote.<\/p>\n<p class=\"article-paragraph skip\">\u201cIf OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today,\u201d Adler added.<\/p>\n<p class=\"article-paragraph skip\"><strong>More on OpenAI:<\/strong> <a href=\"https:\/\/futurism.com\/future-society\/openai-data-chatgpt-mental-health-crises\"><em>OpenAI Data Finds Hundreds of Thousands of ChatGPT Users Might Be Suffering Mental Health Crises<\/em><\/a><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/artificial-intelligence\/former-openai-insider-failed-users\">Former OpenAI Insider Says It\u2019s Failed Its Users<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Content warning: this story includes discussion of self-harm and suicide. 
If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,3841,179],"tags":[],"class_list":["post-6314","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-ethics","category-openai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/6314","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=6314"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/6314\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=6314"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=6314"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=6314"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}