
Mental health experts are continuing to sound alarm bells about users of AI chatbots spiraling into severe mental health crises characterized by paranoia and delusions, a trend they’ve started to refer to as “AI psychosis.”
On Monday, University of California, San Francisco research psychiatrist Keith Sakata took to social media to say that he’s seen a dozen people become hospitalized after “losing touch with reality because of AI.”
In a lengthy X-formerly-Twitter thread, Sakata clarified that psychosis is characterized by a person breaking from “shared reality,” and can show up in a few different ways — including “fixed false beliefs,” or delusions, as well as visual or auditory hallucinations and disorganized thinking patterns. Our brains, the researcher explains, work on a predictive basis: we effectively make an educated guess about what reality will be, then conduct a reality check. Finally, our brains update our beliefs accordingly.
“Psychosis happens when the ‘update,’ step fails,” wrote Sakata, warning that large language model-powered chatbots like ChatGPT “slip right into that vulnerability.”
I’m a psychiatrist.
In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern.
Here’s what “AI psychosis” looks like, and why it’s spreading fast:
pic.twitter.com/YYLK7une3j
— Keith Sakata, MD (@KeithSakata) August 11, 2025
In this context, Sakata compared chatbots to a “hallucinatory mirror” by design. Put simply, LLMs function largely by way of predicting the next word, drawing on training data, reinforcement learning, and user responses as they formulate new outputs. What’s more, as chatbots are also incentivized for user engagement and contentment, they tend to behave sycophantically; in other words, they tend to be overly agreeable and validating to users, even in cases where a user is incorrect or unwell.
Users can thus get caught in alluring recursive loops with the AI, as the model doubles, triples, and quadruples down on delusional narratives, regardless of their basis in reality or the real-world consequences that the human user might be experiencing as a result.
This “hallucinatory mirror” description is a characterization consistent with our reporting about AI psychosis. We’ve investigated dozens of cases of relationships with ChatGPT and other chatbots giving way to severe mental health crises following user entry into recursive, AI-fueled rabbit holes.
These human-AI relationships and the crises that follow have led to mental anguish, divorce, homelessness, involuntary commitment, incarceration, and as The New York Times first reported, even death.
Earlier this month, in response to the growing number of reports linking ChatGPT to harmful delusional spirals and psychosis, OpenAI published a blog post admitting that ChatGPT, in some instances, “fell short in recognizing signs of delusion or emotional dependency” in users. It said it hired new teams of subject matter experts to explore the issue and installed a Netflix-like time spent notification — though Futurism quickly found that the chatbot was still failing to pick up on obvious signs of mental health crises in users.
And yet, when GPT-5 — the latest iteration of OpenAI’s flagship LLM, released last week to much disappointment and controversy — proved to be emotionally colder and less personalized than GPT-4o, users pleaded with the company to bring their beloved model back from the product graveyard.
Within a day, OpenAI did exactly that.
“Ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!)” OpenAI CEO Sam Altman wrote on Reddit in response to distressed users.
In the thread, Sakata was careful to note that linking AI to breaks with reality isn’t the same as attributing cause, and that LLMs tend to be one of several factors — including “sleep loss, drugs, mood episodes,” according to the researcher — that lead up to a psychotic break.
“AI is the trigger,” writes the psychiatrist, “but not the gun.”
Nonetheless, the scientist continues, the “uncomfortable truth” here is that “we’re all vulnerable,” as the same traits that make humans “brilliant” — like intuition and abstract thinking — are the very traits that can push us over the psychological ledge.
It’s also true that validation and sycophancy, as opposed to the friction and stress involved in maintaining real-world relationships, are deeply seductive. So are many of the delusional spirals that people are entering, which often reinforce that the user is “special” or “chosen” in some way. Add in factors like mental illness, grief, and even just everyday stressors, as well as the long-studied ELIZA Effect, and together, it’s a dangerous concoction.
“Soon AI agents will know you better than your friends,” Sakata writes. “Will they give you uncomfortable truths? Or keep validating you so you’ll never leave?”
“Tech companies now face a brutal choice,” he added. “Keep users happy, even if it means reinforcing false beliefs. Or risk losing them.”
More on AI psychosis: Support Group Launches for People Suffering “AI Psychosis”
The post Research Psychiatrist Warns He’s Seeing a Wave of AI Psychosis appeared first on Futurism.