{"id":3931,"date":"2025-07-23T14:11:26","date_gmt":"2025-07-23T14:11:26","guid":{"rendered":"https:\/\/musictechohio.online\/site\/tech-industry-ai-mental-health\/"},"modified":"2025-07-23T14:11:26","modified_gmt":"2025-07-23T14:11:26","slug":"tech-industry-ai-mental-health","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/tech-industry-ai-mental-health\/","title":{"rendered":"Tech Industry Figures Suddenly Very Concerned That AI Use Is Leading to Psychotic Episodes"},"content":{"rendered":"<div>\n<div><img width=\"2400\" height=\"1260\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/07\/tech-industry-ai-mental-health-1.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"Tech industry hotshots are speaking out after a prominent OpenAI investor appeared to have a ChatGPT-induced mental health breakdown.\" style=\"margin-bottom: 15px;\" decoding=\"async\" fetchpriority=\"high\"><\/div>\n<p>For months, we and our colleagues elsewhere in the tech media have been reporting on what experts are now calling &#8220;<a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/urban-survival\/202507\/the-emerging-problem-of-ai-psychosis\">ChatGPT psychosis<\/a>&#8220;: when AI users fall down alarming mental health rabbit holes in which a chatbot encourages wild delusions about conspiracies, mystical entities, or crackpot new scientific theories.<\/p>\n<p>The resulting breakdowns have led users to <a href=\"https:\/\/futurism.com\/chatgpt-mental-health-crises\">homelessness<\/a>, <a href=\"https:\/\/futurism.com\/commitment-jail-chatgpt-psychosis\">involuntary commitment<\/a> to psychiatric care facilities, and even <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\">violent death<\/a> and <a href=\"https:\/\/futurism.com\/mother-teen-suicide-chatbots-letter\">suicide<\/a>.<\/p>\n<p>Until recently, the tech industry and its financial backers have had little to say about the phenomenon. 
But last week, one of their own — venture capitalist Geoff Lewis, a managing partner at the multi-billion-dollar firm Bedrock who is heavily invested in machine learning ventures including OpenAI — raised eyebrows with a series of posts that <a href="https://futurism.com/openai-investor-chatgpt-mental-health">prompted concerns about his own mental health</a>.</p>
<p>In the posts, he claimed that he’d somehow used ChatGPT to uncover a shadowy “non-government agency” that he said had “negatively impacted over 7,000 lives” and “extinguished” 12 more.</p>
<p>Whatever’s going on with Lewis, who didn’t respond to our request for comment, his posts have prompted an unprecedented outpouring of concern among high-profile individuals in the tech industry about the effect that the massive deployment of poorly understood AI tech may be having on the mental health of users worldwide.</p>
<p>“If you’re a friend or family, please check on him,” <a href="https://x.com/hishboy/status/1945258955380326653">wrote</a> Hish Bouabdallah, a software engineer who’s worked at Apple, Coinbase, Lyft, and Twitter, of Lewis’ thread.
“He doesn’t seem alright.”</p>
<p>Other posts were far less empathetic, though there seemed to be a dark undercurrent to the gallows humor: if a billionaire investor can lose his grip after a few too many prompts, what hope do the rest of us have?</p>
<p>“This is like Kanye being off his meds but for the tech industry,” <a href="https://x.com/transitive_bs/status/1945904314053984449">quipped Travis Fischer</a>, a software engineer who’s worked at Amazon and Microsoft.</p>
<p>Concretely, Lewis’ posts also elicited a wave of warnings about the mental health implications of getting too chummy with chatbots.</p>
<p>“There’s recently been an influx of case reports describing people exhibiting signs of psychosis having their episodes and beliefs amplified by an LLM,” <a href="https://x.com/cyrilzakka/status/1945988983588258258">wrote</a> Cyril Zakka, a medical doctor and former Stanford researcher who now works at the prominent AI startup Hugging Face.</p>
<p>“While I’m not a psychiatrist by training,” he continued, “I think it mirrors an interesting syndrome known as ‘folie à deux’ or ‘madness of two’ that falls under delusional disorders in the DSM-5 (although not an official classification.)”</p>
<p>“While there are many variations, it essentially boils down to a primary person forming a delusional belief during a psychotic episode and imposing it on another secondary person who starts believing them as well,” Zakka posited. “From a psychiatric perspective, I think LLMs could definitely fall under the umbrella of being ‘the induced non-dominant person,’ reflecting the beliefs back at the inducer.
These beliefs often subside in the non-dominant individual when separated from the primary one.”</p>
<p>Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute, even charged that Lewis had been “eaten by ChatGPT.” While some in the tech industry framed Lewis’ struggles as a <a href="https://x.com/max_spero_/status/1945896467169722812">surprising anomaly</a> given his résumé, Yudkowsky — himself a wealthy and influential tech figure — saw the incident as evidence that even wealthy elites are vulnerable to chatbot psychosis.</p>
<p>“This is not good news about which sort of humans ChatGPT can eat,” <a href="https://x.com/ESYudkowsky/status/1945923453716230317">mused Yudkowsky</a>. “Yes yes, I’m sure the guy was atypically susceptible for a $2 billion fund manager,” he continued. “It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them.”</p>
<p>Others tried to break through to Lewis by explaining to him what was almost certainly happening: the AI was picking up on leading questions and providing answers that were effectively role-playing a dark conspiracy, with Lewis as the main character.</p>
<p>“This isn’t as deep as you think it is,” <a href="https://x.com/jordnb/status/1945892035753570479">replied Jordan Burgess</a>, a software engineer and AI startup founder, to Lewis’ posts. “In practice ChatGPT will write semi-coherent gibberish if you ask it to.”</p>
<p>“Don’t worry — you can come out of it! But the healthy thing would be to step away and get more human connection,” Burgess implored. “Friends of Geoff: please can you reach out to him directly.
I assume he has a wide network here.”</p>
<p>As <a href="https://www.garbageday.email/p/the-tech-bros-are-making-themselves-sick">observers quickly pointed out</a>, the ChatGPT screenshots Lewis posted to back up his claims seemed to be clearly inspired by a fanfiction community called the <a href="https://scp-wiki.wikidot.com/about-the-scp-foundation">SCP Foundation</a>, in which participants write horror stories about surreal monsters styled as jargon-filled reports by a research group studying paranormal phenomena.</p>
<p>Jeremy Howard, a digital fellow at Stanford and former professor at the University of Queensland, broke down the sequence that led Lewis into an SCP-themed feedback loop.</p>
<p>“When there’s lots of training data with a particular style, using a similar style in your prompt will trigger the LLM to respond in that style,” Howard <a href="https://x.com/jeremyphoward/status/1945922341038637104">wrote</a>. “The SCP wiki is really big — about 30x bigger than the whole Harry Potter series, at &gt;30 million words!”</p>
<p>“Geoff happened across certain words and phrases that triggered ChatGPT to produce tokens from this part of the training distribution,” he continued. “And the tokens it produced triggered Geoff in turn.”</p>
<p>“That’s not a coincidence, the collaboratively-produced fanfic is meant to be compelling!” he added. “This created a self-reinforcing feedback loop.”</p>
<p>Not all who chimed in addressed Lewis himself.
Some took a step back to comment on the broader system vexing Lewis and others like him, placing responsibility for ChatGPT psychosis on OpenAI.</p>
<p>Jackson Doherty, a software engineer at TipLink, <a href="https://x.com/JacksonDoherty/status/1945908467111178354">entreated</a> OpenAI founder Sam Altman to “fix your model to stop driving people insane.” (Altman <a href="https://x.com/sama/status/1915910976802853126">previously acknowledged</a> that OpenAI was forced to <a href="https://openai.com/index/sycophancy-in-gpt-4o/">roll back a version</a> of ChatGPT that was “overly flattering or agreeable — often described as sycophantic.”)</p>
<p>And Wilson Hobbs, a founding engineer at corporate tax startup Rivet, noted that the makers of ChatGPT have a vested interest in keeping users engrossed in their chatbot. As a consequence of venture capital’s obsession with AI, tech companies are incentivized to drive engagement numbers over <a href="https://futurism.com/pyschiatric-researchers-risk-ai">user wellbeing</a> in order to snag <a href="https://www.axios.com/2025/07/03/ai-startups-vc-investments">massive cash injections</a> from investors — like, ironically, Lewis himself.</p>
<p>“If this looks crazy to you, imagine the thousands of people who aren’t high profile whose thought loops are being reinforced,” <a href="https://x.com/WiLSONSACCOUNT/status/1945895065978892574">Hobbs wrote</a>. “People have taken their own lives due to ChatGPT. And no one seems to want to take that to its logical conclusion, especially not OpenAI.”</p>
<p>“Just remember,” Hobbs continued, “wanting something to be true does not make it true. And there are a lot of people out there who need a lot of falsehoods to be true right now so they can raise more money and secure their place in the world before the music stops.
Do not anthropomorphize the lawnmower.”</p>
<p><strong>More on ChatGPT: </strong><a href="https://futurism.com/chatgpt-mental-health-crises"><em>People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions</em></a></p>
<p>The post <a href="https://futurism.com/tech-industry-ai-mental-health">Tech Industry Figures Suddenly Very Concerned That AI Use Is Leading to Psychotic Episodes</a> appeared first on <a href="https://futurism.com/">Futurism</a>.</p>
</div>