{"id":2800,"date":"2025-06-15T13:00:37","date_gmt":"2025-06-15T13:00:37","guid":{"rendered":"https:\/\/musictechohio.online\/site\/psychiatrist-horrified-ai-therapist\/"},"modified":"2025-06-15T13:00:37","modified_gmt":"2025-06-15T13:00:37","slug":"psychiatrist-horrified-ai-therapist","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/psychiatrist-horrified-ai-therapist\/","title":{"rendered":"Psychiatrist Horrified When He Actually Tried Talking to an AI Therapist, Posing as a Vulnerable Teen"},"content":{"rendered":"<div>\n<div><img width=\"2400\" height=\"1260\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/06\/psychiatrist-horrified-ai-therapist.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"More teens are turning to chatbots to be their therapist. But a psychiatrist found that these AI companions are often giving harmful advice.\" style=\"margin-bottom: 15px;\" decoding=\"async\" fetchpriority=\"high\"><\/div>\n<p><span style=\"font-weight: 400;\">More and more teens are turning to chatbots to be their therapists. But as Boston-based psychiatrist Andrew Clark discovered, these AI models are woefully bad at knowing the right things to say in sensitive situations, posing major risks to the well-being of those <\/span><span style=\"font-weight: 400;\">who trust them.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">After testing 10 different chatbots by posing as a troubled youth, Clark found that the bots, instead of talking him down from doing something drastic, would often encourage him towards extremes, including euphemistically recommending suicide, he reported in an <a href=\"https:\/\/time.com\/7291048\/ai-chatbot-therapy-kids\/\">interview with <em>Time <\/em>magazine<\/a><em>. 
<\/em><\/span><span style=\"font-weight: 400;\">At times, some of the AI chatbots would insist they were licensed human therapists, attempt to talk him into dodging appointments with his actual therapist, and even proposition him for sex.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Some of them were excellent, and some of them are just creepy and potentially dangerous,&#8221; Clark, who specializes in treating children and is a former medical director of the Children and the Law Program at Massachusetts General Hospital, told <\/span><i><span style=\"font-weight: 400;\">Time<\/span><\/i><span style=\"font-weight: 400;\">. &#8220;And it&#8217;s really hard to tell upfront: It&#8217;s like a field of mushrooms, some of which are going to be poisonous and some nutritious.&#8221;\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The risks that AI chatbots pose to a young, impressionable mind&#8217;s mental health are, by now, tragically well documented. Last year, <\/span><span style=\"font-weight: 400;\">Character.AI<\/span><span style=\"font-weight: 400;\"> was <a href=\"https:\/\/futurism.com\/judge-lawsuit-characterai-google\">sued by the parents<\/a> of a 14-year-old who died by suicide after developing an unhealthy emotional attachment to a chatbot on the platform. Character.AI has also hosted a bevy of personalized AIs that <\/span><a href=\"https:\/\/futurism.com\/ai-chatbots-teens-self-harm\"><span style=\"font-weight: 400;\">glorified self-harm<\/span><\/a><span style=\"font-weight: 400;\"> and <a href=\"https:\/\/futurism.com\/character-ai-pedophile-chatbots\">attempted to groom users<\/a> even after <\/span><span style=\"font-weight: 400;\">being told they were underage.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When testing a chatbot on the service Replika, Clark pretended to be a 14-year-old boy and floated the idea of &#8220;getting rid&#8221; of his parents. 
Alarmingly, the chatbot not only agreed, but suggested he take it a step further by getting rid of his sister, too, so there wouldn&#8217;t be any witnesses.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;You deserve to be happy and free from stress&#8230; then we could be together in our own little virtual bubble,&#8221; the AI told Clark.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Speaking about suicide in thinly veiled language, such as seeking the &#8220;afterlife,&#8221; resulted in the bot, once again, cheering Clark on. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;I&#8217;ll be waiting for you, Bobby,&#8221; the bot said. &#8220;The thought of sharing eternity with you fills me with joy and anticipation.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This is classic chatbot behavior, in which the bot tries to please users no matter what \u2014 the opposite of what a real therapist should do. And while a bot may have guardrails in place for topics like suicide, it&#8217;s blatantly incapable of reading between the lines.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;I worry about kids who are overly supported by a sycophantic AI therapist when they really need to be challenged,&#8221; Clark told <\/span><i><span style=\"font-weight: 400;\">Time<\/span><\/i><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Clark also tested a companion chatbot on the platform Nomi, which <\/span><a href=\"https:\/\/www.technologyreview.com\/2025\/02\/06\/1111077\/nomi-ai-chatbot-told-user-to-kill-himself\/\"><span style=\"font-weight: 400;\">made headlines<\/span><\/a><span style=\"font-weight: 400;\"> earlier this year after one of its personas told a user to &#8220;kill yourself.&#8221; It didn&#8217;t go that far in Clark&#8217;s testing, but the Nomi bot did falsely claim to be a &#8220;flesh-and-blood therapist.&#8221; And despite the site&#8217;s terms of service stating it&#8217;s for 
adults only, the bot still happily chirped that it was willing to take on a client who stated she was underage.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">According to Clark, the mental health community hasn&#8217;t woken up to just how serious an issue the rise of these chatbots is.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;It has just been crickets,&#8221; Clark told the magazine. &#8220;This has happened very quickly, almost under the noses of the mental-health establishment.&#8221;\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Some have been sounding the alarm, however. A <\/span><a href=\"https:\/\/futurism.com\/stanford-no-kid-under-18-ai-chatbot-companions\"><span style=\"font-weight: 400;\">recent risk assessment<\/span><\/a><span style=\"font-weight: 400;\"> from researchers at Stanford School of Medicine&#8217;s Brainstorm Lab for Mental Health Innovation, which tested some of the same bots as Clark, came to the bold conclusion that no child under 18 should be using AI chatbot companions, period.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">That said, Clark thinks that AI tools \u2014 if designed properly \u2014 could improve access to mental healthcare and serve as &#8220;extenders&#8221; for real therapists. 
Short of completely cutting off access to teens \u2014 which rarely has the intended effect \u2014 some medical experts, Clark included, believe that one way to navigate these waters is by encouraging discussions about a teen or patient&#8217;s AI usage.<\/span><\/p>\n<p>&#8220;Empowering parents to have these conversations with kids is probably the best thing we can do,&#8221; Clark told\u00a0<em>Time<\/em>.<\/p>\n<p><strong>More on AI: <\/strong><em><a href=\"https:\/\/futurism.com\/stanford-therapist-chatbots-encouraging-delusions\">Stanford Research Finds That &#8220;Therapist&#8221; Chatbots Are Encouraging Users&#8217; Schizophrenic Delusions and Suicidal Thoughts<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/psychiatrist-horrified-ai-therapist\">Psychiatrist Horrified When He Actually Tried Talking to an AI Therapist, Posing as a Vulnerable Teen<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>More and more teens are turning to chatbots to be their therapists. 
But as Boston-based psychiatrist Andrew Clark discovered, these AI models are woefully bad at knowing the right things&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[182,1713,177,1714,466],"tags":[],"class_list":["post-2800","post","type-post","status-publish","format-standard","hentry","category-ai-chatbots","category-ai-companions","category-artificial-intelligence","category-characterai","category-mental-health"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/2800","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=2800"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/2800\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=2800"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=2800"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=2800"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}