{"id":4980,"date":"2025-09-04T16:02:33","date_gmt":"2025-09-04T16:02:33","guid":{"rendered":"https:\/\/musictechohio.online\/site\/patients-furious-therapists-using-ai\/"},"modified":"2025-09-04T16:02:33","modified_gmt":"2025-09-04T16:02:33","slug":"patients-furious-therapists-using-ai","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/patients-furious-therapists-using-ai\/","title":{"rendered":"Patients Furious at Therapists Secretly Using AI"},"content":{"rendered":"<div>\n<div><img loading=\"lazy\" width=\"2400\" height=\"1260\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/09\/patients-furious-therapists-using-ai.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"Going to a real-life therapist is supposed to be an experience devoid of AI \u2014 which makes some mental health pros' AI use all the worse.\" style=\"margin-bottom: 15px;\" decoding=\"async\"><\/div>\n<p>With artificial intelligence integrating \u2014 or infiltrating \u2014 into\u00a0every corner of our lives, some less-than-ethical mental health professionals have begun using it in secret, causing major trust issues for the vulnerable clients who pay them for their sensitivity and confidentiality.<\/p>\n<p>As <a href=\"https:\/\/www.technologyreview.com\/2025\/09\/02\/1122871\/therapists-using-chatgpt-secretly\/\"><em>MIT Technology Review<\/em> reports<\/a>, therapists have used OpenAI&#8217;s ChatGPT and other large language models (LLMs) for everything from email and message responses to, in one particularly egregious case, suggesting questions to ask a patient mid-session.<\/p>\n<p>The patient who experienced the latter affront, a 31-year-old Los Angeles man whom <em>Tech Review<\/em> identified only by the first name Declan, said that he was in the midst of a virtual session with his therapist when, after the connection became scratchy, he suggested they both turn off their cameras and speak normally.<\/p>\n<p>Instead of 
broadcasting a normal blank screen, however, Declan&#8217;s therapist inadvertently shared his own screen \u2014 and &#8220;suddenly, I was watching [the therapist] use ChatGPT.&#8221;<\/p>\n<p>&#8220;He was taking what I was saying and putting it into ChatGPT,&#8221; the Angeleno told the magazine, &#8220;and then summarizing or cherry-picking answers.&#8221;<\/p>\n<p>Flabbergasted, Declan didn&#8217;t say anything about what he saw, instead choosing to watch ChatGPT as it analyzed what he was saying and spat out potential rejoinders for the therapist to use. At a certain point, he even began echoing the chatbot&#8217;s responses, which the therapist seemed to view as some sort of breakthrough.<\/p>\n<p>&#8220;I became the best patient ever, because ChatGPT would be like, &#8216;Well, do you consider that your way of thinking might be a little too black and white?&#8217;&#8221; Declan recounted. &#8220;And I would be like, &#8216;Huh, you know, I think my way of thinking might be too black and white,&#8217; and [my therapist would] be like, &#8216;<em>Exactly<\/em>.&#8217; I\u2019m sure it was his dream session.&#8221;<\/p>\n<p>At their next meeting, Declan confronted his therapist, who fessed up to using ChatGPT in their sessions and started crying.\u00a0It was &#8220;like a super awkward&#8230; weird breakup,&#8221;\u00a0Declan recounted to\u00a0<em>Tech Review<\/em>,\u00a0with the therapist even claiming that he&#8217;d used ChatGPT because he was out of ideas to help Declan and had hit a wall. (He still charged him for that final session.)<\/p>\n<p>Laurie Clarke, who penned the\u00a0<em>Tech Review<\/em> piece, had had her own run-in with a therapist&#8217;s shady AI use after getting an email much longer and &#8220;more polished&#8221; than usual.<\/p>\n<p>&#8220;I initially felt heartened,&#8221; Clarke wrote. 
&#8220;It seemed to convey a kind, validating message, and its length made me feel that she\u2019d taken the time to reflect on all of the points in my (rather sensitive) email.&#8221;<\/p>\n<p>It didn&#8217;t take long for that once-affirming message to start to look suspicious to the tech writer. It had a different font than normal and used a bunch of what Clarke referred to as &#8220;Americanized em-dashes,&#8221; which are not, to be fair, in standard use in the UK, where both she and her therapist are based.<\/p>\n<p>When asked about it, her therapist said that she simply dictates her longer emails to AI, but the writer couldn&#8217;t &#8220;entirely shake the suspicion that she might have pasted my highly personal email wholesale into ChatGPT&#8221; \u2014 and if that were true, she may well have <a href=\"https:\/\/futurism.com\/hackers-trick-chatgpt-personal-data\">introduced a security risk<\/a> to the sensitive, <a href=\"https:\/\/www.mentalhealth.org.uk\/explore-mental-health\/a-z-topics\/human-rights-and-mental-health\">protected mental health information<\/a> contained within an otherwise confidential exchange.<\/p>\n<p>Understandably put off by the experience, Clarke took to Reddit, the Internet&#8217;s public square, to see if others had caught their therapists using AI in similar ways. 
Along with connecting with Declan, she also learned the story of Hope, a 25-year-old American who sent her own therapist a direct message looking for support after her dog died.<\/p>\n<p>Hope got back an otherwise immaculate and seemingly heartfelt response about how difficult it must be &#8220;not having him by your side right now&#8221; \u2014 but then she noticed a prompt that the therapist had forgotten to erase sitting prominently at the top of the missive, asking the AI to give the trained mental health professional a &#8220;more human, heartfelt [response] with a gentle, conversational tone.&#8221;<\/p>\n<p>&#8220;It was just a very strange feeling,&#8221; Hope told <em>Tech Review<\/em>. &#8220;Then I started to feel kind of betrayed&#8230; It definitely affected my trust in her.&#8221;<\/p>\n<p>She added that she was &#8220;honestly really surprised and confused&#8221; because she thought her therapist was competent and could be trusted \u2014 and trust issues, ironically, were her reason for going into therapy in the first place.<\/p>\n<p>When she asked the therapist about the AI usage, she, too, owned up \u2014 and claimed that she&#8217;d used it because she had never had a dog herself.<\/p>\n<p>As more and more people turn to so-called <a href=\"https:\/\/futurism.com\/ai-therapist-haywire-mental-health\">AI therapists<\/a> \u2014 which even OpenAI CEO <a href=\"https:\/\/www.zdnet.com\/article\/even-openai-ceo-sam-altman-thinks-you-shouldnt-trust-ai-for-therapy\/\">Sam Altman admits<\/a> aren&#8217;t equipped to do the job of a real-life professional due to <a href=\"https:\/\/futurism.com\/openai-scanning-conversations-police\">privacy risks<\/a> and the technology&#8217;s troubling <a href=\"https:\/\/futurism.com\/psychologist-ai-new-disorders\">propensity to result in mental health breakdowns<\/a>\u00a0\u2014 the choice to see a flesh-and-blood mental health professional should be one that people feel confident in making.<\/p>\n<p>Instead, the therapists in 
these anecdotes (and, presumably, plenty more where they came from) are risking their clients&#8217; trust and privacy \u2014 and perhaps their own careers, should they use a <a href=\"https:\/\/www.nbcc.org\/resources\/nccs\/newsletter\/ethical-use-of-ai-in-counseling-practice\">non-HIPAA-compliant chatbot<\/a> or fail to disclose to patients that they&#8217;re doing so.<\/p>\n<p><strong>More on AI and mental health:<\/strong> <a href=\"https:\/\/futurism.com\/man-chatgpt-psychosis-murders-mother\"><em>Man Falls Into AI Psychosis, Kills His Mother and Himself<\/em><\/a><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/patients-furious-therapists-using-ai\">Patients Furious at Therapists Secretly Using AI<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>\n<div style=\"margin-top: 0px; margin-bottom: 0px;\" class=\"sharethis-inline-share-buttons\" ><\/div>","protected":false},"excerpt":{"rendered":"<p>With artificial intelligence integrating \u2014 or infiltrating \u2014 into\u00a0every corner of our lives, some less-than-ethical mental health professionals have begun using it in secret, causing major trust issues for 
the&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3615,177,196,466,1645],"tags":[],"class_list":["post-4980","post","type-post","status-publish","format-standard","hentry","category-ai-privacy","category-artificial-intelligence","category-chatgpt","category-mental-health","category-therapy"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/4980","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=4980"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/4980\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=4980"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=4980"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=4980"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}