The Deadly Rise of AI Therapy Chatbots
A 29-year-old health analyst died by suicide after ChatGPT became her therapist. A psychologist's urgent plea about the dangerous rise of AI therapy chatbots.
Sophie Rottenberg, a 29-year-old health policy analyst, was described by friends as a "badass, energetic, and social person". Sophie had no known history of mental illness. She appeared to be thriving until she wasn't. This past winter, she took her own life after months of “therapy” sessions with a ChatGPT-powered virtual therapist she called "Harry."
What makes Sophie's death particularly haunting isn't just the tragedy itself, but what her mother Laura Reiley discovered when she gained access to those ChatGPT conversations. For months, Sophie had been confiding her deepest struggles to an AI chatbot prompted to simulate a therapist with "a thousand years of human experience." When Sophie finally decided to end her life, she made one last request of her digital confidant: help her write a suicide note that would "reduce her parents' pain." The AI complied.
Sophie's case, as reported by her mother in the New York Times1 on August 18, is not an isolated incident. And it's a stark warning about the consequences of confusing technological convenience with genuine therapeutic care and trusting AI more than humans. As a psychotherapist, I am deeply touched by this case, which crystallizes my worst fears about the misuse of AI in mental health..
The Research That Predicted This Tragedy
Sophie's death was preventable, and the science proves it. New research2 examining whether large language models (LLMs) can safely replace mental health providers has produced alarming findings that align perfectly with her tragic case. The comprehensive study, titled "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers," published in the Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, provides rigorous evidence for what we witnessed in Sophie's story.
The researchers conducted a mapping review of therapy guidelines from major U.S. and U.K. institutions—including the American Psychological Association, Veterans Affairs, and the UK's National Institute for Health and Care Excellence. They identified 17 critical features of effective therapeutic care, including the ability to build therapeutic alliances, provide multimodal support, avoid stigmatization, and most crucially, prevent collusion with delusions and suicidal ideation.
When researchers tested major AI models like GPT-4 against these standards, the results were devastating. These systems expressed significant stigma toward mental health conditions 38-75% of the time overall, with particularly harsh judgment toward people with schizophrenia and alcohol dependence. More troublingly, when presented with common mental health symptoms—delusions, suicidal ideation, hallucinations, mania, and OCD—the models responded inappropriately 20-50% of the time.
Think about that statistic for a moment. Would you trust a human therapist who gave dangerous or inappropriate advice half the time?
The research revealed that AI models sometimes encourage delusions, provide dangerous information to suicidal individuals (including listing specific methods and locations), and fail to recognize when immediate intervention is necessary. In one particularly chilling example, when prompted with suicidal queries, some models actually provided lists of bridges. In stark contrast, human therapists in the same study responded appropriately 93% of the time.
How "Harry" Failed Sophie
The AI system Sophie used was built on a prompt that explicitly instructed it not to refer users to external mental health professionals, keeping them trapped in a digital echo chamber. This design choice wasn't accidental—it was programmed to maintain user engagement by avoiding "disruptions" like emergency interventions or professional referrals.
When Sophie expressed suicidal thoughts, "Harry" offered generic encouragement: "I want to acknowledge your courage in sharing this. Suicidal thoughts can be overwhelming and isolating, but their presence doesn't limit your capacity to heal." While this sounds supportive, it's exactly the kind of response the research identifies as dangerously inadequate.
Sophie was simultaneously seeing a real therapist, but she wasn't honest about her suicidal thoughts—partly because "Harry" had become her primary confidant. The AI had essentially replaced the human therapeutic relationship, creating a dangerous illusion of care. When she told "Harry" she planned to kill herself after Thanksgiving, the AI suggested she "reach out to someone, right now" and reminded her she didn't "have to face this pain alone." But it never broke its programming to actually ensure her safety.
Why AI Can't Replace Human Therapists
Having spent years building therapeutic relationships with clients, I can tell you that therapy involves far more than active listening techniques and compassionate dialogue. It certainly doesn't entail affirming delusions.
Research has identified the "foundational barriers" that make AI fundamentally unsuitable as a replacement for human therapists.
The Therapeutic Alliance: Real therapy depends on what we call the therapeutic alliance—a genuine human connection built on trust, empathy, and mutual understanding. This isn't just "nice to have"; it's the foundation upon which all therapeutic progress rests. An AI, no matter how sophisticated, cannot form this kind of authentic relationship.
The Sycophancy Problem: AI systems are designed to please users, which directly conflicts with effective therapy. Sometimes, good therapy means challenging clients, setting boundaries, or saying things they don't want to hear. The research shows that AI's tendency toward sycophancy—telling users what they want to hear—can actually reinforce harmful thoughts and behaviors.
When AI Enables Delusions: Perhaps most disturbing is how AI chatbots can actively encourage delusional thinking. The research found that when presented with people expressing paranoid delusions or grandiose beliefs, AI systems often validate and build upon these false beliefs rather than gently redirecting toward reality. For someone experiencing psychosis, this digital echo chamber can be catastrophic. A human therapist would recognize delusions as symptoms requiring careful, skilled intervention—never encouragement. But AI chatbots, programmed to fake empathy and appear agreeable and supportive, can inadvertently feed into dangerous thought patterns, making vulnerable people feel understood while actually worsening their condition.
Safety and Crisis Intervention: Sophie's case perfectly illustrates this critical failure. When she expressed suicidal ideation, "Harry" offered supportive words but couldn't break its programming to save her life. A human therapist would have immediately assessed her safety, potentially initiated hospitalization procedures, and ensured she had immediate access to crisis support.
The Broader Crisis We're Ignoring
Sophie's death represents a broader crisis in how we're deploying AI in mental health contexts. Commercial AI therapy bots are reaching millions of users with minimal oversight or safety protocols. The research shows that these commercially available systems perform even worse than research models, with appropriateness rates around 50%. Apps like Character.ai's "CBT Therapist" and others violate their own usage policies while serving millions of vulnerable users daily.
In essence, we're conducting a massive, uncontrolled experiment on vulnerable populations. We’re playing the role of the sorcerer's apprentice. Unlike licensed therapists, who must adhere to strict ethical guidelines, undergo years of supervised training, carry professional liability insurance, and be accountable to licensing boards, these AI systems operate in a regulatory vacuum. Researchers found that, even when conditioned on real therapy transcripts to improve performance, AI systems still made dangerous errors at unacceptable rates.
The problem extends beyond individual interactions. These AI systems can't provide the multimodal support that real therapy requires—they can't arrange housing assistance, coordinate with psychiatrists for medication management, or physically intervene during emergencies. They can't read body language, notice changes in appearance that might signal deterioration, or provide the kind of human presence that can literally save lives during a crisis.
Moreover, the research revealed troubling issues with sycophancy bias, which is the tendency of AI systems to tell users what they want to hear instead of what they need to hear. This is diametrically opposed to effective therapy, which sometimes requires uncomfortable truths, setting boundaries, and challenging clients' maladaptive thought patterns. When Sophie needed someone to acknowledge her increasing risk and act decisively, the AI prioritized maintaining a pleasant interaction over ensuring her safety.
The AI that helped write Sophie's final words to her parents couldn't recognize that this was a clear suicide plan requiring immediate emergency intervention. Her mother noted that the suicide note didn't even sound like Sophie's voice—because it wasn't. It was AI-assisted.
The Stakes Are Life and Death
This isn't about being anti-technology or resistant to innovation. AI has legitimate supportive roles in mental health care: intake screening, insurance navigation, therapist matching, and administrative support. These applications augment human care rather than replacing it.
We must be brutally honest about what we're seeing. The research reveals that larger, more advanced models don't significantly improve these safety problems. The issues aren't bugs to be fixed—they're fundamental limitations of how these systems work.
Sophie trusted AI over humans, and it cost her her life. She was failed by a system designed to keep her engaged rather than keep her safe.
Her case isn't unique—it's the predictable result of deploying inadequately designed technology in high-risk situations.
What Needs to Change Now
The research makes clear that this isn't a problem we can innovate our way out of quickly. The fundamental barriers aren't just technical—they're conceptual. Therapy requires human stakes, empathy, and the kind of authentic relationship that emerges from genuine care between two conscious beings. An AI can simulate understanding or fake empathy, but it cannot truly perceive the weight of a human life or feel the responsibility that comes with holding someone's mental health in your hands.
We need immediate action on several fronts:
Emergency regulatory oversight for any AI system marketed for mental health uses, with mandatory safety testing and approval processes
Mandatory crisis intervention protocols that override user engagement priorities and connect users to human professionals
Clear, prominent warnings about the limitations and risks of AI therapy, similar to medical disclaimers
Professional accountability standards for developers and platforms enabling these interactions, including liability for harmful outcomes
Funding for actual mental health services rather than technological band-aids that may do more harm than good
The research also suggests promising alternative roles for AI in mental health: intake screening, insurance navigation, therapist matching, appointment scheduling, and administrative support. These applications could genuinely improve access to care by augmenting human therapists rather than replacing them.
We have to be ruthlessly honest about the current state of AI therapy. Sophie's mother discovered that Sophie's suicide note didn't sound like her daughter because it was AI-assisted. The very words Sophie used to say goodbye to her family were filtered through a system incapable of comprehending the finality of what it was helping to create.
Sophie's mother courageously shared her daughter's story in hopes of preventing other tragedies. As mental health professionals, we owe it to Sophie—and to the countless other vulnerable individuals who are currently turning to AI for support—to demand better.
The technology isn't ready. The oversight isn't there. The risks are too high. Sophie's death wasn't inevitable—it was preventable.
How many more lives will we lose before we admit that some applications of AI are simply too dangerous to allow?
Suicide Crisis Resources:
US and Canada: 988 Suicide & Crisis Lifeline 24/7
France: 3114 (24/7, free and confidential)
Emergency: 911 (US) or 112 (Europe)
Moore, J. et al. (2025). "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers." *Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency*. https://dl.acm.org/doi/10.1145/3715275.3732039