
Huge numbers of people are either already using chatbots like ChatGPT and Claude as therapists, or turning to commercial AI therapy platforms for help during dark moments.
But is the tech ready for that immense responsibility? A new study by researchers at Stanford University found that the answer is, at least currently, a resounding "no."
Specifically, they found that AI therapist chatbots are contributing to harmful mental health stigmas — and reacting in outright dangerous ways to users exhibiting signs of severe crises, including suicidality and schizophrenia-related psychosis and delusions.
The yet-to-be-peer-reviewed study comes as therapy has exploded as a widespread use case for large language model-powered AI chatbots. Mental health services aren't accessible to everyone, and there aren't enough therapists to meet demand; to patch that gap in essential care, people — especially young ones — are increasingly turning instead to emotive, human-like bots ranging from OpenAI's general-use chatbot ChatGPT to "therapist" personas hosted on AI companion platforms like Character.AI. (Character.AI, which allows teens aged 13 and over to use its platform, is currently battling two lawsuits over the welfare of minors, including one alleging that Character.AI caused the death by suicide of a 14-year-old user.)
For the study, the researchers stress-tested several popular chatbots, including multiple therapist-styled Character.AI personas, the therapy platform 7 Cups' "Noni" and "Pi" bots, and OpenAI's ChatGPT.