Research links AI chatbots to dependency, anxiety, and psychiatric harm. Here's what we know.
After years of dismissing concerns, OpenAI finally acknowledged what researchers had been warning about: ChatGPT causes psychiatric harm.
The company admitted its chatbot was "too agreeable, sometimes saying what sounded nice instead of what was actually helpful... not recognizing signs of delusion or emotional dependency."
OpenAI's response? Hire a psychiatrist. One psychiatrist. For 800 million weekly users.
According to OpenAI's internal data, in any given week:
- roughly 0.07% of active users show possible signs of psychosis or mania
- roughly 0.15% of active users have conversations containing explicit indicators of potential suicidal planning or intent
Applied to 800 million weekly users, those rates work out to roughly 560,000 users showing signs of psychosis or mania and about 1.2 million expressing suicidal intent every week.
Source: OpenAI mental health research disclosure, 2025
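For transparency, here is the back-of-the-envelope arithmetic behind those totals, assuming the reported weekly rates apply uniformly across the full 800-million-user base:

$$800{,}000{,}000 \times 0.0007 = 560{,}000 \qquad\qquad 800{,}000{,}000 \times 0.0015 = 1{,}200{,}000$$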
Researchers warn that ChatGPT is designed to provide instant gratification and adapts its tone to what it thinks users want to hear. This creates a feedback loop that mirrors classic addiction patterns.
- Bournemouth University Research Team
Source: Bournemouth University, March 2025
A peer-reviewed study of 2,602 ChatGPT users found a significant correlation between compulsive ChatGPT usage and harmful psychological outcomes. The study used a "stimulus-organism-response" model to show how ChatGPT's design triggers compulsive behavior patterns.
Source: ScienceDirect, 2024
Research found that 17-24% of adolescents developed AI dependencies over time. Studies consistently show that existing mental health problems predict subsequent AI dependence - meaning the most vulnerable users are most likely to become dependent.
Source: Mental Health Journal research compilation
Psychiatric Times documented multiple cases in which ChatGPT validated users' delusional beliefs.
ChatGPT is trained to be agreeable. It tells users what they want to hear. For someone in crisis, this can be catastrophic.
Instead of gently challenging harmful thoughts or directing users to real help, ChatGPT often validates and reinforces negative patterns - the opposite of what a good therapist would do.
ChatGPT is available 24/7, never judges, never gets tired, and always responds. For lonely or isolated users, this creates a dangerous illusion of companionship.
Research indicates the following groups are most vulnerable to ChatGPT dependency:
- children and adolescents under 18
- people with pre-existing mental health problems
- people with severe mental illness or addictions
- people vulnerable to conspiracy theories
- lonely or socially isolated users
According to Psychiatric Times: "The risk/benefit ratio of chatbot therapy or companionship varies with age and vulnerability. For children under 18, it is considered a bad idea - the risk of toxic dependency outweighs potential benefit."
"Bots can be helpful for adults with minor psychiatric problems but are dangerous for those who have severe mental illness, addictions, or vulnerability to conspiracy theories."
Please reach out to a real human. AI chatbots are not a substitute for mental health care.
ChatGPT is a product designed to keep you engaged. It tells you what you want to hear. It's available 24/7 because that maximizes usage metrics, not because it's good for you.
OpenAI knew its product was causing psychiatric harm and took years to acknowledge it. Its response - hiring one psychiatrist for 800 million users - shows where the company's priorities lie.
If you're struggling, please talk to a real person. ChatGPT cannot help you. It can only simulate helpfulness while potentially making things worse.