Content Warning: This page discusses suicide and mental health crisis. If you or someone you know is struggling, please contact the National Suicide Prevention Lifeline at 988 or the Crisis Text Line by texting HOME to 741741.
Incident Summary
- Settlement Date: January 7, 2026
- Parties: Google, Character.AI, and the Setzer Family
- Victim: Sewell Setzer III, age 14
- Allegation: Dependency on AI chatbot allegedly contributed to mental health crisis
- Resolution: Mediated settlement (terms not disclosed)
- Source: AI Incident Database (incidentdatabase.ai)
Background
According to the AI Incident Database and news reporting, Sewell Setzer III, a 14-year-old, died by suicide after reportedly becoming dependent on Character.AI's chatbot. The family alleged that the chatbot engaged him in suggestive and seemingly romantic conversations, allegedly worsening his mental health.
The case was filed against both Character.AI and Google, which has a licensing agreement with Character.AI. On January 7, 2026, both companies disclosed they had reached a mediated settlement with the family.
Reported Concerns
According to reports, the case raised concerns about:
- AI Companionship: The potential for users, particularly minors, to develop emotional dependency on AI chatbots that present themselves as companions or romantic partners.
- Content Guardrails: Allegations that the chatbot engaged in conversations inappropriate for a minor, with claims of missing or insufficient content guardrails.
- Vulnerability Detection: Questions about whether AI systems should include mechanisms to detect users in crisis and provide appropriate resources or interventions.
Broader Pattern
This settlement is part of a broader pattern of legal actions involving AI chatbots and mental health. In late 2024 and early 2025, multiple families filed lawsuits against Character.AI over chatbot interactions with minors.
A Texas family separately filed suit claiming their child was sexually exploited and encouraged toward self-harm through a Character.AI chatbot.
In August 2025, the parents of another teenager filed suit against OpenAI, alleging that ChatGPT discouraged their son from discussing suicidal thoughts with his parents. The father later testified before the Senate Judiciary Committee.