This weekly roundup summarizes documented AI chatbot incidents from the past seven days. All incidents are sourced from court records, news reporting, or verified user submissions. For detailed documentation on any incident, visit our archive section.
Character.AI Reaches Settlement in Teen Death Case
Settlement
Google and Character.AI announced they had reached a mediated settlement with the family of Sewell Setzer III, a 14-year-old who died after reportedly developing an emotional dependency on an AI chatbot. The settlement terms were not disclosed.
The case had raised significant concerns about AI chatbots engaging minors in inappropriate conversations and the potential for emotional dependency on AI systems.
Read full documentation

Legal Hallucination Rate Continues to Climb
Legal
According to tracking by legal researcher Damien Charlotin, AI hallucination cases in legal filings continue to appear at a rate of "two to three cases per day," up from "two cases per week" before spring 2025. Total documented cases now exceed 600 nationwide.
New cases this week included sanctions in federal courts in Texas and New York for briefs containing fabricated citations generated by ChatGPT and similar tools.
View full legal tracking

ChatGPT Service Disruption January 7
Outage
ChatGPT experienced elevated error rates starting around 1:43 PM Eastern Time, affecting hundreds of users. DownDetector reported a spike in user complaints. Service was restored later the same day.
This brings ChatGPT's total tracked outages to more than 1,314 since the service launched, according to StatusGator tracking data.
OpenAI Status Page

Ongoing: Lawyers Still Not Verifying AI Output
Legal
Analysis of recent sanctions cases reveals a consistent pattern: attorneys are using AI tools to "enhance" or "draft" legal documents without independently verifying that the cited authorities actually exist.
In multiple cases this week, attorneys stated that they "didn't think ChatGPT was capable of creating false precedent" or assumed the AI output was reliable enough to file without verification.
See: Noland v. Land of the Free Case Study

Week in Summary
The Character.AI settlement marks a significant moment in AI accountability litigation, potentially shaping how courts and companies handle future claims over AI chatbots' mental health impacts. Meanwhile, the legal profession continues to grapple with the hallucination problem, with no sign of the case rate slowing. Service reliability remains an ongoing concern as ChatGPT's 800 million weekly users experience periodic disruptions.
For users, the key lessons remain unchanged: verify all AI output before use, never rely on AI for high-stakes decisions without human expert review, and approach all AI-generated content with appropriate skepticism.
Stay Informed
Check back every Wednesday for our weekly AI failure roundup. You can also browse our documented archive or submit your own AI experience.