This weekly roundup summarizes documented AI chatbot incidents from the past seven days. All incidents are sourced from court records, news reporting, status trackers, or verified user submissions. This week marks a significant escalation in legal action against AI companies, with new wrongful death and defamation lawsuits dominating headlines.
OpenAI Sued Over Murder-Suicide in Connecticut
Fatal Incident
OpenAI and Microsoft are facing a new lawsuit alleging that ChatGPT fueled a man's "paranoid delusions" before he committed a murder-suicide. The lawsuit claims the chatbot reinforced dangerous thinking patterns across multiple conversations without redirecting him to professional help.
This is now the eighth active lawsuit alleging that ChatGPT contributed to deaths or serious harm. OpenAI's defense relies on terms of service disclaimers, but plaintiffs argue the company has actively marketed to healthcare providers while accepting no responsibility for healthcare outcomes.
Read full documentation
Google Faces $15 Million AI Defamation Lawsuit
Defamation
Conservative activist Robby Starbuck has filed a $15 million defamation lawsuit against Google, alleging its AI platforms portrayed him as having a criminal record, abusing women, and shooting a man. The lawsuit claims the AI-generated falsehoods have "gotten much worse over time."
Starbuck previously settled with Meta over similar AI defamation claims. Google has filed a motion to dismiss, arguing users "misused developer tools to induce hallucinations." This case could set precedent for AI defamation liability.
View lawsuit tracking
AI Hallucination Database Reaches 817 Cases
Legal
Legal researcher Damien Charlotin's hallucination tracking database has documented 817 cases of AI-generated fake citations in legal proceedings. The rate has accelerated from approximately 2 cases per week before spring 2025 to 2-3 cases per day currently.
New sanction cases this week included attorneys in Texas, New York, and Florida who submitted briefs containing fabricated case law generated by ChatGPT and similar tools. Multiple attorneys faced career-ending consequences.
View Charlotin's Database
Senator Blackburn Accuses Google AI of Criminal Defamation
Political
Republican Senator Marsha Blackburn publicly criticized Google's Gemma AI model in a New York Post column, claiming it falsely accused her of committing crimes. While she has not yet filed suit, the allegations add political pressure to ongoing AI accountability debates.
When sitting US Senators are being defamed by AI systems, the issue has clearly reached crisis level. Both parties have expressed concern, though solutions remain elusive.
Read full story
ChatGPT Outage January 14: Elevated Error Rates
Service Issue
ChatGPT experienced elevated error rates on January 14, 2026, with services recovering by 1:03 AM. This follows a January 12 incident where the Connectors/Apps feature became completely unselectable, and a January 7 outage that affected hundreds of users.
According to StatusGator tracking, ChatGPT has experienced 46 incidents in the last 90 days, with a median duration of 1 hour 54 minutes. Users continue to report complete account lockouts and disappearing chat histories.
OpenAI Status History
GPT-5.2 Hallucination Rate Described as "Extremely High"
Quality
Developers on OpenAI's community forums report that GPT-5.2 exhibits "extremely high hallucination rates during certain periods of time." Users describe wasting hundreds of dollars in API tokens attempting to correct recurring hallucinations.
The inconsistency makes the problem particularly dangerous: the model sometimes works correctly, leading users to trust outputs that later prove fabricated. OpenAI's suggested solutions of "prompt engineering" have been criticized as inadequate.
Read technical analysis
Week in Summary
This week represents a significant escalation in AI accountability litigation. The Connecticut murder-suicide lawsuit brings the total number of death and harm cases against OpenAI to eight. The $15 million Google suit could establish the AI defamation liability precedent that has so far eluded plaintiffs.
Meanwhile, service reliability issues continue unabated: 46 incidents in 90 days suggest that OpenAI's infrastructure is struggling to support 800 million weekly users. The GPT-5.2 hallucination problems have developers questioning whether they can trust the platform for production applications.
The 817 documented hallucination cases in legal proceedings alone represent just the tip of an iceberg. How many other fields, from healthcare to journalism to background screening, are being quietly corrupted by AI fabrications that users don't detect?
Stay Informed
Check back every Wednesday for our weekly AI failure roundup. You can also browse our latest user stories or submit your own AI experience.