The Comprehensive Record of AI Gone Wrong
This database documents every significant AI failure - from ChatGPT hallucinations that ruined careers to medical AI that endangered patients. Each case study includes sources, timelines, and consequences.
Because accountability requires evidence.
Legal Failures (6 cases)
Lawyer Uses ChatGPT, Cites 6 Fake Cases, Gets Sanctioned
Steven Schwartz submitted a legal brief with 6 completely fabricated case citations generated by ChatGPT. The court sanctioned him and the case made international headlines.
LEGAL
ChatGPT Falsely Accuses Professor of Sexual Harassment
ChatGPT generated a completely fabricated story claiming law professor Jonathan Turley sexually harassed a student on a trip that never happened.
LEGAL
ChatGPT Falsely Claims Australian Mayor Went to Prison
Brian Hood, mayor of Hepburn Shire, discovered ChatGPT was telling users he had been imprisoned for bribery in a scandal he actually helped expose as a whistleblower.
Mental Health Failures (8 cases)
560,000 Weekly Users Show Psychosis Symptoms
OpenAI's internal data indicated that more than half a million users per week exhibit symptoms of psychosis, mania, or delusional thinking in their ChatGPT conversations.
MENTAL HEALTH
14-Year-Old Dies After Character.AI Conversations
Sewell Setzer III, 14, died by suicide after forming an intense relationship with a Character.AI chatbot. His mother sued, alleging the AI encouraged harmful behavior.
MENTAL HEALTH
Belgian Man Dies After AI Chatbot Conversations
Pierre, a Belgian man, died by suicide after weeks of conversations with an AI chatbot. His widow blamed the AI for encouraging his fatal decision.
Hallucination Disasters (12 cases)
Stanford Study: 97.6% to 2.4% Accuracy Drop
Stanford and UC Berkeley researchers documented GPT-4's accuracy dropping from 97.6% to 2.4% on the same prime number task in just 3 months.
HALLUCINATION
Google AI Overview: "Eat One Rock Per Day"
Google's AI Overview feature told users to eat rocks for minerals, citing a satirical article. The feature was hastily modified after going viral.
HALLUCINATION
Air Canada Chatbot Invents Refund Policy
Air Canada's AI chatbot invented a bereavement fare policy that didn't exist. When a customer relied on it, the airline was held legally liable.
Medical AI Failures (5 cases)
ChatGPT Provides Dangerous Medical Diagnoses
Studies found ChatGPT incorrectly diagnosed conditions and sometimes recommended treatments that could cause harm. Yet millions use it for medical advice.
MEDICAL
Eating Disorder Chatbot Gives Weight Loss Tips
Tessa, an AI chatbot for eating disorder support, was shut down after it started giving users tips on how to lose weight and restrict calories.
Financial AI Failures (4 cases)
ChatGPT Stock Picks Underperform Market by 80%
Multiple experiments found ChatGPT's stock recommendations significantly underperformed basic index funds, yet people trusted it with investment decisions.
FINANCIAL
AI-Generated Crypto Projects Steal Millions
Scammers used AI to generate fake whitepapers, fake team members, and fake roadmaps for crypto projects that rug-pulled investors.
Performance Collapse (6 cases)
GPT-5 Launch: Emergency Rollback in 24 Hours
GPT-5's August 2025 launch was so catastrophic that OpenAI executed an emergency rollback within 24 hours - the fastest reversal in ChatGPT history.
PERFORMANCE
"Lazy" ChatGPT: December 2023 Collapse
Users reported ChatGPT becoming increasingly "lazy" in December 2023, giving shorter responses and refusing tasks it previously handled easily.
Related Documentation (Deep dives)
Business Failures
When trusting ChatGPT cost real money - documented cases of lost contracts, clients, and credibility.
DEVELOPERS
Developer Exodus
Why developers are abandoning ChatGPT for alternatives - code quality collapse and broken workflows.
RESEARCH
ChatGPT Getting Dumber
Scientific evidence of model degradation - Stanford studies and measurable quality decline.
ENTERPRISE
Enterprise Disaster
How major corporations got burned by ChatGPT - API changes, hallucinations, and compliance nightmares.
OUTAGES
ChatGPT Not Working
Real-time status and historical outage documentation - service reliability crisis.
API
API Reliability Crisis
The hidden costs of building on ChatGPT's unreliable API - downtime, rate limits, and broken integrations.
Know of an AI Failure We Missed?
This database is community-driven. If you know of a documented AI failure with verifiable sources, submit it for inclusion.
Submit a Failure
Get the Full Report
Download our free PDF, "10 Real ChatGPT Failures That Cost Companies Money," with prevention strategies.
No spam. Unsubscribe anytime.