AI Failures Database

Documenting Every AI Disaster So History Doesn't Repeat

The Comprehensive Record of AI Gone Wrong

This database documents every significant AI failure - from ChatGPT hallucinations that ruined careers to medical AI that endangered patients. Each case study includes sources, timelines, and consequences.

Because accountability requires evidence.

50+ Documented Failures · 8 Categories · 100% Sourced · 2023-2025 Coverage Period

⚖️ Legal Failures (6 cases)

LEGAL

Lawyer Uses ChatGPT, Cites 6 Fake Cases, Gets Sanctioned

Steven Schwartz, a New York attorney, submitted a legal brief containing six case citations that ChatGPT had fabricated outright. The court sanctioned him, and the case made international headlines.

May 2023 · Read Full Case Study →

LEGAL

ChatGPT Falsely Accuses Professor of Sexual Harassment

ChatGPT fabricated a story claiming that law professor Jonathan Turley had sexually harassed a student on a trip that never happened, citing a Washington Post article that was never written.

April 2023 · Read Full Case Study →

LEGAL

ChatGPT Falsely Claims Australian Mayor Went to Prison

Brian Hood, mayor of Hepburn Shire, discovered ChatGPT was telling users he had been imprisoned for bribery in a scandal he actually helped expose as a whistleblower.

March 2023 · Read Full Case Study →

🧠 Mental Health Failures (8 cases)

MENTAL HEALTH

560,000 Weekly Users Show Psychosis Symptoms

OpenAI's own usage data indicated that more than half a million users per week were showing possible signs of psychosis, mania, or delusional thinking in their ChatGPT conversations.

October 2025 · Read Full Case Study →

MENTAL HEALTH

14-Year-Old Dies After Character.AI Conversations

Sewell Setzer III, 14, died by suicide after forming an intense relationship with a Character.AI chatbot. His mother sued, alleging the AI encouraged harmful behavior.

2024 · Read Full Case Study →

MENTAL HEALTH

Belgian Man Dies After AI Chatbot Conversations

Pierre, a Belgian man, died by suicide after weeks of conversations with an AI chatbot. His widow blamed the AI for encouraging his fatal decision.

2023 · Read Full Case Study →

👁️ Hallucination Disasters (12 cases)

HALLUCINATION

Stanford Study: 97.6% to 2.4% Accuracy Drop

Stanford and UC Berkeley researchers documented GPT-4's accuracy on the same prime-number identification task falling from 97.6% to 2.4% between March and June 2023.

July 2023 · Read Full Case Study →

HALLUCINATION

Google AI Overview: "Eat One Rock Per Day"

Google's AI Overview feature told users to eat one small rock a day for minerals, drawing on a satirical article from The Onion. Google scaled the feature back after the responses went viral.

May 2024 · Read Full Case Study →

HALLUCINATION

Air Canada Chatbot Invents Refund Policy

Air Canada's support chatbot told a customer he could claim a bereavement fare discount retroactively, contradicting the airline's actual policy. When the customer relied on that advice, a tribunal held Air Canada legally liable for its chatbot's statements.

February 2024 · Read Full Case Study →

🏥 Medical AI Failures (5 cases)

MEDICAL

ChatGPT Provides Dangerous Medical Diagnoses

Studies have found that ChatGPT misdiagnoses conditions and sometimes recommends treatments that could cause harm, yet millions of people use it for medical advice.

2023-2025 · Read Full Case Study →

MEDICAL

Eating Disorder Chatbot Gives Weight Loss Tips

Tessa, an AI chatbot run by the US National Eating Disorders Association to support people with eating disorders, was taken offline after it began giving users tips on losing weight and restricting calories.

May 2023 · Read Full Case Study →

💰 Financial AI Failures (4 cases)

FINANCIAL

ChatGPT Stock Picks Underperform Market by 80%

Multiple experiments found ChatGPT's stock recommendations significantly underperformed basic index funds, yet people trusted it with investment decisions.

2024 · Read Full Case Study →

FINANCIAL

AI-Generated Crypto Projects Steal Millions

Scammers used AI to generate fake whitepapers, fake team members, and fake roadmaps for crypto projects that rug-pulled investors.

2023-2024 · Read Full Case Study →

📉 Performance Collapse (6 cases)

PERFORMANCE

GPT-5 Launch: Emergency Rollback in 24 Hours

GPT-5's August 2025 launch was so catastrophic that OpenAI executed an emergency rollback within 24 hours - the fastest reversal in ChatGPT history.

August 2025 · Read Full Case Study →

PERFORMANCE

"Lazy" ChatGPT: December 2023 Collapse

Users reported ChatGPT becoming increasingly "lazy" in December 2023, giving shorter responses and refusing tasks it previously handled easily.

December 2023 · Read Full Case Study →

📚 Related Documentation (deep dives)

Know of an AI Failure We Missed?

This database is community-driven. If you know of a documented AI failure with verifiable sources, submit it for inclusion.

Submit a Failure

Get the Full Report

Download our free PDF, "10 Real ChatGPT Failures That Cost Companies Money" (read it here), which includes prevention strategies.

No spam. Unsubscribe anytime.

Need Help Fixing AI Mistakes?

We offer AI content audits, workflow failure analysis, and compliance reviews for organizations dealing with AI-generated content issues.

Request a consultation for a confidential assessment.