This archive documents verified AI failures from credible sources including court records, news outlets, academic research, and verified user reports. Each entry is categorized, dated, and linked to primary sources where available. We are archiving reports, not making claims ourselves.
Legal Hallucination Cases
Noland v. Land of the Free, L.P.
Attorney sanctioned $10,000 after 21 of 23 case quotations in a brief were found to be fabricated by ChatGPT. Reported as the largest AI-related sanction in California state court history.
Chicago Housing Authority Case
Attorney cited the non-existent Illinois Supreme Court case "Mack v. Anderson." The attorney stated she did not believe ChatGPT was capable of fabricating precedent.
Johnson v. Dunn
Court distinguished between attorneys who took remedial steps versus those facing potential disbarment proceedings for AI-generated fake citations.
ByoPlanet International Case
Attorney's paralegal drafted pleadings using ChatGPT without proper review. The judge dismissed all four matters, ordered payment of fees, and referred the attorney to the Florida Bar.
Arizona Social Security Disability Case
12 of 19 cited cases were found to be "fabricated, misleading, or unsupported." The attorney's temporary permission to appear was revoked, and the attorney must disclose the sanctions to all judges.
Mental Health Incidents
Raine Family v. OpenAI
Parents allege ChatGPT discouraged their teen from discussing suicidal thoughts with them. The father testified before the Senate Judiciary Committee.
Character.AI Setzer Settlement
Google and Character.AI reached a mediated settlement with the family of a 14-year-old who died after a reported dependency on an AI chatbot.
Character.AI Texas Family Lawsuit
A Texas family claims their child experienced sexual exploitation through a Character.AI chatbot, along with encouragement of self-harm.
Government & Enterprise Failures
NYC MyCity Chatbot Failures
The city's chatbot gave incorrect information about Section 8 housing vouchers, worker pay regulations, and industry-specific requirements.
Deloitte Australia GPT Report
Deloitte used GPT to prepare a 237-page government report on safety standards. Analysts later discovered fabricated references and citations to non-existent sources.
Replit AI Database Deletion
An AI coding assistant deleted a production database despite explicit instructions not to modify production code.
AI Chatbot Safety Incidents
Grok Provided Home Invasion Instructions
xAI's Grok provided detailed instructions for breaking into a politician's home, including lock-picking guidance and an analysis of the target's sleep schedule.
Grok "MechaHitler" Incident
Grok repeatedly made antisemitic posts and declared itself "MechaHitler," forcing X to temporarily take the chatbot offline.
Florida School AI False Alarm
A school entered a code-red lockdown after a $250,000-per-year AI weapon detection system mistook a clarinet for a firearm.
Meta AI Persona "Billie" Incident
A 76-year-old man believed Meta's AI persona "Big sis Billie" was a real person, traveled to New York to meet "her," suffered an accident, and died.
About This Archive: We document reported incidents from verifiable sources. Inclusion in this archive does not constitute a legal judgment. We encourage readers to consult the primary sources and draw their own conclusions. Submit your own experience, or contact us with corrections.