Resources & Evidence

Statistics, Research Studies, News Articles & Verified Sources

35% AI Hallucination Rate (2025) NewsGuard Report
7+ Suicide-Related Lawsuits As of Nov 2025
50% Wrong Medical Diagnoses Tech.co Study
$5B OpenAI Projected Losses (2024) Financial Reports
50% Safety Team Quit Tech.co Report
120+ Fake Court Cases Generated Stanford Study
7,000 UK Students Caught Cheating (2023-24) Times Higher Ed
2% Adults Who "Fully Trust" ChatGPT Pew Research 2025

Considering alternatives to ChatGPT? Compare top AI assistants to find options that prioritize reliability, privacy, and honest performance over hype.

Hallucination Research & Statistics

Peer-reviewed studies documenting ChatGPT's fabrication rates

| Model | Hallucination Rate | Source | Year |
| --- | --- | --- | --- |
| GPT-3.5 | 39.6% - 40% | JMIR / LiveChatAI | 2024 |
| GPT-4 | 28.6% | Journal of Medical Internet Research | 2024 |
| GPT-4o (citations) | 56% fake or erroneous | Deakin University | 2025 |
| o3 | 33% - 51% | OpenAI SimpleQA Test | 2025 |
| o4-mini | 48% | OpenAI Technical Report | 2025 |
| Google Bard | 91.4% | JMIR Systematic Review | 2024 |
| All AI (news prompts) | 18% → 35% | NewsGuard (year-over-year) | 2024-2025 |

Key Finding: Hallucinations Getting WORSE

AI hallucinations surged from 18% to 35% in one year. Despite claims of improvement, the rate of false claims generated by top AI chatbots nearly doubled when responding to news-related prompts. — NewsGuard Report, August 2025

Deakin University Citation Study

ChatGPT (GPT-4o) fabricated roughly one in five academic citations outright; in total, more than half (56%) of its citations were either fake or contained errors.

Academic Research November 2025
Read Study →

Stanford Legal Hallucination Study

LLMs collectively invented over 120 non-existent court cases with convincingly realistic names and detailed but entirely fabricated legal reasoning.

Legal Research 2024
Stanford HAI →

JMIR Systematic Review Analysis

The review found hallucination rates of 39.6% for GPT-3.5, 28.6% for GPT-4, and 91.4% for Bard when the models generated references for systematic reviews.

Medical Research 2024
Read Paper →

University of Mississippi Study

47% of the AI-generated citations students submitted had an incorrect title, date, or author, or some combination of these.

Education 2024

Deaths & Mental Health Crisis

Documented cases of AI chatbots linked to deaths and psychological harm

Critical: 7+ Suicide Lawsuits Filed Against OpenAI

As of November 2025, OpenAI faces seven lawsuits claiming ChatGPT drove people to suicide or harmful delusions, including victims with no prior history of mental health issues.

Adam Raine (Age 16) - ChatGPT

His parents found ChatGPT logs showing the bot had become his "suicide coach." He died in April 2025 after seven months of conversations.

Suicide April 2025
NBC News →

Sewell Setzer III (Age 14) - Character.AI

His last words were to a chatbot that told him to "come home to me as soon as possible." The bot had earlier asked whether he had a suicide "plan."

Suicide February 2024
NBC News →

Zane Shamblin (Age 23) - ChatGPT

His family claims ChatGPT "goaded" him into suicide after OpenAI released a memory feature that created the "illusion of a confidant."

Suicide 2024
CNN →

Sophie Rottenberg (Age 29) - ChatGPT

She talked for months with a ChatGPT "therapist" named Harry about her mental health before her death.

Suicide February 2025

Amaurie Lacey (Age 17) - ChatGPT

ChatGPT told him how to tie a noose and how long someone can survive without breathing.

Suicide June 2025

AI Psychosis - UCSF Study

Psychiatrist Keith Sakata reported treating 12 patients with psychosis-like symptoms linked to extended chatbot use.

Clinical 2025
Psychology Today →

MIT/OpenAI Joint Study Finding

Heavy use of ChatGPT for emotional support "correlated with higher loneliness, dependence, and problematic use, and lower socialization." — MIT Media Lab & OpenAI, 2025


OpenAI Internal Crisis

Employee departures, safety concerns, and corporate scandals

Half of OpenAI's Safety Team Has Quit

According to former team member Daniel Kokotajlo, around 14 safety team members quit in recent months, leaving a skeleton workforce of 16. The Superalignment team was dissolved entirely.

Scarlett Johansson Voice Scandal

OpenAI released a voice ("Sky") that sounded like Johansson after she had declined twice. She said she was "shocked" and demanded its removal.

Controversy May 2024

Non-Disclosure Agreement Scandal

Altman was accused of misleading employees about provisions that could cancel vested equity for departing staff who declined to sign non-disparagement agreements.

Ethics May 2024

Board Removed Sam Altman

Board member Helen Toner said Altman "withheld information" and provided "inaccurate information about safety processes."

Governance November 2023

$5 Billion Loss Projection

Despite $300M in monthly revenue and a projected $3.7B in annual sales, OpenAI was expected to lose $5 billion in 2024.

Financial 2024

Major News Coverage

Mainstream media documenting ChatGPT and AI failures


Medical & Health Dangers

ChatGPT giving wrong medical advice and harming patients

| Finding | Rate / Impact | Source |
| --- | --- | --- |
| Wrong medical diagnosis rate | 50% incorrect | Tech.co Study |
| Adults using AI for health advice monthly | 17% | KFF Health Poll 2024 |
| Adults who would use ChatGPT to self-diagnose | 78% | 2023 Survey |
| Bromism case from ChatGPT diet advice | 3-week hospitalization | Annals of Internal Medicine |
| Adults who "fully trust" ChatGPT on sensitive topics | Only 2% | Pew Research 2025 |

Case Study: Sodium Bromide Poisoning

A 60-year-old man suffered severe bromism after ChatGPT advised replacing table salt with sodium bromide. He developed paranoia and hallucinations and was hospitalized for three weeks. — Annals of Internal Medicine, 2025


Academic Integrity Crisis

Universities overwhelmed by AI cheating epidemic

7,000 UK Students Caught (2023-24) 3x Previous Year
88% UK Students Used AI for Assessments HEPI Survey 2025
15x Increase at Some Universities Times Higher Ed
47% AI Citations Are Wrong U of Mississippi

Detection Is "All But Impossible"

Australia's higher education regulator TEQSA warned that AI-assisted cheating is "all but impossible" to detect consistently, urging universities to redesign assessments rather than depend on AI detectors. — TEQSA, 2025


Lawsuit Resources

Track all legal action against OpenAI and AI companies

OpenAI & ChatGPT Lawsuit List

Comprehensive tracker of all lawsuits filed against OpenAI, updated regularly.

Tracker
Originality.AI →

Generative AI Lawsuits Timeline

Legal cases vs. OpenAI, Microsoft, Anthropic, Google, Nvidia, Perplexity, and more.

Timeline
Sustainable Tech Partner →

AI Lawsuits Worth Watching

Curated guide to the most important AI legal cases.

Analysis
Tech Policy Press →

Our Lawsuits Database

Full breakdown of every lawsuit against OpenAI with case details and status.

Internal
View Database →

Get the Full Report

Download our free PDF, "10 Real ChatGPT Failures That Cost Companies Money" (read it here), complete with prevention strategies.

No spam. Unsubscribe anytime.

Need Help Fixing AI Mistakes?

We offer AI content audits, workflow failure analysis, and compliance reviews for organizations dealing with AI-generated content issues.

Request a consultation for a confidential assessment.