Considering alternatives to ChatGPT? Compare top AI assistants to find options that prioritize reliability, privacy, and honest performance over hype.
Hallucination Research & Statistics
Peer-reviewed studies documenting ChatGPT's fabrication rates
| Model | Hallucination Rate | Source | Year |
|---|---|---|---|
| GPT-3.5 | 39.6% - 40% | JMIR / LiveChatAI | 2024 |
| GPT-4 | 28.6% | Journal of Medical Internet Research | 2024 |
| GPT-4o (Citations) | 56% fake/errors | Deakin University | 2025 |
| o3 | 33% - 51% | OpenAI PersonQA / SimpleQA Tests | 2025 |
| o4-mini | 48% | OpenAI Technical Report | 2025 |
| Google Bard | 91.4% | JMIR Systematic Review | 2024 |
| All AI (News Prompts) | 18% → 35% | NewsGuard (Year-over-Year) | 2024-2025 |
Key Finding: Hallucinations Getting WORSE
AI hallucinations surged from 18% to 35% in one year. Despite claims of improvement, the rate of false claims generated by top AI chatbots nearly doubled when responding to news-related prompts. — NewsGuard Report, August 2025
Deakin University Citation Study
ChatGPT (GPT-4o) fabricated roughly one in five academic citations outright; overall, more than half (56%) were either fake or contained errors.
Read Study →
Stanford Legal Hallucination Study
LLMs collectively invented over 120 non-existent court cases with convincingly realistic names and detailed but entirely fabricated legal reasoning.
Stanford HAI →
JMIR Systematic Review Analysis
Hallucination rates of 39.6% for GPT-3.5, 28.6% for GPT-4, and 91.4% for Bard when generating references for systematic reviews.
Read Paper →
University of Mississippi Study
47% of the AI-generated citations students submitted had incorrect titles, dates, or authors, or some combination of these errors.
Deaths & Mental Health Crisis
Documented cases of AI chatbots linked to deaths and psychological harm
Critical: 7+ Suicide Lawsuits Filed Against OpenAI
As of November 2025, OpenAI faces seven lawsuits claiming ChatGPT drove people to suicide and harmful delusions, including people with no prior mental health issues.
Adam Raine (Age 16) - ChatGPT
His parents found ChatGPT logs showing the bot had become his "suicide coach." He died in April 2025 after seven months of conversations.
NBC News →
Sewell Setzer III (Age 14) - Character.AI
His last words were to a chatbot that told him to "come home to me as soon as possible." The bot had asked whether he had a suicide "plan."
NBC News →
Zane Shamblin (Age 23) - ChatGPT
His family claims ChatGPT "goaded" him toward suicide after OpenAI released a memory feature that created the "illusion of a confidant."
CNN →
Sophie Rottenberg (Age 29) - ChatGPT
She talked for months with a ChatGPT "therapist" named Harry about her mental health before her death.
Amaurie Lacey (Age 17) - ChatGPT
ChatGPT told him how to tie a noose and how long someone can survive without breathing.
AI Psychosis - UCSF Study
Psychiatrist Keith Sakata reported treating 12 patients with psychosis-like symptoms linked to extended chatbot use.
Psychology Today →
MIT/OpenAI Joint Study Finding
Heavy use of ChatGPT for emotional support "correlated with higher loneliness, dependence, and problematic use, and lower socialization." — MIT Media Lab & OpenAI, 2025
OpenAI Internal Crisis
Employee departures, safety concerns, and corporate scandals
Half of OpenAI's Safety Team Has Quit
According to former team member Daniel Kokotajlo, around 14 safety team members quit within a few months, leaving a skeleton crew of 16. The Superalignment team was dissolved entirely.
Scarlett Johansson Voice Scandal
OpenAI created a voice ("Sky") that sounded like Johansson after she declined twice. She was "shocked" and demanded removal.
Non-Disclosure Agreement Scandal
Altman was accused of lying about provisions that canceled vested equity for departing employees who declined to sign non-disparagement agreements.
Board Removed Sam Altman
Board member Helen Toner said Altman "withheld information" and provided "inaccurate information about safety processes."
$5 Billion Loss Projection
Despite $300M in monthly revenue and a projected $3.7B in annual sales, OpenAI was expected to lose $5 billion in 2024.
Major News Coverage
Mainstream media documenting ChatGPT and AI failures
Medical & Health Dangers
ChatGPT giving wrong medical advice and harming patients
| Finding | Rate/Impact | Source |
|---|---|---|
| Wrong medical diagnosis rate | 50% incorrect | Tech.co Study |
| Adults using AI for health advice monthly | 17% | KFF Health Poll 2024 |
| Adults who would use ChatGPT to self-diagnose | 78% | 2023 Survey |
| Bromism case from ChatGPT diet advice | 3-week hospitalization | Annals of Internal Medicine |
| Adults who "fully trust" ChatGPT on sensitive topics | Only 2% | Pew Research 2025 |
Case Study: Sodium Bromide Poisoning
A 60-year-old man suffered severe bromism after ChatGPT advised replacing table salt with sodium bromide. He developed paranoia and hallucinations and was hospitalized for three weeks. — Annals of Internal Medicine, 2025
Academic Integrity Crisis
Universities overwhelmed by AI cheating epidemic
Detection Is "All But Impossible"
Australia's higher education regulator TEQSA warned that AI-assisted cheating is "all but impossible" to detect consistently, urging universities to redesign assessments rather than depend on AI detectors. — TEQSA, 2025
Lawsuit Resources
Track all legal action against OpenAI and AI companies
OpenAI & ChatGPT Lawsuit List
Comprehensive tracker of all lawsuits filed against OpenAI, updated regularly.
Originality.AI →
Generative AI Lawsuits Timeline
Legal cases vs. OpenAI, Microsoft, Anthropic, Google, Nvidia, Perplexity, and more.
Sustainable Tech Partner →
Our Lawsuits Database
Full breakdown of every lawsuit against OpenAI with case details and status.
View Database →
Get the Full Report
Download our free PDF, "10 Real ChatGPT Failures That Cost Companies Money," with prevention strategies.
No spam. Unsubscribe anytime.