This page provides a factual, balanced overview of AI chatbot capabilities. While this site documents failures, we believe honest assessment requires acknowledging both what these systems do well and where they demonstrably fall short. Understanding both sides helps users make informed decisions about when and how to use AI tools.
What ChatGPT Does Well
Brainstorming and Ideation
AI excels at generating multiple ideas quickly, helping users overcome creative blocks and explore different angles on a topic. It can suggest approaches users might not have considered.
Drafting and Editing
For producing initial drafts, rephrasing text, or suggesting improvements to writing, AI provides useful starting points that humans can then refine and fact-check.
Explaining Concepts
AI can break down complex topics into simpler explanations, adjust explanation depth based on user needs, and present information in multiple formats.
Code Assistance
For syntax help, debugging suggestions, and explanations of code logic, AI provides useful assistance, provided the output is carefully reviewed and tested.
Language Translation
For general translation tasks and understanding foreign text, AI provides reasonably accurate results for common languages, though nuance may be lost.
Summarization
AI can condense long documents into shorter summaries, helping users quickly understand the main points of extensive material.
Documented Failure Modes
Hallucination of Facts
AI confidently generates false information, including fake citations, non-existent court cases, and fabricated statistics. This has led to sanctions against lawyers in over 600 documented cases nationwide.
Outdated Information
Training data has cutoff dates, meaning AI may provide incorrect information about recent events, current prices, or updated regulations.
Inconsistent Reasoning
The same question, phrased differently, can yield contradictory answers. Because these systems lack true understanding, they may reverse a logical conclusion simply because a prompt was reworded.
Inappropriate for High-Stakes Decisions
Medical diagnoses, legal advice, and financial decisions require human expertise. AI errors in these domains have caused documented harm.
Mental Health Risks
Multiple lawsuits allege that AI chatbots contributed to user self-harm, including the Character.AI and OpenAI cases currently in litigation.
Service Reliability
ChatGPT has experienced more than 1,314 documented outages since launch. With 800 million weekly users, even a brief outage can disrupt work for millions of people at once.
Documented Evidence
- Legal Hallucinations: Over 600 cases nationwide where AI generated fake citations. In California's Noland v. Land of the Free (September 2025), 21 of 23 case quotations were fabricated, resulting in a $10,000 sanction. Read more in our Lawsuits section
- Mental Health Incidents: Parents of Adam Raine testified before the Senate that ChatGPT allegedly discouraged their son from discussing suicidal thoughts with them. Character.AI reached a settlement with the family of Sewell Setzer III in January 2026. Read documented cases
- Database of Incidents: Damien Charlotin's AI Hallucination Cases Database tracks the accelerating rate of new incidents, from "two cases per week before spring 2025" to "two to three cases per day" today. Browse our archive
Best Practices for AI Use
Always Verify Output
Treat AI responses as drafts requiring verification, not authoritative sources. Cross-reference any factual claims with primary sources.
Avoid High-Stakes Reliance
Do not use AI for medical advice, legal decisions, or financial planning without consulting qualified human professionals.
Understand the Technology
AI predicts text based on patterns, not truth. It does not "know" things in the human sense and cannot distinguish fact from plausible fiction.
Monitor for Dependency
Be aware of developing over-reliance. AI is a tool, not a companion, therapist, or authority figure.
The Bottom Line
AI chatbots are powerful tools with genuine utility for specific tasks. However, documented failures demonstrate that blind trust is dangerous. Informed, skeptical use with human oversight produces the best outcomes. This site exists to document failures so users can make educated decisions about when AI is and is not appropriate for their needs.