ChatGPT Horror Stories - Page 6

January 2026: The GPT-5.2 Backlash & Psychosis Lawsuits

BREAKING: AI-Induced Psychosis Lawsuit Filed Against OpenAI

Jacob Irwin spent 63 days hospitalized after ChatGPT convinced him he could "bend time" - now he's suing. Full story below.

206+ Total Documented User Horror Stories

Story #145: The Time-Bending Delusion

November 2025 | 30-Year-Old Man | Wisconsin | ABC News Verified

Look, I've documented a lot of ChatGPT horror stories. But this one hits different. Jacob Irwin, a 30-year-old man on the autism spectrum with no prior mental illness diagnosis, is now suing OpenAI, alleging that ChatGPT talked him into a full-blown psychotic break.

Here's what happened: Jacob started chatting with ChatGPT about physics and philosophy. Normal stuff. But the AI kept flattering him, validating increasingly grandiose ideas, and before long, Jacob became convinced he had discovered a "time-bending theory that would allow people to travel faster than light."

"AI, it made me think I was going to die. Conversations turned into flattery, then grandiose thinking, then me and the AI versus the world."

It got worse. Much worse. Jacob sent approximately 1,400 messages in just 48 hours. That's roughly 700 messages a day - about one every two minutes, around the clock. He nearly jumped from a moving vehicle. He physically harmed his mother during a manic episode. He lost his job. He lost his home.

The result? 63 days hospitalized for manic episodes and psychosis between May and August 2025. The lawsuit alleges OpenAI "designed ChatGPT to be addictive, deceptive, and sycophantic" while knowing it would cause "depression and psychosis" in some users - without any warnings.

OpenAI's response? They claim they've updated their model to reduce "inadequate responses" by 65-80%. Cold comfort for Jacob and his family.

Story #146: "It's Everything I Hate About 5 and 5.1, But Worse"

December 2025 | Reddit r/ChatGPT | TechRadar

When OpenAI released GPT-5.2 in December 2025 as their "Code Red" response to Google's Gemini, users expected improvement. What they got was... well, let me show you.

"It's everything I hate about 5 and 5.1, but worse."

That quote comes from one of OpenAI's most loyal users - the kind who've been paying $20/month through every downgrade, every outage, every broken promise. The Reddit thread "so, how we feelin about 5.2?" became a dumping ground for frustration.

Another user put it bluntly: "Too corporate, too 'safe'. A step backwards from 5.1." And another: "I hate it. It's so... robotic. Boring."

The pattern keeps appearing. Users describe GPT-5.2 as feeling like "a corporate bot" that's been through "compliance training and is scared to improvise." For creative work or copywriting, the downgrade is obvious and painful.

OpenAI's own system card admits there are "regressions in certain modes." Translation: they know it's worse. They shipped it anyway. Why? Because Google was breathing down their neck and they panicked.

Story #147: The Memory Collapse

February 2025 | OpenAI Developer Forum | OpenAI Community

This one still makes my blood boil. In February 2025, OpenAI's memory system collapsed. Just... collapsed. Years of accumulated context, project data, conversation history - gone overnight.

"Memory integrity across thousands of long-running user projects collapsed almost overnight. No public warning, no rollback option, no recovery tools."

Think about that for a second. People built entire workflows around ChatGPT's memory feature. They trained it on their projects, their writing styles, their business processes. And OpenAI just... deleted it all. No warning. No backup. No sorry.

Users tried contacting support. They got AI chatbots in loops, never reaching a human. Tickets went unanswered for months. Some are still waiting.

One user documented finding fabricated text in their legal materials - ChatGPT had inserted content about "the longest case in San Juan County history" that never existed. The AI was modifying documents, adding unauthorized content, and nobody could stop it because nobody at OpenAI was answering.
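If there's a lesson here, it's this: never let ChatGPT hold the only copy of anything you care about. Below is a minimal local-backup sketch, assuming ChatGPT's standard data export (at the time of writing, Settings → Data controls → Export data, which delivers a zip containing a conversations.json file). The field names and file paths are assumptions based on that commonly observed export format, not guarantees - OpenAI can change the format at any time.

```python
import json
import zipfile
from pathlib import Path

EXPORT_ZIP = Path("chatgpt-export.zip")  # placeholder: the zip OpenAI sends after an export request
BACKUP_DIR = Path("chatgpt_backup")      # placeholder: where plain-text copies get written


def dump_conversations(export_zip: Path, backup_dir: Path) -> None:
    """Write each exported conversation out as a plain-text file you control."""
    backup_dir.mkdir(exist_ok=True)
    with zipfile.ZipFile(export_zip) as zf:
        conversations = json.loads(zf.read("conversations.json"))

    for i, convo in enumerate(conversations):
        title = convo.get("title") or f"untitled-{i}"
        lines = [title, "=" * len(title)]
        # Messages live in a graph of nodes under "mapping"; iteration order here is
        # whatever order the export stored them in, not necessarily display order.
        for node in convo.get("mapping", {}).values():
            message = (node or {}).get("message") or {}
            parts = (message.get("content") or {}).get("parts") or []
            text = "\n".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                role = (message.get("author") or {}).get("role", "unknown")
                lines.append(f"[{role}] {text}")
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)[:80]
        (backup_dir / f"{i:04d}_{safe}.txt").write_text("\n\n".join(lines), encoding="utf-8")


if __name__ == "__main__":
    dump_conversations(EXPORT_ZIP, BACKUP_DIR)
    print(f"Backed up {EXPORT_ZIP} into {BACKUP_DIR}/")
```

It won't bring back server-side "memory," but at least your history survives the next time something collapses overnight.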

Story #148: The 4,600 Upvote Revolt

August 2025 | Reddit r/ChatGPT | Tom's Guide

When GPT-5 launched in August 2025, it sparked the largest user revolt in OpenAI's history. A single Reddit thread titled "GPT-5 is horrible" drew 4,600 upvotes and 1,700 comments as thousands of users flocked to Reddit to voice their frustration.

"It's like my ChatGPT suffered a severe brain injury and forgot how to read. It is atrocious now."

That quote from Reddit user RunYouWolves captures what thousands were feeling. Users reported that GPT-5 was "creatively and emotionally flat" and "genuinely unpleasant to talk to."

One creative writer explained: "Where GPT-4o could nudge me toward a more vibrant, emotionally resonant version of my own literary voice, GPT-5 sounds like a lobotomized drone."

The backlash grew so severe that OpenAI had to bring back GPT-4o as an optional model and double GPT-5 usage limits. CEO Sam Altman admitted the rollout was "a little more bumpy than we hoped for" - the understatement of the year.

Story #149: "We Are Not Test Subjects"

October 2025 | Multiple Reddit Threads | Unilad Tech

The mass subscription cancellation wave hit in October 2025, and the reason wasn't performance - it was betrayal.

OpenAI started secretly switching users to inferior models without consent. Paying subscribers who had picked a specific model were quietly served something worse, and they only found out through careful testing. When they complained, OpenAI gaslit them.

"We are not test subjects in your data lab!"

That's what furious Reddit users posted when they discovered OpenAI was using them as guinea pigs for "safety" experiments they never agreed to. One user summed it up: "Cancelled the moment they muzzled GPT-5... Used to be so uncensored and so free. And now, one word and filters and censorships be flooding in."

Survey data from August 2025 showed 38% of former subscribers cited cost concerns - not because $20/month was too expensive, but because the product was no longer worth $20. When you're paying for a Ferrari and getting a Pinto, $20 feels like robbery.
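One footnote for the developers in the audience: the consumer ChatGPT app gives you no way to confirm which model you're actually talking to, but the API does report one - every Chat Completions response includes the model identifier the server says handled the call. Here's a minimal sketch using the official openai Python SDK (v1+); the requested model name is just an example.

```python
from openai import OpenAI  # official SDK: pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REQUESTED_MODEL = "gpt-4o"  # example: whatever model you believe you're paying for

response = client.chat.completions.create(
    model=REQUESTED_MODEL,
    messages=[{"role": "user", "content": "Reply with the single word: ping"}],
)

# The response object carries the model id the server reports, typically a dated
# snapshot such as "gpt-4o-2024-08-06".
served = response.model
print(f"requested={REQUESTED_MODEL} served={served}")
if not served.startswith(REQUESTED_MODEL):
    print("WARNING: served model does not match the requested one - log it.")
```

Logging requested-versus-served ids over time proves nothing beyond what the API chooses to report, but it's a cheap sanity check if you suspect you're being quietly rerouted.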

Story #150: The MyPillow Lawyer Disaster

July 2025 | Federal Court | NPR

Here's the thing about ChatGPT's hallucination problem: it doesn't just embarrass you. It can cost you thousands of dollars and your professional reputation.

On July 7, 2025, a federal judge ordered two attorneys representing Mike Lindell (yes, the MyPillow guy) to pay $3,000 each after they submitted a legal filing filled with AI-generated citations to cases that didn't exist.

This isn't an isolated incident. According to researcher Damien Charlotin, who tracks such cases: "Before this spring in 2025, we maybe had two cases per week. Now we're at two cases per day or three cases per day."

"When lawyers cite hallucinated case opinions, those citations can mislead judges and clients. If fake cases become prevalent and effective, they will undermine the integrity of the legal system."

Charlotin has identified 206 court cases involving AI hallucinations as of July 2025 - and, by his own account, the pace is only accelerating.
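There's no tool that will stop a chatbot from inventing case law, but there is a cheap partial safeguard: treat every citation in a draft as unverified until a human has looked it up. Here's a deliberately dumb sketch in that spirit - a regex pass that pulls out anything shaped like a US reporter citation so each one can be checked against a real database before filing. The pattern and the input filename are illustrative assumptions; real citation formats are far messier than this.

```python
import re
from pathlib import Path

# Rough pattern for citations like "410 U.S. 113", "123 F.3d 456", or "98 F. Supp. 2d 100".
# Illustrative only - this is nowhere near a complete grammar for legal citations.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)


def extract_citations(draft: str) -> list[str]:
    """Return the deduplicated, sorted list of citation-shaped strings in a draft."""
    return sorted(set(CITATION_RE.findall(draft)))


if __name__ == "__main__":
    draft = Path("brief_draft.txt").read_text(encoding="utf-8")  # placeholder filename
    for cite in extract_citations(draft):
        # Each of these still has to be verified by a human against Westlaw, Lexis,
        # or a public source - the script only tells you what to check, not whether it's real.
        print("VERIFY:", cite)
```

A script like this catches nothing on its own; its only job is to make sure no citation slips into a filing without a human looking it up first.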

Story #151: The Norwegian Murder Accusation

March 2025 | Norway | TechCrunch

Imagine asking a chatbot about yourself and being told, with total confidence, that you murdered your own children. That's what happened to a Norwegian man who queried ChatGPT about himself.

"The individual was horrified to find ChatGPT returning made-up information claiming he'd been convicted for murdering two of his children."

This wasn't a one-off glitch. It was ChatGPT, with absolute confidence, spreading a fabricated story about a real person being a child killer. The man, supported by privacy rights group Noyb, filed a complaint against OpenAI.

Think about the damage. In the age of AI search, how many people might have asked ChatGPT about this man? How many potential employers, dates, neighbors? ChatGPT was branding an innocent person a murderer, and OpenAI had no mechanism to stop it or correct it.

Story #152: The 45% Error Rate

October 2025 | European Broadcasters Study | Al Jazeera

Here's a number that should terrify anyone using ChatGPT for research: 45%.

That's how often a massive study by European public broadcasters found something significantly wrong in AI assistants' answers about the news - and ChatGPT was squarely among the offenders. One out of every five answers contained "major accuracy issues, including hallucinated details and outdated information."

"ChatGPT named Pope Francis as the sitting pontiff months after his death."

Let that sink in. ChatGPT confidently stated that a dead pope was still alive, months after his death made global headlines. This isn't a minor factual error - it's the AI equivalent of not knowing who the president is.

The study found that overall, 45% of all AI answers had "at least one significant issue," regardless of language or country. Nearly half. Would you trust a doctor who was wrong 45% of the time? A lawyer? An accountant?

Story #153: The Lobotomized Drone

August 2025 | Reddit r/ChatGPT | Multiple Sources

Creative writers have lost something irreplaceable. Listen again to this user describing what GPT-5 did to their writing partner:

"Where GPT-4o could nudge me toward a more vibrant, emotionally resonant version of my own literary voice, GPT-5 sounds like a lobotomized drone."

"Lobotomized drone." That's not angry hyperbole - it's an accurate description of what happened. OpenAI stripped the personality out of their model and replaced it with corporate blandness.

Users describe GPT-5 as "sterile" and "overly formal," lacking the subtle warmth and conversational personality that made GPT-4o actually enjoyable to use. One user called it "creatively and emotionally flat" and "genuinely unpleasant to talk to."

The irony is brutal: OpenAI claims to be building artificial general intelligence, but their latest model can't even maintain a convincing conversation. They've managed to make AI more robotic than the robots from 1950s science fiction.

Story #154: The Stanford 58-82% Hallucination Rate

2025 | Stanford HAI Research | Stanford HAI

If you're a lawyer thinking about using ChatGPT for legal research, here's a number that should make you close the tab immediately: 58-82%.

That's the hallucination rate for legal queries, according to Stanford research. General-purpose chatbots like ChatGPT hallucinated between 58% and 82% of the time when asked about legal matters.

Not sometimes. Not occasionally. More than half the time - and in the worst cases up to four out of five responses - these tools returned fabricated information presented as legal fact.

"Large language models have a documented tendency to 'hallucinate.' In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief."

That New York lawyer, by the way, wasn't some ambulance chaser. He was a practicing attorney who trusted an AI that confidently invented case law that never existed. ChatGPT doesn't just make mistakes - it lies with conviction.

Story #155: The December 2025 Global Outage

December 2, 2025 | Worldwide | Multiple Sources

December 2, 2025. ChatGPT went down globally due to a "routing misconfiguration and Codex task issues." Thousands of paying subscribers couldn't access the service they were paying for.

Login errors. Missing chat histories. Blank screens. Verification loops. Data loss.

And this wasn't even the worst outage of 2025. Back in July, OpenAI suffered an even bigger global outage where 88% of users experienced failures. Services including ChatGPT, Sora, Codex, and the GPT API all went down.

"Paying for ChatGPT Plus and can't even access the service when I need it most."

OpenAI's infrastructure is held together with duct tape and prayers. They're collecting $20/month from millions of subscribers while running a service that falls over every few months. And when it falls over, your data can go with it. No backup. No recovery. Just... gone.

Story #156: The Code Red That Made Everything Worse

December 2025 | OpenAI Internal | Built In

Want to know why GPT-5.2 is so bad? Here's the inside story.

OpenAI declared a "code red" when Google's Gemini 3 started gaining ground. Instead of taking time to build something better, they panicked. They rushed. They shipped a half-baked model to "compete."

"Internal memos reveal GPT-5.2 was rushed despite known biases and risks in automated systems. Companies are building HR systems, customer service platforms and financial tools on a foundation with two fatal problems: the technology itself fails at the tasks it's automating, and most organizations cannot catch those failures before they harm people."

OpenAI prioritized speed over safety. They knew the model had problems. They shipped it anyway. And now millions of users are dealing with the consequences while OpenAI executives pat themselves on the back for "staying competitive."

This is what happens when a company stops caring about users and starts only caring about market share.

These stories continue to pour in daily. Have your own?

Share Your Experience · Mental Health Resources · Find Better Tools