Verdict, April 21, 2026

Yes, ChatGPT has gotten worse. The evidence is in five categories.

Those categories are paying-user testimony, peer-reviewed research, mainstream tech-press reporting, OpenAI's own release-note and executive admissions, and observable market behavior (cancellation campaigns, App Store migration to Claude). No single category is conclusive on its own. Together, they are overdetermined.

This page exists because "has chat gpt gotten worse" is a question millions of users are typing into Google every month. Most of the results they get are blog posts with a hedging answer. This is the direct-answer page, with the receipts in the order they were produced.

The User Evidence

The most upvoted post on r/ChatGPT in late 2025 was a cancellation post. u/l30's "Just Cancelled my ChatGPT Subscription" hit 1,952 upvotes and 600 comments. The dominant reply pattern was users saying they had already made the same call or were about to. That was the first signal that a critical mass of power users had decided the answer was yes.

"I think I'm done with ChatGPT unless they drastically upgrade their offering. Gemini and Claude have been absolutely blowing me away the last few weeks. Now when I try to go back it's honestly a bit painful." - u/l30, r/ChatGPT, December 2025 - 1,952 upvotes

The GPT-5 launch megathread on r/ChatGPT attracted nearly 5,000 comments in the first 24 hours of release. The OpenAI Community Forum thread titled "5.2 regressed behavior: bad memory, hallucinates" collected hundreds of independent reports of the same failure mode. The OpenAI Developer Community thread "Hallucinations and headaches using GPT-5 in production" has become the go-to citation for enterprise procurement teams who want to argue against continuing a GPT-5 deployment.

"It's like my chatGPT suffered a severe brain injury and forgot how to read. It is atrocious now." - u/RunYouWolves, r/ChatGPT, GPT-5 launch megathread - cited by Tom's Guide and TechRadar

The Research Evidence

The Stanford sycophancy study is the cleanest peer-reviewed input. Researchers tested how frequently GPT-5-class models agree with false premises embedded in user prompts. The agreement rates were, in the authors' words, "alarming," and the study recommended against deployment in high-stakes contexts until the sycophancy behavior is addressed. Our coverage of the paper is on the Stanford study page.

The BBC's independent news-accuracy evaluation of GPT-5 (our coverage) found unacceptable error rates on basic factual questions about current events, with fabricated citations in a material percentage of responses. The Elephas AI "brain rot" research documented measurable loss of stylistic range and factual anchoring in models trained on post-2023 synthetic-heavy data.

95% - accuracy collapse on specific task classes documented by Stanford and independent evaluators in 2025-2026 benchmarks.

The Press Evidence

This is where it becomes impossible to write the question off as a Reddit pile-on. TechRadar, Tom's Guide, MIT Technology Review, The Week, Futurism, Windows Central, NPR, XDA Developers, Ars Technica, Fast Company, The New York Times, and Silicon Republic have all published coverage in 2025-2026 documenting ChatGPT quality regression, cancellations, or the user-reported harms downstream of it. That list is not comprehensive; it is what shows up in the first search-result pages for the most common variants of the question.

TechRadar's headline on GPT-5.2 read: "ChatGPT 5.2 branded a step backwards by disappointed early users." Tom's Guide's read: "ChatGPT 5 users are not impressed - here's why it feels like a downgrade." Silicon Republic's read: "Altman admits GPT-5 currently 'way dumber' amid rough roll-out." When three major outlets write the same headline, the headline has already become the story.

The OpenAI Admission Evidence

Sam Altman went on multiple podcasts in the weeks after the GPT-5 launch and conceded that OpenAI "screwed up" the rollout and "shouldn't have rushed" it. He described the post-launch model as "way dumber" than the team had expected users would experience, framed the regression as a deployment issue that would be addressed, and promised future improvements. The specific quotes, with dates and podcast sources, are archived on our Altman admissions page.

The point of citing Altman here is not that he is uniquely credible. The point is that when a CEO says, on the record, that his product is "way dumber" than promised, the question "has the product gotten worse" is answered by the CEO and no longer requires further defense.

The Market Evidence

The QuitGPT campaign passed 2.5 million pledged cancellations by mid-March 2026. ChatGPT mobile app uninstalls spiked roughly fourfold on a single Saturday during the peak of the cancellation news cycle. Claude hit the number-one slot on the Apple U.S. App Store in March, the first time an AI assistant overtook ChatGPT at that ranking. We cover each of these on dedicated pages.

When users cancel at this rate, the question is answered in the same way a restaurant that empties out every night answers a critic's review. Aggregate behavior is the most reliable verdict a consumer market produces.

The Five Most-Asked Follow-Up Questions

When did ChatGPT start getting worse?

The complaints predate GPT-5, but the durable backlash began with the GPT-5 launch in August 2025. r/ChatGPT's GPT-5 megathread, the Tom's Guide and TechRadar coverage, and Altman's own podcast admissions all converge on that window. Every subsequent point release (5.1, 5.2, 5.2-Thinking) has reinforced the pattern.

Is it actually getting worse or am I prompting it wrong?

Both can be true, but the "prompt better" advice stopped working around GPT-5.1. Paying users with years of prompt-engineering experience are reporting that workflows they built carefully in 2024 no longer produce reliable output in 2026. When the prompt hasn't changed and the output has degraded, the prompt is not the variable.

Did OpenAI do this on purpose?

"On purpose" is the wrong frame. The decisions that produced the regression (RLHF optimization, synthetic-data training, cost-driven routing to cheaper inference, liability-driven safety-filter accretion) were each defensible on their own terms. The cumulative effect on user experience was not the goal, but it was the predictable outcome. We walk through the mechanisms on our AI models getting dumber research page.

What should I switch to if ChatGPT is this bad?

The dominant answer from the cancellation testimonials is Claude. The technical-user second choice is Gemini. For specific workflows (writing, coding, research), we maintain side-by-side comparisons: ChatGPT vs Claude 2026 and ChatGPT vs Gemini 2026. Neither competitor is perfect. Both are currently better than GPT-5.2 for most paying-user workflows.

Will ChatGPT get better again?

OpenAI has the engineering talent and the capital to ship a credibly better model. Whether they will do it before a regulatory event or a procurement-side revolt forces their hand is the open question. The underlying mechanisms (see above) are industry-wide, not OpenAI-specific, which means the next release has to do more than patch; it has to reverse the training-incentive drift that produced the regression.

The Bottom Line

The word "worse" in the original search query is doing most of the work. Worse at what, and for whom, both matter. ChatGPT in April 2026 is better than ChatGPT in April 2022 on almost every axis you can benchmark. It is worse than ChatGPT in April 2024 on the specific axes paying subscribers care most about: long-context coherence, personality, writing nuance, willingness to give direct answers, refusal rate on harmless questions, and ability to solve the kind of problem a Plus subscriber is paying $20 a month to solve.

That is what "has ChatGPT gotten worse" means when a paying subscriber types the question into Google. Measured against that user's own 2024 workflow, the answer is yes. The record says yes. The research says yes. The press says yes. The CEO says yes. The users say yes by cancelling. If anyone is still saying no, it is the marketing department, and the marketing department does not have to use the product.