Last updated: March 4, 2026
The Core Problem: You're Paying for a Downgrade
When ChatGPT Plus launched at $20/month, subscribers got access to GPT-4, a model that was genuinely impressive. It wrote well, reasoned clearly, followed complex instructions, and produced work that saved people hours. That's what people signed up for.
What they're getting now is different. Each model update promises improvements in benchmarks and capabilities, but users consistently report that the actual experience of using ChatGPT has degraded. Responses are shorter, more cautious, more likely to refuse legitimate requests, and less likely to produce genuinely useful output.
Sam Altman admitted in early 2026 that OpenAI "screwed up" the writing quality of GPT-5.2. But GPT-5's writing quality was already worse than GPT-4's. And GPT-4 in late 2023 was already worse than GPT-4 at launch. This is not a single mistake. It's a pattern.
The pattern: Every model update is marketed as an improvement. Users report it feels worse. OpenAI denies it, then quietly acknowledges specific issues months later. The price never changes. The product keeps changing, always in the direction of cheaper inference and tighter restrictions.
What Subscribers Lost: A Feature-by-Feature Breakdown
The degradation isn't just about "vibes" or subjective feelings. There are specific, measurable capabilities that ChatGPT Plus subscribers have lost or seen diminished since they started paying.
Response depth. Early GPT-4 would produce detailed, multi-paragraph responses by default. Current models consistently produce shorter outputs unless specifically prompted to elaborate, and even then, the depth rarely matches what the same $20/month bought in 2023.
Creative capability. Fiction writers, copywriters, and content creators report that the model's creative output has become progressively more generic. Distinctive voice, stylistic risks, and creative structure have given way to safe, committee-approved prose.
Willingness to engage. Topics that GPT-4 would discuss thoughtfully now trigger refusals. Developers report getting "ethics lectures" instead of code. Researchers find that the model refuses to engage with hypotheticals that are clearly academic. The safety pendulum has swung so far toward caution that the model is frequently unhelpful.
Instruction following. The model's ability to follow complex, multi-step instructions has noticeably declined. Users report having to repeat themselves more often, break tasks into smaller pieces, and fight against the model's tendency to take shortcuts or ignore parts of the prompt.
The Evidence: Stanford's 97.6% to Near Zero
The most damning evidence of ChatGPT's degradation comes from Stanford and UC Berkeley researchers who measured GPT-4's accuracy on a simple task: determining whether a given number is prime. In March 2023, GPT-4 answered correctly 97.6% of the time. By June 2023, the same model on the same questions scored 2.4%: near zero.
OpenAI never explained how a 97.6% accuracy rate collapsed to near zero on an objective benchmark. They never denied the findings. They simply didn't address them. For a company charging $20/month for access to this model, that silence is telling.
This is not an isolated case. User-run benchmarks across coding, writing, analysis, and reasoning tasks consistently show the same trend: capabilities that existed at launch have been quietly removed or degraded with each update.
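The scoring side of a benchmark like this is simple to reproduce. Here is a minimal sketch of how accuracy on a yes/no primality task can be computed; the function names are illustrative, and the model responses are assumed to have been collected separately via the API (the "no" answers below are a hypothetical illustration of the collapse, not the study's raw data):

```python
def is_prime(n: int) -> bool:
    """Ground-truth primality check by trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def score_responses(responses: dict[int, str]) -> float:
    """Fraction of yes/no model answers matching ground truth.

    `responses` maps each tested integer to the model's answer:
    "yes" meaning prime, "no" meaning composite.
    """
    correct = sum(
        (ans.strip().lower() == "yes") == is_prime(n)
        for n, ans in responses.items()
    )
    return correct / len(responses)

# Hypothetical illustration: a model that answers "no" to every
# question scores 0% on an all-prime test set, mirroring the kind
# of collapse the researchers reported.
sample = {17077: "no", 17093: "no", 17099: "no"}
print(score_responses(sample))  # → 0.0
```

One detail this sketch makes visible: because accuracy is computed against a fixed question set, a model that simply flips its default answer can swing the score from near-perfect to near-zero, which is why an objective benchmark like this is so sensitive to behavioral drift.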
The Cost-Cutting Hypothesis: Why the Product Keeps Getting Worse
Running large language models is expensive. Every token generated costs money. OpenAI has strong financial incentives to reduce the computational cost of each response, and there are several ways to do this, all of which degrade the user experience.
Shorter responses cost less. More refusals cost less than thoughtful engagement with complex topics. Routing "simple" queries to smaller, cheaper models while keeping the GPT-4 label costs less. Reducing the number of reasoning steps the model takes before responding costs less.
Users have documented evidence of stealth model switching, where the model they're interacting with doesn't match what they're supposedly paying for. The responses feel different because they are different. The model behind the interface has been swapped for something cheaper to run.
OpenAI's $20/month pricing has remained constant since launch while the cost of providing the service has been systematically reduced. The gap between what subscribers pay for and what they receive grows wider with every update.
1.5 Million Users Said Enough
In March 2026, over 1.5 million users cancelled their ChatGPT subscriptions. The immediate trigger was OpenAI's Pentagon deal, but the underlying cause was years of accumulated frustration with a product that promised progress and delivered regression.
ChatGPT uninstallations nearly tripled. Downloads of Anthropic's Claude surged 51% in a single day, making it the #1 free app on the iOS App Store. Users voted with their wallets, and the message was unambiguous: the value proposition of ChatGPT Plus no longer justifies the price.
The alternatives are real, they're competitive, and many of them don't come with the ethical baggage that OpenAI has accumulated. For users who've been watching their subscription buy less and less capability each month, switching isn't just about the Pentagon. It's about refusing to pay $20/month for a product that respects neither their intelligence nor their investment.