GPT-5.2
"The Most Disappointing AI Release of 2025"

The GPT-5 Disaster: What Went Wrong

OpenAI launched GPT-5 with massive fanfare, promising revolutionary improvements. Instead, users got a model that many say is worse than GPT-4 in key areas. Here's the complete breakdown of everything wrong with GPT-5.

Critical Problems

CRITICAL: Increased Hallucinations

Despite claims of improved accuracy, GPT-5 hallucinates more confidently than ever. Users report fabricated citations, invented statistics, and completely made-up information delivered with absolute certainty.

CRITICAL: "Getting Dumber" Over Time

Users consistently report that GPT-5 provides worse answers now than at launch. This is part of a broader pattern of ChatGPT degradation. Tasks that worked perfectly weeks ago now fail. OpenAI denies any changes, but the user experience tells a different story.

CRITICAL: Worse at Following Instructions

GPT-5 frequently ignores explicit instructions, adds unwanted content, removes requested elements, and "helps" in ways users specifically asked it not to. The model seems to have its own agenda.

MAJOR: Lazy Responses

Users report GPT-5 frequently gives abbreviated, incomplete answers with phrases like "etc." or "and so on" instead of completing the requested work. The model appears to be conserving compute at user expense.

MAJOR: Coding Regression

Developers report that GPT-5 produces buggier code than GPT-4, loses context from earlier in a conversation, and frequently suggests deprecated or incorrect solutions. See our AI coding quality decline analysis.

MAJOR: Memory Issues

GPT-5 "forgets" instructions given earlier in the same conversation, contradicts itself within a single response, and fails to maintain context that GPT-4 handled easily.

MODERATE: Excessive Refusals

GPT-5 refuses to help with increasingly mundane requests, citing safety concerns for completely benign queries. Users report having to "trick" the AI into helping with legitimate tasks.

MODERATE: Slow Response Times

Despite promises of faster inference, GPT-5 is often slower than GPT-4, especially during peak hours. Users paying $20/month for Plus expect better performance.

User Complaints (Real Quotes)

"I've been using ChatGPT since launch. GPT-5 is genuinely worse at everything I use it for. It's like they made it dumber on purpose."
- Reddit r/ChatGPT, January 2026
"Asked GPT-5 for the same coding task I've done 100 times with GPT-4. It gave me completely broken code and argued with me when I pointed out the errors."
- Developer on Twitter/X, January 2026
"The hallucinations are out of control. It cited three academic papers that DON'T EXIST with such confidence I almost cited them in my research."
- Graduate student, Reddit, January 2026
"Paying $20/month for a service that gets worse every update. At this point I'm using Claude for everything important."
- ChatGPT Plus subscriber, January 2026
"GPT-5 just told me it couldn't help me write a birthday card for my mom because it 'could be used to manipulate emotions.' What???"
- Twitter/X user, January 2026

GPT-5 vs GPT-4: What Changed?

Worse:

  • Instruction following
  • Coding accuracy
  • Long-form content quality
  • Contextual memory
  • Hallucination rate (higher)
  • Response completeness
  • Refusal rate (higher)

Same or Marginally Better:

  • Multimodal capabilities (images)
  • Voice mode features
  • Some reasoning benchmarks (disputed)

Why Did This Happen?

Several theories from AI researchers and industry insiders:

  • Cost cutting: OpenAI may be using cheaper compute, resulting in lower quality
  • Model collapse: Training on AI-generated data is degrading quality
  • RLHF overtuning: Safety training has made the model excessively cautious
  • Scale limits: Simply making models bigger isn't improving them anymore
  • Stealth downgrades: OpenAI may be quietly serving cheaper models

What OpenAI Says

OpenAI has largely dismissed user complaints, claiming:

  • Benchmarks show GPT-5 outperforms GPT-4
  • No intentional downgrades have occurred
  • User perception may be affected by "expectation inflation"
  • They are "constantly improving" the model

The disconnect between OpenAI's claims and user experience has never been wider. Thousands of paying customers report degraded quality while the company insists everything is fine.

What You Can Do

Short-term fixes:

  • Request GPT-4 specifically in your prompts (sometimes works)
  • Use more explicit, detailed instructions
  • Break complex tasks into smaller steps
  • Verify all AI output manually
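
If you use the API rather than the web app, the fixes above can be applied programmatically. A minimal sketch: pin an explicit model and break a complex task into small, explicitly instructed requests. The payload shape below mirrors the OpenAI Chat Completions request body, but the helpers (`split_task`, `build_request`) are illustrative, not part of any SDK:

```python
# Sketch of the short-term fixes: explicit model pin, detailed
# instructions, and one small request per sub-task.
# split_task and build_request are hypothetical helpers; only the
# payload dict mirrors the Chat Completions request shape.

def split_task(task: str) -> list[str]:
    """Split a semicolon-separated task description into single steps."""
    return [step.strip() for step in task.split(";") if step.strip()]

def build_request(step: str, model: str = "gpt-4") -> dict:
    """Build one request payload with an explicit model pin and
    explicit, unambiguous instructions."""
    return {
        "model": model,  # pin the model instead of relying on a default
        "messages": [
            {"role": "system",
             "content": "Follow the instructions exactly. Do not "
                        "abbreviate with 'etc.'; produce complete output."},
            {"role": "user", "content": step},
        ],
    }

task = "summarize the report; list open bugs; draft the release notes"
requests = [build_request(step) for step in split_task(task)]
print(len(requests))         # one small request per sub-task
print(requests[0]["model"])  # explicitly pinned model
```

Sending three focused requests instead of one sprawling prompt also makes it easier to verify each piece of output manually, per the last fix above.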

Long-term solutions:

  • Try ChatGPT alternatives like Claude or Gemini
  • Keep backups of prompts that worked with GPT-4
  • Consider whether $20/month is worth it for degraded service
  • Document issues and submit feedback (though OpenAI rarely responds)