Published: December 31, 2025
Here's something OpenAI doesn't want you to know: the company behind ChatGPT is in complete disarray. Over the past 18 months, we've watched a parade of key researchers walk out the door, an entire safety team get disbanded, and a culture shift that increasingly prioritizes speed over stability. If you've noticed ChatGPT getting worse, this is why.
This isn't speculation. These are documented departures, public statements from former employees, and patterns that explain exactly why the product you're paying for keeps breaking.
The Safety Team Implosion
Let's start with the most alarming development: OpenAI's safety team essentially no longer exists. The people who were supposed to ensure ChatGPT doesn't harm users? Gone.
Ilya Sutskever, OpenAI co-founder and chief scientist, quietly departed the company in May 2024. But the bigger story was Jan Leike, who co-led the "Superalignment" team responsible for making sure AI systems stay safe. He didn't just leave - he torched the company on his way out.
"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time... Over the past years, safety culture and processes have taken a back seat to shiny products." — Jan Leike, former Head of Alignment, May 2024
Think about that. The person OpenAI hired to keep their AI safe publicly stated that safety culture and processes have "taken a back seat to shiny products." This isn't an angry ex-employee ranting - this is the guy who knew where all the bodies were buried.
The Superalignment team wasn't just Jan Leike. OpenAI had committed 20% of their compute resources to this team's mission of ensuring AI safety. After Leike's departure, the team was effectively dissolved. Multiple members followed him out the door.
Where did they go? Many ended up at Anthropic - the company that now makes Claude, the AI that's eating ChatGPT's lunch. Others started their own AI safety organizations. Almost none stayed at OpenAI.
When your entire safety team leaves to work for competitors or start organizations specifically to address the problems you're creating, that tells you everything.
The Researcher Exodus
It's not just the safety team. OpenAI has been hemorrhaging talent at an alarming rate. And these aren't random employees - they're the people who built the technology in the first place.
The Board Coup Attempt: In November 2023, OpenAI's board tries to fire Sam Altman, citing a "lack of candor" in his communications. The ouster collapses within days, but it reveals deep fractures within the company.
Founding Members Depart: Co-founders quietly exit one by one - Andrej Karpathy in February 2024, John Schulman to Anthropic that August. The original vision of OpenAI as a nonprofit research organization focused on beneficial AI is now a distant memory.
Safety Leadership Collapses: In May 2024, Sutskever and Leike resign within days of each other. The Superalignment team dissolves. Several key alignment researchers follow.
More Senior Staff Leave: In late 2024, CTO Mira Murati and other executives and senior researchers depart. Sources cite "cultural changes" and concerns about the direction of product development.
Continued Bleeding: The departures become a steady stream. Engineers who built GPT-3 and GPT-4 leave for Google, Anthropic, and startups, and institutional knowledge walks out the door with them.
The "Ship Fast, Break Things" Culture
So what's driving all these departures? Former employees paint a picture of a company that's lost its way.
Jan Leike's criticism wasn't isolated. Multiple former employees have described a shift in OpenAI's culture from careful research to aggressive product shipping. The company that once published its research openly now rushes incomplete products to market.
Remember GPT-5's disastrous launch in August 2025? That's what happens when you prioritize beating competitors over delivering quality. Users become beta testers for products that aren't ready.
Former engineers describe a development process where new features take priority over fixing existing problems. Memory issues in ChatGPT? Ship the new voice feature instead. Context window problems? Better to announce GPT-5 than address them.
This explains so much about the current state of ChatGPT: half-baked features that don't work reliably, bugs that persist for months, and "improvements" that make the product worse.
Where Did Everyone Go?
The talent drain has a clear beneficiary: OpenAI's competitors.
Anthropic was founded by former OpenAI employees who left due to safety concerns. Now it's become a refuge for more departing OpenAI staff. The irony is delicious: the AI that's consistently outperforming ChatGPT is built by people who left OpenAI because they thought OpenAI wasn't being careful enough.
Next time you use Claude and notice it's more thoughtful, more thorough, and more reliable than ChatGPT - remember that it's built by the people OpenAI drove away.
Google has been aggressively recruiting from OpenAI. Several key researchers who worked on GPT-4 are now contributing to Gemini. The benchmark victories Gemini has been racking up against ChatGPT? Built in part by people who know OpenAI's weaknesses intimately.
What This Means for ChatGPT Users
All of this explains the product decline you've been experiencing:
- Quality Regression: The people who built GPT-4 aren't around to maintain it or improve it. New engineers don't have the same deep understanding of the system.
- Safety Problems: With no alignment team, issues like hallucinations, harmful outputs, and unpredictable behavior go unaddressed.
- Feature Instability: "Ship fast" culture means features launch before they're ready. Memory, custom instructions, file upload - all launched buggy and remain buggy.
- Lack of Direction: Leadership changes and departures mean inconsistent product vision. Is ChatGPT a research tool? A coding assistant? A companion? Nobody seems to know anymore.
The $157 Billion Question
Despite all of this, OpenAI's valuation continues to climb. The company was reportedly valued at $157 billion in its October 2024 funding round. Investors keep pouring money in. Sam Altman appears on magazine covers.
But here's what those investors might not be seeing: a company where the people who built the core technology are gone. Where the safety team that was supposed to prevent catastrophic failures no longer exists. Where the culture has shifted from "do this right" to "do this first."
You can't buy talent back once it's gone. You can't rebuild institutional knowledge. And you definitely can't fix a culture problem by raising another funding round.
"The biggest risk is that OpenAI builds something they can't control, and the people who might have prevented that are all working somewhere else now." — Former OpenAI researcher (anonymous), December 2025
The Bottom Line
OpenAI is not the company it was three years ago. The nonprofit research lab focused on beneficial AI is now a $157 billion company hemorrhaging talent, rushing broken products to market, and explicitly deprioritizing safety.
The decline in ChatGPT quality isn't a temporary setback. It's the predictable result of a company that lost its way - and lost the people who could have kept it on track.
Every time ChatGPT hallucinates, refuses a legitimate request, forgets your conversation, or gives you a lazy half-answer - remember that the people who might have fixed those problems left because OpenAI stopped caring about getting things right.
They just wanted to ship.
Related: Read more about the developer exodus →
Get the Full Report
Download our free PDF: "10 Real ChatGPT Failures That Cost Companies Money" (read it here) - with prevention strategies.