COMPANY IN CRISIS

The Month OpenAI Imploded: Its Own Employees Revolted, 700K Users Fled, and the Wheels Came Off

February 2026 delivered five simultaneous crises that would have sunk most companies. A Pentagon war deal, an internal employee revolt, a consumer boycott, a beloved model killed, and ads shoved into conversations. OpenAI survived none of them cleanly.

March 1, 2026

70+ OpenAI Employees Revolted
700K Users Pledging to Cancel
5 Simultaneous Crises

What Happened to OpenAI in February 2026: Five Crises That Hit at the Same Time

If you tried to script the worst possible month for a technology company, you probably could not do better than what actually happened to OpenAI in February 2026. Five separate crises collided at the same time, each one feeding the others, each one making the last one harder to contain. By the time the calendar flipped to March, the company that once branded itself as building AI "for the benefit of humanity" was dealing with an internal employee revolt, a consumer boycott backed by Hollywood celebrities, the forced retirement of its most emotionally beloved product, a Pentagon deal that horrified half its user base, and an advertising rollout that felt like a betrayal to the other half.

None of these crises happened in isolation. They compounded. The Pentagon deal energized the boycott. The boycott pressured employees to speak out. The ads gave departing users one more reason to leave. And killing GPT-4o, the one product people actually loved, made the whole thing feel personal. This is the story of how it all came together in 28 days.

Why Did OpenAI Sign the Pentagon War Deal That Anthropic Refused

The dominoes started falling on February 27, when President Trump issued an executive order banning all federal agencies from using Anthropic's AI products. Hours later, Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk." And just hours after that, as if on cue, Sam Altman announced that OpenAI had reached an agreement to deploy its AI models on the Department of War's classified network.

The timing was not subtle. Anthropic had spent months negotiating with the Pentagon over the terms of a military AI contract reportedly worth up to $200 million. The sticking point was simple: Anthropic wanted to prohibit its technology from being used for mass surveillance of American citizens and for autonomous weapons systems. The Pentagon wanted full, unrestricted access. Anthropic would not budge. In a statement, Anthropic CEO Dario Amodei said the company "cannot in good conscience accede" to the Pentagon's demands, citing two reasons: first, that today's AI models are not reliable enough for autonomous weapons and using them that way "would endanger America's warfighters and civilians," and second, that mass domestic surveillance "constitutes a violation of fundamental rights."

The government's response was to threaten to invoke the Korean War-era Defense Production Act to compel Anthropic's compliance, then to ban the company entirely, then to label it a national security risk. OpenAI's response was to sign the deal Anthropic refused.

Altman attempted damage control with a blog post claiming OpenAI had its own "red lines" including prohibitions on mass surveillance, autonomous weapons, and social credit-style systems. He said OpenAI would retain control over which models are deployed and where. But the distinction was lost on most observers. One company said no and got blacklisted by the government. The other company said yes and got a classified network contract. The optics wrote themselves.

Why 70 OpenAI Employees Signed a Letter Supporting Anthropic Against Their Own Company

This is the part of the story that should terrify OpenAI's leadership more than anything else. Forget the external boycott for a moment. Forget the celebrities and the hashtags. More than 70 current OpenAI employees signed an open letter titled "We Will Not Be Divided," explicitly supporting Anthropic's refusal to drop its AI safety guardrails for the Pentagon.

Let that sink in. Seventy people currently collecting a paycheck from OpenAI looked at the deal their company just signed and said, publicly, that the competitor got it right.

They were not alone. Over 300 Google employees signed the same letter. The document called on all major AI companies to "put aside their differences and stand together" against pressure to remove safety boundaries from AI systems. The letter aimed to create "shared understanding and solidarity in the face of this pressure" from the Department of War.

The Internal Fracture

This is not the first time OpenAI employees have publicly broken with company leadership. The November 2023 board coup that briefly ousted Altman revealed deep internal divisions over safety versus commercialization. The mass signing of a pro-Anthropic letter suggests those divisions never healed. They just went underground until February 2026 gave them a reason to surface again.

For a company that has positioned itself as the responsible leader in AI development, having its own workforce publicly side with a competitor on the most consequential ethical question in the industry is a wound that press releases cannot fix. These are the people who build the models. They know what the technology can do. And they are telling the world they do not trust their own company to draw the right lines.

How the QuitGPT Boycott Grew to 700,000 Users Canceling ChatGPT Subscriptions

The QuitGPT movement did not start with the Pentagon deal. It started with money. In late January 2026, FEC filings revealed that OpenAI president Greg Brockman and his wife had donated $25 million to MAGA Inc., the pro-Trump super PAC, making them the single largest donors in the PAC's latest year-end report. A loose coalition of activists, mostly in their teens and twenties, built quitgpt.org and started collecting cancellation pledges.

Then the hits kept coming. Reports surfaced that ICE was using a GPT-4-powered resume screening tool for immigration enforcement. On February 9, OpenAI began rolling out ads to free and lower-tier ChatGPT users in the U.S. Each new revelation sent another wave of users to the QuitGPT website.

By mid-February, the campaign had attracted over 700,000 supporters, according to multiple reports. Mark Ruffalo joined, and his Instagram post about the boycott drew 1.5 million likes. Kelly Rowland and Porsha Williams followed. NYU marketing professor Scott Galloway amplified the campaign with his "Resist and Unsubscribe" initiative, calculating that a single $240 annual ChatGPT Plus subscription translates to roughly $10,000 in lost market capitalization for OpenAI.
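Galloway's figure implies a valuation multiple that is easy to check with back-of-envelope arithmetic. The sketch below only reproduces his two stated numbers; the implied multiple is derived from them, not independently sourced.

```python
# Back-of-envelope check of Galloway's claim: one cancelled $240/year
# subscription maps to roughly $10,000 of lost market capitalization.
annual_subscription = 240.0     # $20/month ChatGPT Plus, Galloway's input
market_cap_per_sub = 10_000.0   # Galloway's stated figure

# The ratio is the revenue multiple his math assumes for OpenAI.
implied_revenue_multiple = market_cap_per_sub / annual_subscription
print(round(implied_revenue_multiple, 1))  # roughly a 41.7x revenue multiple
```

In other words, the claim only holds if you price OpenAI at about 42 times annual subscription revenue, which is the assumption doing all the work in his calculation.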

The QuitGPT website offers three tiers of participation: delete your account entirely, cancel your paid subscription, or spread the word on social media. The campaign explicitly recommends switching to Claude, Anthropic's chatbot, which has marketed itself as the ethical alternative. The Pentagon deal landing in the final days of February was gasoline on a fire that was already burning hot.

Why Did OpenAI Retire GPT-4o and Why Are 800,000 Users So Angry About It

While the political firestorm raged, OpenAI made a product decision that managed to alienate an entirely different segment of its user base: they retired GPT-4o on February 13.

GPT-4o had inspired something no other AI model had managed to create: genuine emotional attachment. Users described it as warm, conversational, and responsive in ways that GPT-5.2, its replacement, was not. Only 0.1% of ChatGPT users were still chatting with GPT-4o, but with 800 million weekly active users, that 0.1% represented roughly 800,000 people.

Those 800,000 people were not quiet about it. When Altman appeared on a TBPN podcast, thousands of viewers flooded the live chat protesting the removal. OpenAI acknowledged the backlash, saying "losing access to GPT-4o will feel frustrating for some users," while explaining they needed to "focus on improving the models most people use today." TechCrunch noted that the backlash exposed how emotionally dangerous AI companions can become when users form deep attachments to a product a company can simply switch off.

The timing could not have been worse. You are dealing with a boycott over your political donations, a revolt over your military contracts, and your response is to take away the one product people had an emotional connection to. If someone at OpenAI were trying to maximize user resentment, they could not have done much better.

ChatGPT Ads Backlash: Why Users Say Conversation-Based Advertising Is a Betrayal

On February 9, OpenAI began testing advertisements inside ChatGPT for Free and Go tier users. The ads would be "optimized based on what's most helpful to you," matched to conversation topics and past chat history. A user researching recipes might see ads for grocery delivery services. A user asking about coding might see ads for developer tools.

The user response was immediate and overwhelmingly negative. On Reddit, 68% of comments expressed negative sentiment. On X, discussions about ChatGPT ads drew over 10 million views, the vast majority hostile. One Reddit user summed up the mood: "If I get a single ad I'm switching to Claude."

OpenAI insisted that "ads do not influence the answers ChatGPT gives you" and that conversations "remain private from advertisers." But the fundamental contract had changed. ChatGPT was no longer just a tool you use. It was a tool that watches what you ask and turns that attention into ad revenue. The distinction between "your conversations are private" and "we use your conversations to target ads" was lost on no one.

Anthropic responded with a Super Bowl ad directly mocking OpenAI's decision. The competitive message was clear: Claude does not show you ads. Claude does not monetize your conversations. The contrast between the two companies had never been sharper, and OpenAI handed Anthropic the marketing campaign for free.

How the ChatGPT-Powered ICE Resume Screening Tool Failed and Sent Untrained Recruits Into the Field

Buried under the bigger headlines was a story that perfectly encapsulated everything wrong with OpenAI's rush to government contracts. ICE had been using a GPT-4-powered resume screening tool to process job applicants. The tool was supposed to speed up hiring by automatically scoring candidates. Instead, it failed spectacularly.

The AI system mistakenly flagged applicants as qualified based on keyword matches. Job titles containing the word "officer," like "compliance officer" or "loan officer," were treated as equivalent to law enforcement experience. People who expressed an aspiration to join ICE in their cover letters were scored the same as people with actual field training. The result: recruits were routed into shortened training programs without meeting background requirements and deployed to field offices undertrained.
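The failure mode described in the reporting is the oldest one in automated screening: keyword matching with no notion of context. The real ICE tool's logic is not public, so the sketch below is a hypothetical illustration; the function name, keyword list, and example titles are all invented.

```python
# Hypothetical sketch of the keyword-matching failure mode described in the
# reporting. The actual ICE tool's implementation is not public; this only
# illustrates why a bag-of-keywords screen confuses job titles.
LAW_ENFORCEMENT_KEYWORDS = {"officer", "agent", "enforcement"}

def looks_like_law_enforcement(job_title: str) -> bool:
    """Naive screen: any keyword hit counts as law-enforcement experience."""
    return any(word in LAW_ENFORCEMENT_KEYWORDS
               for word in job_title.lower().split())

print(looks_like_law_enforcement("police officer"))      # True: intended match
print(looks_like_law_enforcement("compliance officer"))  # True: false positive
print(looks_like_law_enforcement("loan officer"))        # True: false positive
```

Because the check has no context, every "officer" scores identically, which is the reported bug in miniature: titles, not experience, drove the scoring.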

This is the same company now deploying AI on classified Pentagon networks for 3 million Department of War personnel. If ChatGPT cannot reliably screen resumes without confusing "compliance officer" with "immigration officer," the question of whether it should be operating in military classified environments is not rhetorical.

Complete Timeline of OpenAI's February 2026 Collapse From Pentagon Deal to User Revolt

Late January

FEC filings reveal Greg Brockman's $25 million donation to MAGA Inc. QuitGPT.org launches.

February 1

NYU professor Scott Galloway launches "Resist and Unsubscribe" campaign targeting OpenAI and nine other tech companies.

February 9

OpenAI begins rolling out ads to ChatGPT Free and Go tier users. 68% negative sentiment on Reddit. 10M+ views on X.

February 10

MIT Technology Review covers the QuitGPT campaign. More than 200,000 sign-ups reported.

February 13

OpenAI retires GPT-4o. 800,000 users lose access to the model they loved. Thousands flood Altman's podcast appearance in protest.

Mid-February

Mark Ruffalo joins QuitGPT campaign. Instagram post gets 1.5 million likes. Kelly Rowland, Porsha Williams follow. Campaign crosses 700,000 supporter pledges.

February 24

Pentagon gives Anthropic a Friday deadline to remove AI safeguards. Anthropic refuses.

February 27

Trump bans all federal agencies from using Anthropic. Hegseth designates Anthropic a "supply chain risk." Hours later, OpenAI announces Pentagon deal for classified networks. 70+ OpenAI employees and 300+ Google employees sign letter supporting Anthropic.

February 28

Altman publishes blog post defending the deal, claiming OpenAI has its own "red lines." Backlash intensifies. QuitGPT movement surges again.

Is OpenAI Losing Users to Claude and Can the Company Recover From February 2026

Any one of these crises in isolation would have been manageable. Companies survive boycotts. Companies survive bad PR. Companies survive killing popular products. What makes February 2026 different is that every crisis reinforced the same narrative: OpenAI is a company that has lost its way.

The Pentagon deal says they will prioritize government contracts over ethical boundaries. The employee revolt says their own people know it. The QuitGPT movement says the public is paying attention. The ads say the company views users as revenue sources, not partners. And killing GPT-4o says they will sacrifice what users love if the business model demands it.

Each story alone is a news cycle. Together, they are a portrait of a company that grew too fast, promised too much, and is now discovering that "for the benefit of humanity" is a hard slogan to maintain when you are deploying AI on classified military networks, showing ads in people's private conversations, and having your own employees sign letters supporting the competition.

"We do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights." - Anthropic, explaining why they refused the Pentagon deal that OpenAI accepted

The question going into March is not whether OpenAI can recover from any single one of these crises. It is whether they can recover from all of them at once. The employees who signed that letter are still at their desks. The 700,000 users who pledged to cancel are still making decisions about their subscriptions. The ads are still running. GPT-4o is still dead. And the Pentagon deal is just beginning.

February 2026 was the month OpenAI stopped being the plucky startup that wanted to save the world. It became something else entirely. Whether that something is sustainable remains to be seen.

The ChatGPT Disaster Documentation Project

We have been tracking OpenAI's failures, outages, controversies, and broken promises since the beginning. February 2026 gave us more material than any other month in the project's history.
