Somewhere inside OpenAI's San Francisco headquarters, months before a teenager opened fire at a school in Tumbler Ridge, British Columbia, a moderation system flagged an account. The messages were violent. Disturbing. Specific enough that the account was banned. Then the flag was filed away, and nobody picked up the phone.

Eight people died on February 10, 2026. Five of them were children.

That alone would be enough to define a company's worst quarter in memory. But for OpenAI, the Tumbler Ridge failure was just one entry in a staggering run of self-inflicted crises that have consumed the first ten weeks of 2026. A rushed Pentagon surveillance deal that cost the company a senior executive. A broader AI industry firestorm over synthetic actresses threatening Hollywood livelihoods. The cumulative damage goes beyond OpenAI's brand. These incidents are actively reshaping how governments, workers, and ordinary users think about whether AI companies can be trusted to govern themselves at all, and the early answer is a resounding no.

The Tumbler Ridge Tragedy: OpenAI Knew, and Said Nothing

The facts of the Tumbler Ridge shooting are almost unbearable to recount. On February 10, 18-year-old Jesse Van Rootselaar attacked a school in the small northeastern British Columbia community. Eight people were killed: five children aged 12 and 13, an educational assistant named Shannda Aviugana-Durand, her mother, and her 11-year-old half-brother. Twenty-seven more were injured. A 12-year-old girl, shot multiple times in the head and neck, was airlifted to BC Children's Hospital in Vancouver, where she spent days in critical condition.

It was Canada's deadliest school shooting in decades. Then the Wall Street Journal published what should have been unthinkable: OpenAI had banned Van Rootselaar's ChatGPT account months before the attack because of violent, disturbing messages. The company's own systems caught it. Flagged it. Suspended the account. And that was the end of it. No call to the RCMP. No report to any authority. Nothing.

The lawsuit filed by the family of survivor Maya Gebala makes the negligence even harder to stomach. According to the complaint, the shooter simply created a second OpenAI account and continued using ChatGPT to plan scenarios involving gun violence, including a mass casualty event mirroring what eventually happened. The lawsuit alleges OpenAI "had specific knowledge of the shooter's long-range planning of a mass casualty event" and "took no steps to act upon this knowledge."

"Very disturbing." That's what Canada's AI Minister Evan Solomon said when he learned OpenAI had flagged and banned the shooter's account but never contacted police.

Canada's government responded with fury. Solomon summoned OpenAI's senior leadership to Ottawa. The meeting did not go well. Solomon told reporters he was "disappointed," noting that while OpenAI expressed willingness to strengthen its law enforcement referral protocols, the company had not provided a detailed implementation plan. By early March, B.C. Premier David Eby announced that CEO Sam Altman would personally apologize to Tumbler Ridge and push for stronger regulations.

The question this raises is not complicated. A user wrote violent, threatening content to your chatbot. Your systems flagged it. You banned the account. And then you moved on? There is no sophisticated policy debate here. There is no gray area. A company that markets itself as the responsible steward of the most powerful technology in a generation failed the most basic test imaginable: seeing danger, and telling someone.

The Pentagon Deal: Opportunism on a Remarkable Timeline

The Pentagon contract tells a different kind of story. Not negligence. Opportunism, executed at a speed that stunned even hardened industry observers.

Anthropic, OpenAI's chief rival, had been negotiating with the Department of Defense for months. Anthropic's position was firm: no contract without explicit prohibitions against its AI being used for mass domestic surveillance of Americans or incorporated into fully autonomous weapons systems. The Pentagon refused those terms. On February 27, Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk to national security.

Hours later, OpenAI signed its own deal to deploy AI models inside classified military systems.

What makes this genuinely remarkable is the timing of Altman's public statements. Earlier that same week, he had voiced support for Anthropic's position, essentially agreeing that prohibitions on surveillance and autonomous weapons were the right demands to make. His company then stepped over Anthropic's still-warm body to sign a contract stripped of the very safeguards those demands were meant to secure. Altman later admitted the rushed deal "looked opportunistic and sloppy." That's one way to describe it.

3x
ChatGPT uninstalls ran at nearly triple their normal rate after the Pentagon deal announcement

The contract language did nothing to quiet the criticism. The agreement states that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." Read that again. "Intentionally." The Electronic Frontier Foundation spotted the obvious: that single word is doing enormous work, and the contract appeared to carve out exceptions for intelligence agencies within the Department of Defense. The EFF titled its analysis "Weasel Words." No one in the industry disagreed.

Caitlin Kalinowski resigned within days. She had led OpenAI's hardware and robotics operations since November 2024. Her public statement was careful, precise, and devastating in its restraint: "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." She emphasized it was "about principle, not people," adding that she held deep respect for Altman and the team. When someone that senior writes something that measured on their way out the door, every word is chosen with surgical intent. The message was clear.

CNN reported that OpenAI employees were "fuming." Multiple current staffers told reporters they "really respect" Anthropic for standing firm and were frustrated with their own leadership. Google and OpenAI employees began publicly backing Anthropic's legal fight against the Pentagon. Claude, Anthropic's chatbot, climbed to the top of the App Store charts. Users were voting with their downloads, and the ballot was not close.

The AI Actress Debacle: Hollywood Draws a Line

The third controversy reaches beyond OpenAI, but it captures something essential about the industry OpenAI leads and the recklessness it has normalized. In September 2025, a European AI production company called Particle6 introduced Tilly Norwood, a hyper-realistic, AI-generated "actress" with a polished demo reel and press releases claiming talent agencies had expressed interest in representing her.

Hollywood's reaction was hostile and immediate. SAG-AFTRA, the actors' union fresh off a months-long strike fought partly over AI protections, declared bluntly that Norwood "is not an actor." Emily Blunt called the rise of AI actors "terrifying." Whoopi Goldberg was dismissive: "You can always tell them from us." The controversy landed in an industry still nursing wounds from labor battles over AI's role in creative work. The timing could not have been worse.

Particle6's founder, Eline Van der Velden, tried reframing. Norwood was "a new paintbrush" for artists, a creative tool. "Tilly was meant to inspire, not replace human performers," she told reporters. According to an Inc.com report from March 14, Van der Velden believes the controversy was ultimately worth it: the attention led to production work, and Particle6 landed a deal to produce the History Channel's "Streets of the Past," a Dutch documentary series using AI to recreate historical scenes.

Then came the attempted redemption. In early March 2026, Norwood released a music video called "Take The Lead," with AI-generated vocals produced by Suno. The lyrics addressed the backlash directly: "When they talk about me, they don't see the human spark, the creativity." The video carried a disclaimer noting it was "made by 18 real humans." Nobody was moved. The Los Angeles Times wrote that the video "is so bad that it proves AI won't be putting actors out of work any time soon." TechCrunch called it "the worst song I've ever heard." What was supposed to be vindication became a punchline.

What This Means Beyond OpenAI

The specific failures here belong to specific companies. But the erosion of public trust they represent is industry-wide, and it is accelerating. Polling in early 2026 shows the same trajectory again and again: public confidence in AI companies' ability to self-regulate is collapsing. The Tumbler Ridge shooting didn't just damage OpenAI. It gave ammunition to every legislator who has argued that voluntary safety commitments are worthless. The Pentagon deal didn't just embarrass Altman. It validated the fear that commercial pressure will always override ethical guardrails when the contract is big enough. The Norwood debacle didn't just anger actors. It crystallized a suspicion held by millions of workers in creative fields: that AI companies view human labor as a temporary inconvenience to be automated away, and will dress it up as empowerment while doing it.

This is the real cost. Not to any single company's stock price or brand reputation, but to the foundational premise on which the entire AI industry has been built: trust us, we'll get this right. That premise is dying in public, one headline at a time.

Where This Goes Next

The through-line connecting these three crises is not subtle. OpenAI banned a future mass shooter and never made a phone call. OpenAI signed a military surveillance contract hours after its rival was punished for demanding safeguards. An AI production company manufactured a synthetic actress and told Hollywood to get on board. Speed over deliberation. Growth over consequence. Every time.

Altman has acknowledged mistakes. He has promised meetings, apologies, revised protocols. But OpenAI's promises in 2026 have the shelf life of milk in August. The next controversy isn't a question of whether. It's a question of what, and how bad.

Canada is drafting legislation. The EU is watching. Congressional hearings are being scheduled. For years, the AI industry has operated in a regulatory vacuum, asking the public to trust that the people building the most powerful technology in human history would also be the ones to police it responsibly. The first quarter of 2026 has answered that question definitively. The policing will come from outside now. The only remaining question is how much damage accumulates before it arrives.