On February 10, 2026, a mass shooting tore through the small community of Tumbler Ridge, British Columbia, Canada. In the weeks that followed, a devastating detail emerged: the killer, Jesse Van Rootselaar, had been banned from ChatGPT months before the attack. OpenAI's internal systems had flagged his account. They identified concerning behavior. They terminated his access to the platform.
And then they did nothing else.
They didn't call the police. They didn't notify Canadian authorities. They didn't alert the RCMP. They saw something that was alarming enough to warrant banning a user from their platform, but not alarming enough, in their judgment, to pick up the phone and tell someone with a badge.
People died. And Canada's government wants to know why.
The Timeline of Failure
How OpenAI's Safety System Failed
Canada's AI Minister Is Not Impressed
Artificial Intelligence Minister Evan Solomon has been publicly and systematically escalating pressure on OpenAI since the connection between the shooter and ChatGPT became clear. He summoned OpenAI's safety representatives to Ottawa. He sat through their explanations. And then he told the press he was "disappointed."
In government-speak, "disappointed" is one step below "furious" and two steps below "we're writing legislation." And Solomon appears to be climbing that ladder quickly. After his initial meeting with OpenAI officials, he declared that the company's commitments to adjust its policies in the wake of the shooting "do not go far enough" and announced plans to meet directly with CEO Sam Altman.
The "Threshold" Problem
This is the core of the disaster, and it extends far beyond this single incident. OpenAI's defense is that while they flagged and banned the account, the activity they observed didn't meet their internal "threshold" for reporting to law enforcement. They didn't see what they would classify as "credible or imminent planning" of violence.
Think about what that means in practice. A private technology company, with no law enforcement training, no access to criminal databases, no ability to cross-reference a user's online activity across platforms, and no understanding of local threat environments, decided on its own that a user who was concerning enough to ban was not concerning enough to report. It made a threat assessment that is, by any standard, the job of trained professionals with access to the full picture.
OpenAI is not qualified to make these calls. Nobody at a tech company is. The entire premise, that a Silicon Valley company should be the one deciding whether a flagged user represents a "credible or imminent" threat, is fundamentally broken. That's what law enforcement exists to determine. The company's job should be simple: if you see something alarming enough to ban someone, you report it and let the professionals decide.
OpenAI's Response: Better Rules, Same Structure
In the aftermath, OpenAI announced it would strengthen its safeguards and agreed to adjust its reporting thresholds and protocols. The Detroit News reported that, under the new rules, the company says it "would've flagged" the Tumbler Ridge suspect.
But here's the problem with "new rules": the old rules also existed. OpenAI already had safety systems in place. Those systems worked, in the sense that they identified and banned a dangerous user. What failed was the decision-making framework that said "ban but don't report." Adjusting thresholds doesn't fix the structural problem of a tech company playing amateur threat assessor.
The family of at least one victim has filed a lawsuit against OpenAI, according to Anadolu Agency. The legal question, whether a technology company has a duty to report flagged users to law enforcement, could reshape the entire AI safety landscape.
The Bigger Pattern
This is not an isolated incident. It fits a broader pattern of AI safety failures, sitting alongside a growing list of cases where AI systems operated exactly as designed and still contributed to harm. The AI worked. The safety system worked. The ban worked. What didn't work was the gap between "we handled it on our platform" and "we told someone who could handle it in the real world."
Tech companies have spent years building increasingly sophisticated internal safety tools. Content moderation. Automated flagging. Account suspension. And they've convinced themselves and regulators that these internal tools constitute "safety." But platform safety is not public safety. Banning someone from ChatGPT doesn't prevent them from acting in the physical world. It just means they can't use your product while they do it.
Minister Solomon's frustration, and his escalation from staff meetings to demanding a direct audience with Sam Altman, suggest that Canada is moving toward legislation that would make reporting mandatory rather than discretionary. Other countries are watching. The European Union's AI Act already imposes serious-incident reporting obligations on providers. The United States has no comparable requirement.
What Should Have Happened
The simple version: OpenAI's system flags concerning activity. OpenAI bans the account. OpenAI simultaneously files a report with local law enforcement in the user's jurisdiction. Law enforcement, the people actually trained to assess threats, investigates. Maybe they find nothing actionable. Maybe they intervene. Either way, the decision about whether the threat is "credible or imminent" is made by the right people with the right tools and the right authority.
That's not what happened. What happened is that a tech company decided it knew better than law enforcement, kept the information to itself, and people died eight months later.
OpenAI's systems caught the threat. Their judgment failed the victims. And now a country is asking why the most advanced AI company in the world couldn't figure out that when you see something, you should say something.