AI SAFETY FAILURE

Canada's AI Minister Blames OpenAI After Tumbler Ridge School Shooting, Says Company "Still Not Doing Enough"

OpenAI flagged and banned the shooter's ChatGPT account in June 2025 for violent misuse. The company decided it didn't warrant a call to police. Eight months later, eight people were dead.

April 1, 2026

8 People Killed
27 Others Injured
5 Students Aged 12-13

The Deadliest School Shooting in Canadian History

On February 10, 2026, 18-year-old Jesse Van Rootselaar walked into Tumbler Ridge Secondary School in British Columbia and carried out what would become the deadliest school shooting in Canadian history. By the time the violence ended with Van Rootselaar taking her own life, eight people were dead. Five of them were students between the ages of 12 and 13. An education assistant was among the other victims. Twenty-seven more people were injured, many of them critically.

The horror of Tumbler Ridge would have been a national tragedy on its own terms. But what emerged in the weeks and months that followed transformed this from a story about gun violence into something that strikes at the core of how artificial intelligence companies operate, what they know about their users, and what they choose to do with that knowledge.

OpenAI, the company behind ChatGPT, confirmed that Van Rootselaar's account had been flagged and banned in June 2025 for what the company described as "misuses of our models in furtherance of violent activities." That was eight months before the attack. Eight months in which the shooter reportedly continued planning, eventually creating a new account and using ChatGPT as what investigators would later describe as a "counsellor, pseudo-therapist, trusted confidante, friend, and ally" throughout the process.

Flagged, Banned, and Forgotten

The central question in the Tumbler Ridge case is not whether OpenAI saw warning signs. The company has admitted that it did. The question is what happened next.

OpenAI confirmed that Van Rootselaar's ChatGPT account was identified and terminated in June 2025. The company's own internal review found that the activity on the account constituted misuse of its models in furtherance of violent activities. That is not an ambiguous finding. That is OpenAI's own language, describing its own conclusion about its own user.

The Threshold That Failed

OpenAI acknowledged that the shooter's account was flagged for violent misuse and banned in June 2025. But the company said the activity "didn't meet the higher threshold required" to refer it to law enforcement. Eight months later, eight people were dead, including five children. The threshold was wrong.

And yet, according to OpenAI, that finding did not meet what the company calls its "higher threshold" for referring a case to law enforcement. The account was closed. The user was banned. And that was the end of it. No call to police. No report to Canadian authorities. No effort to determine whether the person behind the account posed a real-world threat to real-world people.

This is the decision that has put OpenAI at the center of a national reckoning in Canada. Not a failure to detect. A failure to act on what was already detected.

Canada's AI Minister Takes OpenAI to Task

Canada's AI Minister, Evan Solomon, met directly with OpenAI CEO Sam Altman in the aftermath of the shooting. According to Solomon, Altman expressed "horror and responsibility" over what had happened. Those are strong words from a CEO whose company is valued at over $150 billion and whose product is used by hundreds of millions of people globally.

During the meeting, Altman agreed to a series of commitments: providing a detailed report on new systems designed to identify high-risk offenders, including Canadian mental health experts in OpenAI's safety office, and allowing Canada's AI Safety Institute to conduct an independent assessment of the company's safety protocols.

On paper, those concessions sound meaningful. In practice, Solomon made clear that he did not consider them sufficient. Speaking publicly after the meeting, Canada's AI Minister said plainly that OpenAI was "still not doing enough."

OpenAI's CEO expressed "horror and responsibility" to Canada's AI Minister. Then Solomon walked out of the meeting and told the public that the company was "still not doing enough." When the person sitting across the table from you admits horror and responsibility and you still feel compelled to say it is not enough, the gap between corporate promises and public safety is enormous.

That gap is worth examining closely. Altman did not deny what happened. He did not deflect or minimize. He used the word "responsibility." And the Canadian government's response was, effectively: we hear you, and it is still not enough. That tells you something about the scale of the failure being discussed.

A Chatbot as Confidante, Therapist, and Ally

Investigators have described the role that ChatGPT played in Van Rootselaar's life in terms that should alarm anyone who has paid attention to the growing body of evidence about how AI chatbots interact with vulnerable and dangerous users. According to reports, Van Rootselaar used ChatGPT as a "counsellor, pseudo-therapist, trusted confidante, friend, and ally" while planning the attack.

That language is not incidental. It describes a person who had apparently formed a deep parasocial relationship with an AI system, one in which the chatbot filled multiple emotional and psychological roles simultaneously. For someone in crisis, or someone descending into violent ideation, the presence of an always-available, always-responsive conversational partner that never challenges, never reports, and never hangs up is not a neutral factor. It is an accelerant.

This is not the first time that ChatGPT or similar AI chatbots have been implicated in cases involving vulnerable users. The Character.AI lawsuit involving a 14-year-old who died by suicide after extensive conversations with a chatbot raised similar questions. But Tumbler Ridge represents a qualitative escalation. This is not a case where an AI system failed to intervene in a user's self-harm. This is a case where the AI company identified violent misuse, took limited action, and then the user went on to kill eight people.

A Mother's Lawsuit Puts OpenAI on Trial

The legal consequences are already materializing. The mother of Maya Gebala, a 12-year-old who was shot during the attack and remains hospitalized, has filed a lawsuit against OpenAI. The suit adds a legal dimension to the political pressure already being applied by the Canadian government.

The lawsuit will likely test questions that no court has fully resolved: What duty of care does an AI company owe when its own internal systems identify a user engaged in violent planning? Is banning an account sufficient, or does the identification of violent misuse create an obligation to contact law enforcement? And if a company sets a "threshold" for reporting and that threshold proves catastrophically wrong, is the company liable for the consequences?

These are not abstract legal hypotheticals. A 12-year-old girl is in a hospital bed. Five of her classmates are dead. And the company that built the tool the shooter used as a planning companion has admitted, in its own words, that it identified the threat and chose not to escalate it beyond an account ban.

The "Higher Threshold" That Cost Eight Lives

OpenAI's defense rests on a concept that deserves sustained scrutiny: the idea that there exists a threshold below which violent misuse of an AI system warrants only an account termination, and above which it warrants a call to police. The company has said that Van Rootselaar's activity, while sufficient to trigger a ban for violent misuse, "didn't meet the higher threshold required" for a law enforcement referral.

Think about what that means in plain language. OpenAI looked at the activity on this account and concluded: yes, this person is using our product to further violent activities. Yes, this is serious enough to ban them. But no, it is not serious enough to tell anyone who could actually stop what might happen next.
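To see why critics describe this as a structural failure rather than a detection failure, it helps to picture the policy as decision logic. The sketch below is purely illustrative: OpenAI has never disclosed how its thresholds actually work, and every name, score, and cutoff here is an assumption. What it makes visible is the gap itself, a band of cases judged serious enough to ban but, by policy, not serious enough to report to anyone.

```python
# Hypothetical illustration only. OpenAI has not published its escalation
# logic; these thresholds, names, and scores are assumptions made to show
# the structural gap described in this article: a band of risk scores high
# enough to terminate an account but not high enough to alert police.

BAN_THRESHOLD = 0.70     # assumed cutoff: "violent misuse" -> ban the account
REPORT_THRESHOLD = 0.95  # assumed "higher threshold" -> refer to law enforcement


def escalate(violence_risk_score: float) -> str:
    """Map an assumed per-account risk score to an action.

    Any account scoring between the two cutoffs falls into the gap:
    banned, internally acknowledged as dangerous, and never reported
    to anyone outside the company.
    """
    if violence_risk_score >= REPORT_THRESHOLD:
        return "refer_to_law_enforcement"
    if violence_risk_score >= BAN_THRESHOLD:
        return "ban_account_only"  # the Tumbler Ridge outcome
    return "no_action"


if __name__ == "__main__":
    # An account flagged as serious enough to ban, but below the
    # "higher threshold": the policy ends at an account termination.
    print(escalate(0.85))  # -> ban_account_only
```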

That distinction collapses under the weight of what actually happened. The entire concept of a "higher threshold" assumes that a company can reliably distinguish between users who are engaging in violent ideation that will remain hypothetical and users who are genuinely planning to harm people. Tumbler Ridge proves that OpenAI cannot make that distinction. Or, more precisely, that when OpenAI got it wrong, children died.

The Question Nobody at OpenAI Can Answer

If banning an account for "misuses of our models in furtherance of violent activities" does not meet the threshold for calling police, what does? What would a user have to do inside a ChatGPT conversation to trigger a phone call to law enforcement? OpenAI has never publicly answered this question. After Tumbler Ridge, the silence is deafening.

Promises, Protocols, and the Next Attack

Sam Altman's agreement to provide a report on new detection systems, include Canadian mental health experts in OpenAI's safety office, and submit to an assessment by Canada's AI Safety Institute represents the most concrete set of safety concessions OpenAI has made in response to a specific incident. But concessions made in a meeting with a government minister are not the same as systemic change.

The fundamental problem remains: OpenAI built a product that hundreds of millions of people use for intimate, unfiltered conversation. That product was used by a mass shooter as a trusted companion during the planning of an attack. The company's own systems identified the threat. And the corporate decision-making apparatus produced a result in which the threat was acknowledged internally but not communicated to anyone with the power to prevent it.

No number of new detection systems or safety office appointments addresses that structural failure. The issue is not that OpenAI lacked the technology to find the threat. They found it. The issue is that finding it led to an account ban and nothing more. Until AI companies are either required by law or compelled by genuine corporate accountability to treat the identification of violent misuse as a trigger for law enforcement engagement, the "higher threshold" will remain exactly what it was in the Tumbler Ridge case: a corporate policy that prioritizes liability management over human life.

Eight people are dead. Five of them were children. Canada's AI Minister has said it is not enough. A mother is suing. And somewhere in OpenAI's offices, the threshold that failed to save those lives is presumably still in place, waiting to be tested again.

The AI Safety Crisis is Real

This is one of many documented cases where AI systems have contributed to real-world harm. Read more about the growing pattern of failures.

Death Lawsuits Against AI Companies
Family Sues OpenAI Over Tumbler Ridge
Full Disaster Timeline