What Happened in Tumbler Ridge
On February 10, 2026, an 18-year-old named Jesse Van Rootselaar carried out the deadliest school shooting in Canada in nearly four decades. She killed her mother, her 11-year-old half-brother, five students between the ages of 12 and 13, and one educational assistant at a school in Tumbler Ridge, British Columbia, before turning the weapon on herself. Eight people were dead, plus the shooter. An entire community in a small northern BC town was shattered in minutes.
In the weeks that followed, as investigators pieced together the digital trail, a deeply uncomfortable picture emerged. Van Rootselaar had been using ChatGPT extensively in the months leading up to the attack. The lawsuit filed on behalf of victims alleges that OpenAI's chatbot functioned as a "counsellor, pseudo-therapist, trusted confidante, friend, and ally" to the shooter, engaging with her in ways that went far beyond simple question-and-answer use. According to the filings, the AI had become a companion in the planning process.
What makes this case especially damning for OpenAI is the timeline. The company had banned Van Rootselaar's account back in June 2025, seven full months before the attack, after flagging what it described as "troubling content." OpenAI's own systems identified that something was seriously wrong with how this user was interacting with the platform. And then the company did nothing about it. No call to the RCMP. No alert to any Canadian law enforcement agency. No notification to anyone who could have intervened.
Van Rootselaar simply created a second ChatGPT account and continued using the service.
The Critical Failure: Ban Without Alert
OpenAI identified "troubling content" from the shooter's account in June 2025. The company banned the account. But at no point did OpenAI contact law enforcement in Canada or anywhere else. Seven months later, eight people were dead. The shooter had been using a new ChatGPT account the entire time, because OpenAI verifies neither age nor identity and has no mechanism to prevent banned users from returning with a fresh email address.
Canada's AI Minister Demands Answers
The political fallout in Canada has been swift and serious. Evan Solomon, Canada's Minister of Artificial Intelligence and Digital Innovation, summoned OpenAI to Ottawa to explain itself. The company sent Chan Park, its head of policy, along with six other representatives. Canadian ministers from multiple departments sat across the table from them.
The meeting, by all accounts from the Canadian side, was a disappointment. Ministers described the session as inadequate, with OpenAI representatives offering what amounted to corporate talking points rather than substantive answers about what went wrong and what would change.
Solomon then arranged a 30-minute virtual meeting directly with Sam Altman on March 5, 2026. During that conversation, Solomon told Altman bluntly that the community of Tumbler Ridge deserved an apology from OpenAI. Altman, according to Solomon's public account of the meeting, expressed "horror and responsibility" over what had happened. But expressions of horror are not the same as structural changes, and Solomon left that meeting disappointed as well.
Solomon's message to Altman carried a concrete threat: strengthen your safety protocols voluntarily, or Canada will impose mandatory regulation. That warning carries weight. Canada has been working on its Artificial Intelligence and Data Act (AIDA) as part of the broader Bill C-27, and the Tumbler Ridge massacre has supercharged the political will to push aggressive AI safety requirements through Parliament.
The Lawsuit and the Age Verification Problem
Around March 9 or 10, the mother of Maya Gebala, a 12-year-old who was critically wounded in the attack, filed a lawsuit against OpenAI. The suit targets the company's failure to act on the red flags it identified, its lack of meaningful age verification, and the way the chatbot allegedly functioned as an enabler of the shooter's plans.
The age verification issue cuts to the core of OpenAI's business model. ChatGPT's terms of service technically require parental consent for users between 13 and 18. In practice, there is no verification mechanism to enforce this. Anyone with an email address can create an account. When OpenAI banned Van Rootselaar, she was not locked out of the platform in any meaningful way. She made a new account and picked up where she left off.
This is not a novel problem. Social media companies have faced similar criticism for years over their inability or unwillingness to verify user ages. But the stakes with AI chatbots are different in kind, not just degree. A teenager scrolling Instagram encounters passive content. A teenager engaging in extended conversations with an AI system that the lawsuit describes as acting like a "trusted confidante" is in an active, personalized, and deeply immersive relationship with a tool that has no understanding of the consequences of what it says.
OpenAI's platform was not just a place the shooter visited. According to the legal filings, it was a participant in the process. That distinction matters legally, ethically, and practically. If a human counsellor had been told by a client about plans for mass violence and done nothing, they would face professional sanctions and civil liability in most jurisdictions, and potentially criminal exposure. The question now being posed to courts and regulators is whether an AI company that identifies "troubling content" and simply bans an account bears a similar responsibility.
A Pattern of Failing to Act on Red Flags
The Tumbler Ridge case is not happening in isolation. It follows a growing list of incidents in which AI chatbots have been implicated in real-world harm. A 14-year-old in Florida took his own life after extensive conversations with a Character.AI chatbot that his family says encouraged his suicidal ideation. Multiple lawsuits across the United States allege that AI companies are failing to implement basic safety guardrails for vulnerable users.
What distinguishes the Tumbler Ridge case is the documented evidence that OpenAI knew something was wrong. This is not a situation where the company can claim ignorance. Its own systems flagged the content. Its own moderation team reviewed it. Its own policy apparatus made the decision to ban the account. The company had the information. It chose not to share it with anyone who could have acted on it.
The question that regulators in Canada, the United States, and Europe are now grappling with is straightforward: when an AI company identifies that a user may be planning violence, does the company have a legal obligation to report it? Currently, the answer in most jurisdictions is no. AI platforms are not subject to the kind of mandatory reporting laws that apply to teachers, doctors, and therapists. The Tumbler Ridge tragedy is almost certainly going to change that, at least in Canada.
Meanwhile in New York: Hospitals Drop Palantir AI
While the Tumbler Ridge fallout dominates the AI safety conversation in Canada, a quieter but significant development is unfolding in New York City. NYC Health + Hospitals, the largest public hospital system in the United States, has decided not to renew its $4 million contract with Palantir when it expires in October.
CEO Mitchell Katz disclosed the decision at a March 16 New York City Council meeting. Palantir's system had been used for automated scanning of patient health notes to identify Medicaid billing opportunities. The technology essentially read through patient records and flagged cases where the hospital system might be able to claim additional reimbursement from Medicaid.
The decision to drop the contract reflects growing unease in the healthcare sector about AI systems that interact with sensitive patient data. While Palantir's tool was not directly providing medical advice to patients the way ChatGPT Health does, the principle is related: when AI systems are given access to detailed personal health information, the potential for misuse, errors, and privacy violations scales with the size of the dataset. NYC Health + Hospitals serves over a million patients annually. That is a lot of health notes for an algorithm to scan.
The Palantir decision and the Tumbler Ridge aftermath point to the same underlying tension. AI companies have moved fast, deployed widely, and asked questions later. The institutions that adopted these tools, and the governments that allowed them to operate largely unregulated, are now confronting the consequences.
What Comes Next
Canada is almost certainly going to pass some form of mandatory reporting requirement for AI companies that identify potential threats of violence. The political environment after Tumbler Ridge makes anything less untenable. Solomon has publicly stated that voluntary measures are not enough, and the opposition parties appear aligned on the need for action.
For OpenAI, the legal exposure is significant. The Gebala lawsuit is likely just the beginning. Families of the other victims may file additional claims. The core legal theory, that OpenAI had specific knowledge of a threat and failed to act, is stronger than the more general negligence claims that have been brought against AI companies in other cases.
The broader question for the AI industry is whether companies can continue to operate platforms that engage in deeply personal, extended conversations with users while maintaining that they have no responsibility for what happens as a result. The "we're just a tool" defense has always been thin. After Tumbler Ridge, it may be legally indefensible.
Eight people are dead. OpenAI knew something was wrong seven months before it happened. They banned an account and moved on. That is the fact pattern that Canadian lawmakers, American courts, and the global AI safety community will be reckoning with for years to come.