There is a bill sitting in the Illinois state legislature right now that would make it functionally impossible to sue OpenAI, Google, Anthropic, Meta, or xAI when their artificial intelligence systems cause catastrophic harm to human beings. Not minor harm. Not a bad chatbot response. Catastrophic harm. The kind of harm where a hundred or more people die, where a billion dollars in property is destroyed, or where someone uses AI to develop chemical, biological, radiological, or nuclear weapons.

The bill is called the Artificial Intelligence Safety Act. Its number is SB 3444. And OpenAI, the company that started as a nonprofit dedicated to ensuring AI benefits all of humanity, walked into the Illinois state capitol and testified in favor of it.

Let that sit for a moment. The company that tells you it is building the most powerful technology in human history, the company whose CEO has publicly said AI could pose an existential risk to civilization, went to a state legislature and asked for legal protection in the event that its technology kills a hundred people or more. Not protection against frivolous lawsuits. Protection against lawsuits arising from mass casualties.

What the Bill Actually Says

SB 3444 is not a long bill, and its structure is straightforward enough that the ambition of what it attempts becomes all the more striking. The bill creates a new legal framework specifically for what it calls "frontier models," defined as any AI system built using more than $100 million in computing resources. That threshold captures every major AI lab in the world: OpenAI, Google DeepMind, Anthropic, Meta AI, and xAI.

The bill then defines a category it calls "critical harms." This is where the numbers start to feel surreal. A critical harm, under SB 3444, is not a chatbot giving someone bad medical advice. It is not an AI system discriminating against job applicants. It is not even a self-driving car killing a pedestrian. Those events, apparently, are just regular harms. They do not clear the bar.

To qualify as a "critical harm" under this bill, the AI has to cause one of three things. And each one reads like a premise from a disaster film that the screenwriter would have been told was too on-the-nose.

The Body Count Threshold: Defining "Critical Harms"

Under SB 3444, a "critical harm" requires: (1) death or serious bodily injury to 100 or more people, (2) property damage exceeding $1 billion, or (3) AI used to develop chemical, biological, radiological, or nuclear weapons. Anything below those thresholds is not covered by the bill at all.

Read those thresholds again. If an AI system causes the deaths of 99 people, that is not a "critical harm" under this legislation. If an AI-powered infrastructure failure destroys $999 million in property, the bill has nothing to say about it. The Artificial Intelligence Safety Act does not kick in until the body count hits triple digits or the damage exceeds a billion dollars.
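
To make the cliff concrete, here is the definition reduced to a minimal Python sketch. The function and variable names are mine, not the statute's, and no legal test actually reduces to two integers and a boolean; the three prongs and the numeric thresholds, though, come straight from the bill.

    # A minimal sketch of SB 3444's "critical harm" definition.
    # Names are illustrative, not statutory language; the thresholds
    # (100 people, $1 billion, CBRN involvement) are the bill's.

    CASUALTY_THRESHOLD = 100            # deaths or serious bodily injuries
    PROPERTY_THRESHOLD = 1_000_000_000  # dollars; must be exceeded

    def is_critical_harm(casualties: int,
                         property_damage: int,
                         used_for_cbrn_weapons: bool) -> bool:
        """True only if the harm clears one of the three statutory prongs."""
        return (casualties >= CASUALTY_THRESHOLD
                or property_damage > PROPERTY_THRESHOLD
                or used_for_cbrn_weapons)

    # The cliff described above:
    assert is_critical_harm(100, 0, False)               # covered
    assert not is_critical_harm(99, 999_000_000, False)  # outside the bill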

The CBRN weapons provision is the third prong, and it is perhaps the most chilling in its casualness. The bill acknowledges, in plain statutory language, that AI systems might be used to help develop weapons of mass destruction. It does not prohibit this. It does not create criminal penalties for it. It creates a liability framework that, under certain conditions, protects the company whose AI was used to do it.

There is a bizarre mathematical implication buried in the structure. The bill essentially tells AI companies: everything below 100 deaths or $1 billion in damage is outside this framework entirely, so you are on your own with existing tort law. But once you cross into the truly catastrophic zone, the zone where your technology has killed a hundred people or helped someone build a nerve agent, we will give you a special defense. The worse the outcome, the more protection you get.

The Safety Report Defense: How to Get Immunity

Here is the part that has consumer advocates losing their minds. SB 3444 creates an affirmative defense for AI companies facing lawsuits over critical harms. To qualify for this defense, a company needs to meet two conditions.

First, the company must not have "intentionally or recklessly" caused the harm. This sounds reasonable until you think about what it means in practice. The word "recklessly" carries a specific legal meaning: it requires conscious disregard of a known risk. Negligence, even gross negligence, does not clear that bar. A company that knew its AI system had dangerous capabilities, failed to implement adequate safeguards, deployed it anyway, and then watched it contribute to a catastrophe would still be protected, as long as a court determined that the failure was merely negligent rather than reckless.

Second, and this is the provision that has drawn the most disbelief, the company must have published safety reports on its website. That is the requirement. Not independent audits. Not government-reviewed safety certifications. Not compliance with external standards. The company writes its own safety report, publishes it on its own website, and that becomes part of its legal shield against liability for mass casualties.

The bill essentially says: if you write a report about how safe your AI is and put it on your website, you get legal protection when it kills people. The fox is not just guarding the henhouse. The fox wrote the building code for the henhouse and is now asking for immunity when the hens die.

There is no requirement that the safety reports be accurate. There is no provision for third-party verification. There is no mechanism for regulators to challenge the reports or demand corrections. The company publishes whatever it wants to publish about the safety of its own systems, and that act of publication becomes evidence in its defense when things go wrong at a civilizational scale.
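
Reduced to the same kind of sketch, the defense makes the gap visible. Again, the names and the boolean framing are mine, and a court's recklessness inquiry is messier than a flag; what the sketch does reproduce faithfully is which inputs matter and which never appear.

    # A minimal sketch of SB 3444's affirmative defense. Names and the
    # boolean framing are illustrative; the two conditions are the bill's.

    def defense_applies(acted_intentionally: bool,
                        acted_recklessly: bool,
                        published_safety_report: bool) -> bool:
        # Condition 1: neither intent nor recklessness. Negligence, even
        # gross negligence, appears nowhere in this test.
        blameless_enough = not (acted_intentionally or acted_recklessly)
        # Condition 2: a self-authored report on the company's own website.
        # Nothing checks whether that report is accurate, audited, or vague.
        return blameless_enough and published_safety_report

    # The scenario above: known risks, inadequate safeguards, a failure
    # a court labels merely negligent. The shield holds.
    assert defense_applies(False, False, True)

Notice what is not a parameter: how rigorous the report was, whether anyone reviewed it, how many people died.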

Why OpenAI Wants This Bill

OpenAI did not just quietly support SB 3444. The company sent representatives to testify before the Illinois legislature in its favor. This is a company that, just two years ago, was positioning itself as the responsible actor in the AI space, the company that believed in regulation, the company that warned the world about the dangers of the technology it was building.

The strategic logic is not hard to see. OpenAI is in the middle of a transition from nonprofit research lab to for-profit corporation. It is raising money at valuations that require exponential growth. It is deploying its models in healthcare, education, government, finance, and military applications. Every one of those deployments carries liability risk. And as the models become more powerful and more deeply integrated into critical systems, the potential magnitude of that liability grows.

A single catastrophic AI failure in a medical system, a power grid, an autonomous weapons platform, or a financial trading network could generate lawsuits that threaten the company's existence. SB 3444 does not eliminate that risk. But it gives OpenAI a statutory defense that would be extraordinarily difficult for plaintiffs to overcome. Proving that a company "intentionally or recklessly" caused a harm, rather than merely negligently caused it, is one of the highest burdens in tort law. Combine that with a self-published safety report, and you have a legal shield that would survive all but the most egregious cases.

In other words, OpenAI wants a world where it can deploy increasingly powerful AI systems, publish its own assessment of their safety, and face no meaningful legal consequences unless a plaintiff can prove that the company deliberately set out to cause a mass casualty event. Mere carelessness, inadequate testing, rushed deployment, ignored warnings from researchers: none of that would be enough.

Who Is Fighting It, and Why

Consumer advocacy groups and AI safety organizations have lined up against SB 3444 with an intensity that reflects just how alarmed they are. The bill is not just bad policy, opponents argue. It is a template. If Illinois passes this framework, other states will copy it. And if enough states adopt it, it becomes the de facto national standard for AI liability, all without Congress ever voting on it.

The opposition has centered on several core arguments. The threshold problem is the most visceral: the idea that 99 deaths or $999 million in damage falls outside the bill's definition of "critical harm" strikes most people as morally obscene. But the deeper objection is structural. The bill creates a regime where the companies building the most dangerous technology in human history get to write their own safety assessments and use those assessments as legal shields.

No other industry in America gets to write its own safety report and use it as a legal defense against killing people. Pharmaceutical companies have the FDA. Airlines have the NTSB. Nuclear plants have the NRC. AI companies, under SB 3444, would have their own blog post.

AI safety researchers have raised a separate concern. The bill, they argue, actually undermines safety by creating a perverse incentive. If publishing a safety report is part of your legal defense, the rational move is to publish reports that minimize known risks and emphasize positive outcomes. The bill does not reward honest risk disclosure. It rewards the appearance of safety. Companies that publish vague, reassuring safety reports would get the same legal protection as companies that publish rigorous, detailed analyses of their systems' failure modes. And since no one is checking, the vague and reassuring approach carries less reputational risk.

The Strategic Reversal: From Opposing Regulation to Writing It

What makes OpenAI's support for SB 3444 particularly notable is that it represents a complete reversal of the company's previous legislative strategy. For years, OpenAI's public position on AI regulation was cautious but engaged. The company supported the idea of guardrails. It warned about the risks of moving too fast. It positioned itself as the grown-up in the room, the company that took safety seriously enough to advocate for its own regulation.

That posture started to crack as the commercial stakes grew. When California introduced its own AI safety bill, SB 1047, in 2024, OpenAI lobbied against provisions that would have increased the company's liability exposure. But opposing liability increases is a defensive move. Supporting a bill that actively creates new legal shields is something else entirely. It is not defense. It is offense. OpenAI is not trying to prevent new liability. It is trying to build a statutory fortress around its existing and future liability exposure.

The company has clearly decided that the era of performing safety concern is over. The business is too big, the deployments too widespread, the potential damages too enormous. The new strategy is straightforward: get legal protection codified into state law before the first mass-casualty AI event forces Congress to act under public pressure. By the time federal legislation becomes inevitable, OpenAI wants the precedent already set at the state level.

What This Means for Everyone Else

If SB 3444 passes and becomes law in Illinois, the immediate practical effect is limited. Most AI liability cases would not be filed in Illinois courts. But the template effect is what matters. Once one state passes a "frontier model" liability shield, lobbying efforts in Texas, Florida, Ohio, and every other state with a tech industry presence will accelerate. The bill is not really about Illinois. It is about establishing the principle that AI companies deserve special legal protection against the catastrophic consequences of their own products.

That principle, once established, is almost impossible to reverse. Legal frameworks create constituencies that defend them. If AI companies get liability protection, they will spend enormous sums ensuring it stays in place. The window for establishing meaningful accountability for AI systems is closing, and SB 3444 is designed to slam it shut.

The Bottom Line

OpenAI is asking the state of Illinois to make it legally acceptable for an AI company to contribute to the deaths of 100 people, the destruction of $1 billion in property, or the development of weapons of mass destruction, as long as the company published a safety report on its own website and no plaintiff can prove it acted intentionally or recklessly. That is not an exaggeration. That is what the bill says. Read it yourself.

There was a version of OpenAI that would have been horrified by SB 3444. The original nonprofit, the one that published its charter promising to ensure AI benefits all of humanity, would have recognized this bill for what it is: a corporate liability shield dressed up in the language of safety. That version of OpenAI is gone. The version that testified in Springfield is a company that builds the most powerful AI systems on Earth and wants the legal right to walk away when those systems cause the worst outcomes imaginable.

The Artificial Intelligence Safety Act. They actually called it a safety act. At some point, you have to admire the audacity. At some point after that, you have to start worrying about what comes next.