ChatGPT's hallucination problem has led to over 1,000 documented legal cases involving fabricated citations | Photo: Cronkite News / Arizona PBS

The legal profession has a hallucination problem, and it is not getting better. In the nearly three years since a New York attorney made international headlines for submitting a ChatGPT-generated brief stuffed with fictitious case citations, you might expect that the legal profession would have collectively decided to stop doing that. You would be wrong. The problem has exploded in scale, and the consequences for the lawyers involved are becoming increasingly severe.

Researcher Damien Charlotin has now catalogued over 1,000 legal cases involving AI-generated hallucinations. That number is not a typo. One thousand documented instances of AI chatbots inventing case law, fabricating judicial opinions, generating citations to courts that do not exist, and producing legal arguments built on foundations of pure fiction. And those are just the ones that have been caught.

1,000+ Legal cases involving AI hallucinations catalogued
700+ Actual court cases with AI-fabricated citations
$31,000 Highest single fine for AI-hallucinated brief
$4,713 Average monetary penalty per sanctioned case

The Scale: 1,000 Cases and Counting

Charlotin's research paints a picture that should alarm anyone who interacts with the legal system, which is to say, everyone. Of the more than 1,000 catalogued cases, over 700 involve actual court proceedings where AI-hallucinated content was submitted to judges. The remaining cases involve regulatory filings, arbitration proceedings, and other legal contexts where fabricated citations were discovered.

The true scope of the problem is almost certainly much larger. Charlotin's catalogue represents instances that were caught, flagged, and documented. For every hallucinated citation that a judge notices, how many slip through undetected? For every attorney who gets sanctioned, how many quietly pull filings and hope nobody noticed? The 1,000 documented cases are best understood as the visible surface of a much deeper crisis.

What makes this particularly disturbing is the trajectory. The rate of new cases being added to the catalogue is accelerating, not slowing down. Despite widespread media coverage of AI hallucination scandals in the legal profession, despite bar associations issuing guidance, despite courts implementing disclosure requirements, attorneys continue to submit AI-generated content without verifying it. The warnings are not working.

The Fines: $100 to $31,000 Per Violation

At least 15 of the catalogued cases have resulted in monetary penalties, and the range tells its own story. The smallest fine was $100, a slap on the wrist that probably cost the attorney more in time spent responding to the sanction than the penalty itself. The largest was $31,000, levied against a law firm after the court discovered that nearly one-third of the citations in their brief were AI-fabricated. The average across all penalized cases comes to $4,713.

That $31,000 case deserves a closer look because of what it reveals about the mechanics of AI hallucination in legal practice. Nearly one-third of the citations. Not one or two errant case names buried in a footnote. A substantial percentage of the entire legal foundation of the brief was invented. This means the attorney either did not check a single citation before filing, or checked some and somehow missed the fact that roughly every third one was fictional. Neither possibility is reassuring.

"Nearly one-third of the citations in the brief were AI-fabricated. This was not a minor error of carelessness. It was a systemic abdication of the most basic professional responsibility a lawyer has: making sure the law they cite actually exists."

The monetary penalties are just one dimension of the consequences. Attorneys who have been caught face reputational damage that no fine can quantify. In a profession built on credibility, being known as the lawyer who submitted fake cases is career poison. Some attorneys have been referred to bar disciplinary authorities, which can result in suspension or disbarment. Some have been publicly reprimanded in written opinions that will follow them for the rest of their careers. And in every case, the client who hired them expecting competent representation instead got a brief built on imaginary law.

Gordon Rees: The Firm That Will Not Learn

If there is a poster child for the legal profession's AI hallucination problem, it might be Gordon Rees Scully Mansukhani, one of the largest law firms in the United States. Gordon Rees has had at least three documented encounters with AI-generated citation problems, a pattern that would be embarrassing for a solo practitioner and is extraordinary for a firm of this size and reputation.

The first incident surfaced in October 2025, when a filing from the firm was found to contain hallucinated case citations. In December 2025, the firm was reprimanded in a separate matter. And in early 2026, yet another instance was flagged. Three strikes across a span of months, at a firm with the resources to implement AI usage policies, train its attorneys, and verify its work product before filing.

Gordon Rees is not a scrappy two-person shop struggling to keep up with technology. It is a major national law firm with offices across the country. The fact that a firm of this caliber has been caught three times suggests that the problem is not about resources or access to verification tools. It is about a culture that has not adapted to the risks of the tools it is using. When a firm that can afford to do everything right keeps getting it wrong, the issue is deeper than individual carelessness.

Three separate incidents at one of the nation's largest law firms. If Gordon Rees, with all its resources, cannot stop submitting AI-hallucinated citations, what chance does the average solo practitioner have? The legal profession's AI problem is not about access to tools. It is about a fundamental failure to treat verification as non-negotiable.

The Non-Delegable Duty: Courts Draw a Line

Courts have been remarkably clear about one thing: attorneys have a non-delegable duty to verify every citation in their filings. This is not a new legal principle. Lawyers have always been responsible for the accuracy of what they submit to the court. What is new is that courts are explicitly applying this principle to AI-generated content, and they are doing so with increasing impatience.

The doctrine of non-delegable duty means exactly what it sounds like. You cannot hand your verification responsibilities to someone else, whether that someone is a paralegal, a junior associate, or a chatbot. If your name is on the filing, you are personally responsible for every word, every citation, and every claim of law contained within it. Telling a judge "I did not realize ChatGPT made that up" is not a defense. It is an admission that you failed to do your job.

Several courts have now issued standing orders requiring attorneys to disclose when AI tools were used in the preparation of legal documents. Some jurisdictions require attorneys to certify that all citations have been independently verified. These requirements are becoming more common and more stringent, driven by the steady stream of AI hallucination incidents that show no sign of slowing down.

The message from the bench is unambiguous: if you use AI to draft your brief, you verify it with the same rigor you would apply to any other source. If you submit fabricated citations, you will be sanctioned. If you do it repeatedly, the consequences will escalate. Courts are not interested in hearing about how the technology is new or how the attorney was not technically trained on these tools. The standard is the standard, and it does not lower because the method of failure changed.

A New Tool to Catch What Lawyers Won't

In March 2026, a new development emerged that underscores both the severity of the problem and the legal profession's inability to police itself: someone built a tool specifically designed to catch AI hallucinations in legal briefs. The fact that this tool needs to exist tells you everything about where we are.

The tool works by cross-referencing every citation in a legal document against actual legal databases, identifying cases that do not exist, opinions that were never written, and court names that are fictional. It is, in essence, a hallucination detector for the legal profession, automating the verification work that attorneys are supposed to be doing themselves but clearly are not.
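To make the mechanics concrete, here is a minimal sketch in Python of the kind of cross-referencing such a tool performs. It is an illustration under stated assumptions, not the tool itself: the citation regex is deliberately simplified, and the KNOWN_CITATIONS set is a hypothetical stand-in for a real legal database such as a Westlaw, LexisNexis, or CourtListener export.

```python
import re

# Minimal sketch of the cross-referencing idea described above. Not the actual
# March 2026 tool: the pattern is simplified, and KNOWN_CITATIONS is a
# hypothetical stand-in for an authoritative citation database.

# Matches U.S. reporter-style citations such as "576 U.S. 644":
# volume number, reporter abbreviation, first page.
CITATION_PATTERN = re.compile(r"\b(\d{1,4})\s+([A-Z][A-Za-z0-9.\s]{1,15}?)\s+(\d{1,4})\b")

# Hypothetical lookup table; a real checker would query a full legal database.
KNOWN_CITATIONS = {
    ("576", "U.S.", "644"),  # Obergefell v. Hodges (a real citation)
    ("550", "U.S.", "544"),  # Bell Atlantic Corp. v. Twombly (a real citation)
}


def find_unverified_citations(text: str) -> list[str]:
    """Return every citation in `text` that cannot be confirmed in the database."""
    suspect = []
    for match in CITATION_PATTERN.finditer(text):
        volume, reporter, page = match.groups()
        if (volume, reporter.strip(), page) not in KNOWN_CITATIONS:
            suspect.append(match.group(0))
    return suspect


if __name__ == "__main__":
    # "123 F.4th 9999" is a made-up citation used here to show the failure case.
    brief = "Plaintiff relies on 576 U.S. 644 and on 123 F.4th 9999 for dismissal."
    for citation in find_unverified_citations(brief):
        print(f"UNVERIFIED: {citation} -- confirm this case exists before filing")
```

The important design choice is the failure mode: anything the checker cannot positively confirm gets flagged for a human to verify, which is the right default in a setting where a single missed fabrication can mean a sanction.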

There is something darkly funny about the progression. AI creates the hallucinations. A different tool is built to catch the hallucinations that AI created. Lawyers are caught in the middle, using one technology to generate their work product and needing another technology to verify that the first technology did not invent the law. If you designed a system to illustrate the absurdity of deploying AI without adequate safeguards, you could not do better than this.

But the tool also raises a more uncomfortable question: if automated verification is now possible and available, does every attorney who submits an AI-hallucinated citation have even less of an excuse? The technology to check exists. The obligation to check has always existed. The penalties for not checking are well documented. And yet the filings keep coming.

Why It Keeps Happening Despite the Consequences

Understanding why attorneys continue to submit AI-hallucinated citations requires understanding the economics and psychology of modern legal practice. The average attorney is not using ChatGPT because they are lazy. They are using it because they are drowning.

The billable hour model creates enormous pressure to produce work product quickly. Research that used to take hours can ostensibly be done in minutes with AI. The temptation to use that speed advantage is overwhelming, especially for solo practitioners and small firms competing against larger operations. When a tool promises to turn hours of research into minutes of conversation, the economic incentive to use it is irresistible.

The problem is what happens after the AI generates its output. Verification takes time. Checking every citation against Westlaw or LexisNexis takes the kind of careful, methodical work that AI was supposed to eliminate. Many attorneys are cutting the verification step not because they do not know they should do it, but because doing it negates the time savings that made the AI attractive in the first place. It is a trap: the tool is only faster if you skip the part that makes it safe.

The AI Legal Hallucination Crisis Timeline

Jun 2023: New York attorney Steven Schwartz makes headlines after submitting a ChatGPT-generated brief with six fictitious case citations. He is sanctioned by the court.
2024: Courts begin issuing standing orders requiring disclosure of AI tool usage. Incidents continue to surface across jurisdictions.
Oct 2025: Gordon Rees, one of the largest US law firms, submits a filing with AI-hallucinated case citations, the first of three documented incidents.
Dec 2025: Gordon Rees is reprimanded in a second AI citation incident. A separate firm is fined $31,000 after nearly one-third of the citations in its brief are found to be AI-fabricated.
Early 2026: Gordon Rees is flagged in a third AI hallucination incident. Charlotin's catalogue passes 1,000 documented cases, at least 15 of which result in monetary penalties.
Mar 2026: A new tool is developed specifically to detect AI hallucinations in legal briefs. The fact that it needs to exist is its own indictment of the profession's failure to self-regulate.

There is also a knowledge gap that persists despite everything. Some attorneys genuinely do not understand that AI language models hallucinate. They treat ChatGPT like a search engine, assuming that if it produces a citation, that citation must exist somewhere. The interface is designed to inspire confidence. The responses are articulate, well-formatted, and authoritative in tone. Nothing in the output signals "warning: I may have invented this." The tool is built to sound right, and attorneys who are not trained on its limitations take that confidence at face value.

And then there is the most uncomfortable explanation of all: some attorneys know the risk and use the tool anyway because they calculate that the odds of getting caught are low enough to justify the time savings. For every attorney who gets sanctioned, dozens submit AI-assisted briefs that are never flagged. Enforcement depends on a judge happening to notice that a cited case does not exist, which makes it random and inconsistent. Until verification becomes mandatory and automated at the filing stage, the cost-benefit calculation will continue to favor cutting corners.

Stop Using ChatGPT for Legal Work. Here Is What to Use Instead.

Here is what makes this whole crisis so frustrating: ChatGPT is not the only AI tool available to lawyers. It is just the worst one for legal research. And while the catalogue of fabricated-citation cases has climbed past 1,000, other AI platforms have been designed from the ground up to handle exactly this problem, either by flagging their own uncertainty or by making every claim verifiable at the source.

Claude, built by Anthropic, takes a fundamentally different approach to uncertain information. Where ChatGPT confidently generates fictional case citations that look indistinguishable from real ones, Claude is designed to say "I don't know" when it is not confident in an answer. That sounds like a small distinction, but in legal practice, it is the difference between filing a brief that gets you sanctioned and getting a response that tells you to go verify something yourself. Claude is not perfect, and no AI tool should be used without verification. But a tool that flags its own uncertainty is categorically safer than one that presents fabrications with the same confidence as facts.

Perplexity has also become a go-to for legal professionals who want AI-assisted research without the hallucination roulette. Perplexity's core feature is that it cites real sources with actual links for every claim it makes. You can click through and verify that the citation exists, that the case was real, that the holding says what Perplexity claims it says. It is not a replacement for Westlaw or LexisNexis, but it is a dramatically safer starting point than asking ChatGPT to generate your case law and hoping for the best.

The March 2026 hallucination detection tool for legal briefs is a welcome development, but it is also an indictment. We now need AI tools to catch the mistakes of other AI tools. If lawyers had simply used platforms that cite real sources, like Perplexity, or platforms that flag uncertainty, like Claude, most of these 1,000 cases would never have happened.

Some forward-thinking law firms have already made the switch. They are using Claude for drafting and analysis work where the model's tendency to express uncertainty prevents the kind of confident fabrication that gets attorneys sanctioned. They are using Perplexity for preliminary research because every claim comes with a verifiable source link. And they are running their final briefs through the new March 2026 citation verification tool as a last line of defense before filing.

None of this is about being anti-AI. AI is going to transform legal practice, and the firms that use it effectively will have an enormous advantage. But "effectively" does not mean "recklessly." It means choosing the right tool for the job. ChatGPT was designed to be a general-purpose conversational assistant. It was not designed for a profession where inventing a single citation can get you fined $31,000 and referred to the bar for disciplinary action. Other tools were built with exactly those stakes in mind, and lawyers who are still using ChatGPT for legal research in 2026 are choosing the most dangerous option available when better alternatives exist right now.

The Verdict

Over 1,000 documented cases. At least 15 monetary sanctions. Fines up to $31,000. One of the nation's largest law firms caught three times. And the rate of new incidents is accelerating, not slowing. The legal profession's AI hallucination crisis is not a growing pain. It is a structural failure that will continue until courts mandate automated citation verification at the point of filing. Voluntary compliance has failed. The data proves it.

The legal profession likes to think of itself as careful, methodical, and committed to accuracy above all else. That self-image is being destroyed, one fabricated citation at a time. More than 1,000 times, an attorney looked at AI-generated output and decided it was good enough to submit to a court of law without verifying it. More than 1,000 times, clients paid for legal work built on fictional foundations. More than 1,000 times, the system that is supposed to be the most rigorous arbiter of truth in our society was fed lies generated by a machine.

The solution is not to ban AI from legal practice. The solution is to acknowledge that AI hallucinations are not edge cases or rare malfunctions. They are a fundamental feature of how these tools work. Every output must be verified. Every citation must be checked. Every attorney who signs a brief is personally responsible for the law cited within it. These principles are not new. They are as old as the profession itself. The only thing that has changed is that the temptation to skip the verification has never been stronger, and the consequences of doing so have never been more visible.

One thousand cases and counting. The number will be higher by the time you finish reading this.