Documenting AI failures so you don't have to learn the hard way
There are now 944 documented cases of AI hallucinations appearing in court filings worldwide. Since the beginning of 2025 alone, researchers have tracked 518 cases in which generative AI produced fabricated content that ended up in U.S. courts. At least 11 states have established policies or rules regarding AI use by lawyers. And the consequences are getting worse: one attorney's refusal to stop submitting ChatGPT-generated fake citations cost his client the entire case.
This is not a hypothetical problem. This is not a "what if AI goes wrong" thought experiment. This is happening right now, in courtrooms across the country, federal and state alike, and the pace is accelerating.
Attorney Steven A. Feldman repeatedly submitted legal filings containing fabricated case citations and misattributed quotes generated by AI. Despite multiple court warnings and orders to show cause, he continued the pattern without verifying his citations. He had access to Westlaw and Lexis but refused to use them.
The worst part: when the court ordered him to explain himself, he used AI to draft his response to the order to show cause. That response contained still more false citations.
Result: Judge Katherine Polk Failla (Southern District of New York) entered a default judgment against his client on February 5, 2026, one of the most severe sanctions in any AI hallucination case. The judge found Feldman acted in "bad faith" and determined "further misconduct is likely to occur in the future."
Read that again. A client, Affable Avenue LLC, lost their entire case not because of the facts, not because of the law, but because their attorney refused to stop letting ChatGPT write his legal briefs. The judge did not just fine him. She did not just sanction him. She entered judgment against his client. The case is over. The client lost. Because of AI hallucinations.
A contract attorney working for Hagens Berman Sobol Shapiro used ChatGPT to generate four briefs filed in federal court. Those briefs contained 11 fabricated citations out of 18 total: more than 60% of the legal citations across the filings were made up.
Co-counsel Celeste Boyd acknowledged using ChatGPT to draft and edit sections but "failed to independently verify the output, citing personal issues at the time."
Result: Judge Fred Slaughter ordered Hagens Berman and partner Robert Carey to pay $10,000 in sanctions, with Boyd receiving a separate $3,000 sanction. The judge rejected the attorneys' request to withdraw and correct the briefs, noting the corrections still contained errors.
Hagens Berman is not a small firm. They are a major national plaintiffs' firm. And even they could not resist the temptation of letting AI do the legal research, with predictable results: most of the citations in their filings were complete fiction.
Two attorneys representing Mike Lindell, Christopher Kachouroff and Jennifer DeMaster, submitted a court filing in February 2025 with more than two dozen mistakes, including hallucinated cases.
Kachouroff initially denied using AI. It was only when Judge Nina Y. Wang directly asked him whether the filing was "the product of generative artificial intelligence" that he admitted it.
Result: $3,000 per attorney ($6,000 total), ordered July 7, 2025. The judge called it "the least severe sanction adequate to deter and punish."
The Pauliah case broke new ground: the AI did not just hallucinate case law, it hallucinated facts. During a summary judgment motion, attorney Begley filed a declaration containing multiple fabricated quotations from deposition testimony that never occurred, complete with manufactured citations to sworn testimony.
Result: Begley ordered to pay $4,000 and client Pauliah ordered to pay $1,000. Judge Carlton Reeves reduced the amounts from the full $8,570 in defense fees, considering both parties' financial circumstances.
That is an important escalation. The early AI hallucination cases all involved fake case citations: made-up judicial opinions that sound plausible but do not exist. Pauliah shows AI is now fabricating testimony. It is putting words in real people's mouths that they never said, under oath, in legal proceedings.
During oral arguments in a Pennsylvania case on December 10, 2025, Judge Matthew Wolf interrupted attorney Thomas W. King III to point out citation problems. Wolf told King directly: "You quote the Pennsylvania Supreme Court from the Bayada case for a quote that does not exist in that case" and "You cite the Popowsky case for a proposition and a quote that does not exist."
He then asked: "Who wrote it? And does it contain AI, artificial intelligence hallucinations?"
The attorneys later filed a letter on February 2, 2026, maintaining that the mistakes were human error, not AI. The case remains pending.
| Case | Court | Date | Sanction | Key Issue |
|---|---|---|---|---|
| Flycatcher / Feldman | S.D.N.Y. | Feb 2026 | Default judgment | Repeated AI use despite warnings |
| OnlyFans / Hagens Berman | C.D. Cal. | Aug 2025 | $13,000 | 11 of 18 citations fake |
| MyPillow / Lindell | D. Colo. | Jul 2025 | $6,000 | Denied AI use until asked |
| Pauliah / Begley | S.D. Miss. | Jan 2026 | $5,000 | Fabricated deposition quotes |
| Butler County / Knoch | PA Commonwealth | Dec 2025 | Pending | Judge caught it during oral arguments |
The trajectory here is unmistakable. In 2023, Mata v. Avianca (the first major AI hallucination case, in which attorney Steven Schwartz submitted ChatGPT-fabricated cases to a federal judge) was treated as a novelty. A cautionary tale. Something that surely would not happen again once word got out.
It has happened hundreds of times since. The documented case count went from a handful in 2023 to 206 by mid-2025 to 944 and counting in early 2026. The sanctions are escalating from fines to default judgments. The AI is not getting better at producing accurate legal citations. And attorneys keep using it anyway, sometimes lying about it when caught.
The Feldman case should terrify every attorney considering using ChatGPT for legal research. His client did not lose because of bad facts or bad law. They lost because their lawyer trusted an AI that makes things up, was warned repeatedly to stop, and kept doing it. The judge entered terminal sanctions. Game over.
At least 11 states (Arizona, Arkansas, California, Connecticut, Delaware, Illinois, New York, Ohio, South Carolina, Vermont, and Virginia) plus the District of Columbia have now established policies or rules on AI use by lawyers. California is advancing legislation requiring attorneys to take "reasonable steps" to confirm AI output, correct fabricated content, and remove biased material before using it in filings. A proposed "Hyperlink Rule" would require every cited opinion, statute, or regulation to be hyperlinked to a reputable legal database.
Meanwhile, at the federal level, President Trump signed an executive order on December 11, 2025, proposing a uniform federal policy framework for AI that could preempt state AI laws. The regulatory landscape is a mess, and the courtrooms keep filling up with AI-generated fiction.
ChatGPT does not know what a legal citation is. It does not understand case law. It does not verify whether the cases it references actually exist. It generates text that looks like legal research but is, in a substantial percentage of cases, complete fabrication. And 944 court filings (that we know of) have been contaminated by this problem.
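The fix the courts keep demanding is boring and manual: pull every cited authority and read it before filing. For illustration only, here is a minimal Python sketch of the mechanical half of that workflow, extracting reporter-style citations from a draft so a human can look each one up. Everything in it is an assumption for demonstration purposes: the regex covers only a handful of federal reporters, nowhere near full Bluebook citation grammar, and it cannot tell a real case from a well-formed hallucination.

```python
import re

# Rough pattern for U.S. reporter citations such as "598 U.S. 594",
# "43 F.4th 1011", or "456 F. Supp. 3d 789". Real citation grammar
# (Bluebook) is far richer; this reporter list is a deliberate
# simplification for illustration only.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                    # volume number
    r"(?:U\.S\.|S\.\s?Ct\.|F\.\s?Supp\.(?:\s?[23]d)?|F\.(?:2d|3d|4th)?)"
    r"\s+\d{1,4}\b"                                    # first page
)

def extract_citations(draft_text: str) -> list[str]:
    """Return the unique reporter-style citations found in a draft filing."""
    return sorted(set(CITATION_RE.findall(draft_text)))

if __name__ == "__main__":
    draft = (
        "Plaintiff relies on Smith v. Jones, 123 F.3d 456, the holding "
        "reported at 598 U.S. 594, and 456 F. Supp. 3d 789."
    )
    for cite in extract_citations(draft):
        # The script only builds the checklist. A human still has to pull
        # each case in Westlaw or Lexis and confirm it exists and actually
        # says what the brief claims it says.
        print(f"[ ] verify in Westlaw/Lexis: {cite}")
```

The design point is that the verification step stays human: no string pattern can distinguish a genuine citation from a plausible fake, which is exactly the failure mode that keeps landing attorneys in front of angry judges.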
Every one of those filings represents a real person's legal matter. A real defendant who had to spend money responding to fake citations. A real client who trusted their attorney to do actual research. A real judge who had to waste time identifying fiction instead of adjudicating facts.
This is not getting better. The tools are not improving fast enough. The attorneys are not learning. And the courts are running out of patience.