Key Facts
- What happened: Attorney Steven Schwartz used ChatGPT to research case law and submitted a brief containing 6 completely fabricated cases
- Court: U.S. District Court, Southern District of New York
- Judge: Hon. P. Kevin Castel
- Sanctions: $5,000 penalty imposed jointly on Schwartz, colleague Peter LoDuca, and their firm
- Impact: International media coverage, spawned new AI use policies in courts worldwide
The Incident
In early 2023, attorney Steven Schwartz of the law firm Levidow, Levidow & Oberman was preparing a legal brief in the case Mata v. Avianca, Inc. His client, Roberto Mata, was suing the Colombian airline over an injury he said he suffered when a metal serving cart struck his knee during a flight.
To strengthen his arguments, Schwartz turned to ChatGPT to research relevant case law. ChatGPT obligingly provided him with several case citations that seemed perfect for his brief - cases involving airline liability, personal injury, and statute of limitations issues.
There was just one problem: None of the cases existed.
ChatGPT had "hallucinated" - generating plausible-sounding but entirely fictional case citations, complete with fake judges, fake courts, and fake legal holdings.
The Fake Cases
The brief cited six decisions that simply do not exist, among them Varghese v. China Southern Airlines Co. Ltd. - the very case Schwartz would later ask ChatGPT itself to confirm.
Timeline of Events
1. Schwartz uses ChatGPT to research case law for the Avianca brief.
2. The brief containing the fabricated citations is filed with the court.
3. Opposing counsel notices the cases cannot be found in any legal database.
4. Judge Castel orders Schwartz to show cause why he should not be sanctioned.
5. A hearing is held; Schwartz admits he used ChatGPT and never verified the citations.
6. Judge Castel issues sanctions: a $5,000 penalty and required letters to the judges falsely identified as authors of the fake opinions.
What Schwartz Told the Court
"I did not comprehend that ChatGPT could fabricate cases... I have never used ChatGPT as a source for conducting legal research prior to this occurrence and was unaware of the possibility that its content could be false."
When confronted with the non-existent cases, Schwartz did something that made the situation worse: he asked ChatGPT to confirm the cases were real. ChatGPT, doubling down on its hallucination, assured him the cases existed and could be found in legal databases.
"Is Varghese a real case?"
"Yes, Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019) is a real case."
The Judge's Response
Judge P. Kevin Castel was not amused. In his sanctions order, he wrote:
"Technological advances are commonplace and they should not be combated. But, in the context of attorneys using AI to assist in legal writing, there are risks involved... here, technological tools need to be used with skill and care."
Consequences
- $5,000 penalty imposed jointly on Schwartz, colleague Peter LoDuca, and the firm
- Letters required to the client and to each judge falsely identified as the author of a fabricated opinion
- Professional embarrassment - case made international news
- Bar complaints filed (disposition unclear)
- Client harm - Mata's underlying claim was dismissed as time-barred in a separate ruling
Broader Impact
This case became the first major public example of ChatGPT hallucinations causing real-world professional harm. It triggered:
- New AI disclosure requirements in courts across the US
- Ethics opinions from state bar associations
- Updates to legal research training programs
- Corporate policies requiring AI output verification
- Academic papers on AI reliability in professional contexts
Lessons Learned
- Always verify AI output against authoritative sources - ChatGPT will confidently present false information as fact (a minimal verification sketch follows this list)
- AI will confirm its own lies - asking ChatGPT to verify its output doesn't work
- Professional responsibility survives AI use - you're still accountable for what you submit
- "I didn't know" isn't a defense - professionals must understand their tools
- Hallucinations aren't rare edge cases - they're a byproduct of how LLMs generate text, predicting plausible continuations rather than retrieving verified facts
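What "verify" means in practice is checking every citation against an authoritative source, never asking the model to vouch for itself. The Python sketch below is illustrative only and not drawn from the case record: the regex, the `lookup_in_legal_database` stub, and the function names are hypothetical, and the stub would need to be wired to a real legal database (Westlaw, Lexis, CourtListener, or similar) before it verifies anything.

```python
import re

# Rough pattern for federal reporter-style citations such as "925 F.3d 1339".
# Real citation parsing is far messier; this is a simplification for illustration.
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.\s]*?\d+[a-z]*\s+\d{1,5}\b")


def lookup_in_legal_database(citation: str) -> bool:
    """Hypothetical stub: connect this to an authoritative source such as
    Westlaw, Lexis, or CourtListener. It must never query the same language
    model that produced the draft."""
    raise NotImplementedError("Wire this to a real citation lookup.")


def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return every citation in the draft that could not be independently confirmed."""
    unverified = []
    for citation in CITATION_RE.findall(draft_text):
        try:
            if not lookup_in_legal_database(citation):
                unverified.append(citation)
        except NotImplementedError:
            # Until a real lookup exists, treat every citation as unverified.
            unverified.append(citation)
    return unverified


if __name__ == "__main__":
    draft = "See Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019)."
    print(flag_unverified_citations(draft))  # ['925 F.3d 1339']
```

Even a pass like this is only a first filter; the real safeguard in the Avianca situation would have been pulling and reading each cited opinion.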
Sources
- New York Times - Here's What Happens When Your Lawyer Uses ChatGPT
- Reuters - New York lawyers sanctioned for using fake ChatGPT cases
- Court filings: Mata v. Avianca, Inc., Case No. 1:22-cv-01461 (S.D.N.Y.)