Lawyer Uses ChatGPT, Cites 6 Fake Cases, Gets Sanctioned

Category: Legal Failure | Date: May-June 2023 | Location: New York, USA

Key Facts

  • What happened: Attorney Steven Schwartz used ChatGPT to research case law and submitted a brief containing 6 completely fabricated cases
  • Court: U.S. District Court, Southern District of New York
  • Judge: Hon. P. Kevin Castel
  • Sanctions: $5,000 fine for Schwartz and his colleague Peter LoDuca
  • Impact: International media coverage, spawned new AI use policies in courts worldwide


The Incident

In early 2023, attorney Steven Schwartz of the law firm Levidow, Levidow & Oberman was preparing a legal brief in the case Mata v. Avianca, Inc. His client, Roberto Mata, was suing the Colombian airline over a knee injury he said was caused by a metal serving cart during a 2019 flight.

To strengthen his arguments, Schwartz turned to ChatGPT to research relevant case law. ChatGPT obligingly provided him with several case citations that seemed perfect for his brief - cases involving airline liability, personal injury, and statute of limitations issues.

There was just one problem: None of the cases existed.

ChatGPT had "hallucinated" - generating plausible-sounding but entirely fictional case citations, complete with fake judges, fake courts, and fake legal holdings.

The Fake Cases

All six were fabricated - no such cases exist in any court record or legal database:

  • Varghese v. China Southern Airlines Co. Ltd.
  • Shaboon v. Egyptair
  • Petersen v. Iran Air
  • Martinez v. Delta Airlines
  • Estate of Durden v. KLM Royal Dutch Airlines
  • Miller v. United Airlines

Timeline of Events

February 2023

Schwartz uses ChatGPT to research case law for the Avianca brief.

March 1, 2023

Brief containing fabricated citations is filed with the court.

March 15, 2023

Avianca's lawyers tell the court they cannot find the cited cases in any legal database.

May 4, 2023

Judge Castel, calling the submission of "bogus judicial decisions" an unprecedented circumstance, orders the lawyers to show cause why they should not be sanctioned.

May 25, 2023

Schwartz files an affidavit admitting he used ChatGPT for the research and never verified its citations.

June 8, 2023

Sanctions hearing held. Schwartz testifies and apologizes to the court.

June 22, 2023

Judge Castel issues sanctions: a $5,000 fine imposed jointly on the lawyers and their firm, plus required letters notifying their client and each judge falsely identified as an author of the fake opinions. The same day, Mata's underlying case is dismissed as time-barred.

What Schwartz Told the Court

"I did not comprehend that ChatGPT could fabricate cases... I have never used ChatGPT as a source for conducting legal research prior to this occurrence and was unaware of the possibility that its content could be false."

- Steven Schwartz, in a court filing

When confronted with the non-existent cases, Schwartz did something that made the situation worse: he asked ChatGPT to confirm the cases were real. ChatGPT, doubling down on its hallucination, assured him the cases existed and could be found in legal databases.

"Is Varghese a real case?"

"Yes, Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019) is a real case."

- ChatGPT conversation submitted to the court

The Judge's Response

Judge P. Kevin Castel was not amused. In his sanctions order, he wrote:

"Technological advances are commonplace and they should not be combated. But, in the context of attorneys using AI to assist in legal writing, there are risks involved... here, technological tools need to be used with skill and care."

- Judge P. Kevin Castel

Consequences

  • $5,000 fine imposed jointly on Schwartz, LoDuca, and their firm, Levidow, Levidow & Oberman
  • Required letters to their client and to each judge falsely identified as an author of the fabricated opinions
  • Professional embarrassment - the case made international news
  • Bar complaints filed (disposition unclear)
  • Client harm - Mata's underlying case was dismissed as time-barred

Broader Impact

This case became the first major public example of ChatGPT hallucinations causing real-world professional harm. It triggered standing orders from federal judges requiring lawyers to disclose or certify any use of generative AI in filings, new bar association guidance on AI-assisted research, and AI use policies in courts around the world.

Lessons Learned

  • Always verify AI output - ChatGPT will confidently present false information as fact (a verification sketch follows this list)
  • AI will confirm its own lies - asking ChatGPT to verify its output doesn't work
  • Professional responsibility survives AI use - you're still accountable for what you submit
  • "I didn't know" isn't a defense - professionals must understand their tools
  • Hallucinations aren't rare edge cases - they're inherent to how LLMs generate text, predicting plausible words rather than retrieving verified facts
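
The first lesson can be partly automated. Below is a minimal Python sketch of a pre-filing citation check against CourtListener, a free case-law database with a public REST API. The endpoint URL, the type=o (opinions) parameter, and the count response field are assumptions drawn from CourtListener's published API documentation, not from the Avianca record, so confirm them against the current docs - and a keyword hit is only a first pass, never a substitute for reading the opinion itself.

    # Pre-filing sanity check: does a cited case appear in a real legal database?
    # Sketch only - endpoint and response fields assumed from CourtListener's
    # public API documentation; verify against the current docs before use.
    import requests

    SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

    def citation_appears_to_exist(case_name: str) -> bool:
        """True if the search API returns at least one matching opinion.
        False means: stop and investigate before citing."""
        resp = requests.get(
            SEARCH_URL,
            params={"q": f'"{case_name}"', "type": "o"},  # "o" = case-law opinions
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("count", 0) > 0

    # The six fabricated Avianca citations would all fail this check.
    for case in ("Varghese v. China Southern Airlines Co. Ltd.", "Shaboon v. Egyptair"):
        verdict = "found" if citation_appears_to_exist(case) else "NOT FOUND - do not cite"
        print(f"{case}: {verdict}")

A check like this catches wholly invented cases, but a citation that exists can still be misquoted or mischaracterized, so the real opinion still has to be read.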

Get the Full Report

Download our free PDF: "10 Real ChatGPT Failures That Cost Companies Money" - with prevention strategies.


Need Help Fixing AI Mistakes?

We offer AI content audits, workflow failure analysis, and compliance reviews for organizations dealing with AI-generated content issues.

Request a consultation for a confidential assessment.