Incident Summary
- Date: September 2025
- Jurisdiction: California Court of Appeal
- Attorney: Amir Mostafavi
- AI Tool Used: ChatGPT and other AI tools
- Fabricated Citations: 21 of 23 case quotations in opening brief
- Sanction Amount: $10,000 (highest AI-related fine in California state court)
- Additional Action: Referred to California State Bar
What Happened
Attorney Amir Mostafavi used ChatGPT and other AI tools to "enhance" appellate briefs filed in the California Court of Appeal. On judicial review, the court discovered that 21 of the 23 case quotations in the opening brief were fabricated by the AI.
The fabrication was not limited to the opening brief: the court found that many citations in the reply brief also pointed to non-existent cases, legal authorities hallucinated by ChatGPT.
This case represents a significant escalation in judicial response to AI-generated fake legal citations. The $10,000 sanction is the largest penalty a California state court has imposed on an attorney for AI-related misconduct, and among the highest issued nationwide for attorney misuse of AI.
Court Findings
The court found that the attorney's use of AI tools resulted in briefs containing fabricated case citations that did not correspond to any real legal authorities. The quotations attributed to these non-existent cases were entirely generated by artificial intelligence.
— Based on court records and reporting from CalMatters, October 2025
Why This Matters
This case demonstrates several critical failures in AI tool usage:
Lack of Verification: The attorney failed to verify that the cases cited actually existed before filing briefs with the court.
Scale of Fabrication: With 21 of 23 quotations fabricated (roughly 91%), this was not an isolated hallucination but a systemic failure of the AI tools to produce accurate legal research.
Judicial Response: The substantial fine and bar referral signal that courts are taking increasingly serious action against attorneys who submit AI-generated content without proper verification.
Pattern of Increasing Cases
According to legal researcher Damien Charlotin, who maintains a database of AI hallucination cases in legal filings, the frequency of such incidents has accelerated dramatically: "Before this spring in 2025, we maybe had two cases per week. Now we're at two cases per day or three cases per day."
Another tracker identifies 52 such cases in California alone and more than 600 nationwide, indicating that the problem is widespread across the legal profession.