The Setup: A Man, a Woman, and a Dog Named Kyra
This is a story about a dog. Or rather, it's a story about what happens when nobody in the entire legal system bothers to do the one thing that separates lawyers from fortune tellers: checking whether the law they're citing actually exists.
Joan Pablo Torres and Leslie Ann Munoz dissolved their domestic partnership in 2022. They shared a dog named Kyra. In 2024, Torres went to court seeking shared custody and visitation rights for Kyra. This is California. People go to court over dogs. It happens. What doesn't normally happen is what came next.
What came next was a completely fabricated legal citation, conjured from the digital void, traveling through six layers of the legal system like a virus with an immunity badge. It passed through the hands of a Reddit blogger, a client, an attorney, the opposing attorney, a trial judge, and into appellate court filings. Not one person, at any point, opened a legal database and spent 30 seconds verifying that the case being cited was real.
It was not real. It had never been real. And the California Fourth Appellate District had to be the one to point that out, in a published opinion (D085584), because apparently everyone else was too busy to look.
The Journey of a Lie: Six Stops, Zero Checkpoints
The fake citations were "Marriage of Twigg (1984) 34 Cal.3d 926" and "Marriage of Teegarden (1995) 33 Cal.App.4th 1572." Neither case existed in the form presented. They had the formatting of real California case law. They had plausible-looking volume numbers and reporter citations. They looked legitimate to anyone who didn't bother looking them up. Which turned out to be everyone.
Here's how the lie traveled, step by step, through the legal system:
A Reddit blogger writing under the handle "Sassafras Patterdale," who is also described as a podcaster and animal rescuer, published a post containing the fabricated citations. The hallmarks of AI-generated legal content were all there: perfectly formatted case names, plausible reporter citations, and zero basis in reality.
Munoz, the opposing party, included the Reddit blog post in her client declaration and cited it as supporting legal authority. A Reddit post. In a court filing. Citing fake case law.
Attorney Roxanne Chung Bonar, representing Munoz pro bono through a family connection (she was Munoz's cousin), incorporated the fake citations into her formal opposition brief. She did not verify them. She went further: she fabricated additional citation details to make them look more legitimate.
Here's where it gets truly absurd. Torres's own attorney, the one who should have been challenging these citations, drafted a proposed court order that included the fake citations without verifying them either. The other side's fabricated law ended up in his own paperwork.
The trial judge signed the order. The fake citations were now part of an official court ruling. A judge put their name on a document built on legal authority that did not exist. The system designed to be the final safeguard against legal fiction became the thing that legitimized it.
Bonar continued citing the fake cases in appellate filings before the Fourth Appellate District. When challenged, she didn't back down. She doubled down, defending the citations as legitimate and accusing opposing counsel of incompetence for questioning them.
The Sanctions: $5,000 and a Professional Disgrace
The California appellate court didn't just identify the fake citations. It threw the book at Bonar, ordering $5,000 in sanctions. That number is significant. Courts typically impose sanctions around $1,500 for this kind of misconduct. The elevated amount was specifically because Bonar "persisted in and aggravated the misconduct" by fabricating additional details about the fake citations and then attacking opposing counsel for daring to question their validity.
Think about that. She didn't just cite fake law. When someone pointed out it was fake, she invented more fake details to prop up the original fake citation, then accused the person catching the error of being the incompetent one. That takes a special kind of confidence. The kind of confidence that comes from never, not once, opening Westlaw or LexisNexis to see if the cases were real.
The Cruelest Irony
Torres, the man who correctly identified the fabricated citations, lost his appeal anyway. Why? Because his own attorney had included the fake citations in his draft order without checking them either. The court noted that both sides failed in their basic professional duty to verify legal authority. The system punished the person who caught the fraud because his own lawyer had already endorsed it.
This Is Not an Isolated Incident. This Is the New Normal.
The Kyra case is horrifying on its own. But it's not a freak accident. It's a symptom of something much larger and much worse: the widespread, systematic contamination of legal proceedings by AI-generated hallucinations that nobody bothers to verify.
In June 2023, a federal judge in New York sanctioned attorneys $5,000 in Mata v. Avianca after they submitted a brief full of ChatGPT-fabricated case citations. Attorney Steven Schwartz admitted he used ChatGPT to research the brief and that the AI "assured him" the cases were real and could be found on legal databases. They could not, because they did not exist. Judge P. Kevin Castel found the attorneys acted with "subjective bad faith."
That was supposed to be the wake-up call. That was 2023. It is now 2026. And here we are, documenting a case where a hallucinated citation traveled through six stages of the legal system, survived two separate attorneys' review, got a judge's signature, and made it all the way to appellate court before anyone bothered to check.
The Mata case was supposed to scare lawyers into checking their AI-generated citations. Instead, it seems to have taught them nothing. The cases keep coming. Lawyers keep submitting fabricated citations. Courts keep catching them only after the damage is done. And the AI keeps hallucinating with the confidence of a tenured professor who has never been wrong about anything.
The Real Problem: AI Hallucinations Are Perfectly Formatted Lies
Here's what makes AI-generated legal hallucinations so dangerous, and why the Kyra case is a preview of a much larger catastrophe. When ChatGPT fabricates a case citation, it doesn't just make up a name. It generates a complete, perfectly formatted citation with a case name that sounds plausible, a reporter volume and page number that fall within reasonable ranges, and a year that makes contextual sense. "Marriage of Twigg (1984) 34 Cal.3d 926" looks exactly like a real California Supreme Court citation. It follows every formatting convention. It has the right reporter abbreviation. The volume number is in range for that era.
The only thing wrong with it is that it's completely made up.
This is the core danger of AI hallucination in professional contexts. The outputs don't look wrong. They look perfect. They look more polished and properly formatted than what many humans would produce. The AI doesn't stumble or hesitate or flag uncertainty. It presents fiction with the formatting and confidence of settled law.
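The gap between "well formatted" and "real" is easy to demonstrate in code. Here is a minimal Python sketch (the regex is an illustrative approximation of California citation format, not any official citation grammar): a pattern that checks the shape of a California reporter citation happily accepts both fabricated cases, because format validation tests shape, not existence.

```python
import re

# Rough pattern for California case citations, e.g.
# "Marriage of Twigg (1984) 34 Cal.3d 926".
# Illustrative approximation only, not an official citation grammar.
CAL_CITE = re.compile(
    r"(?P<name>[A-Z][\w' ]+?) "
    r"\((?P<year>\d{4})\) "
    r"(?P<volume>\d+) "
    r"(?P<reporter>Cal\.(?:App\.)?(?:2d|3d|4th|5th)?) "
    r"(?P<page>\d+)"
)

fabricated = [
    "Marriage of Twigg (1984) 34 Cal.3d 926",
    "Marriage of Teegarden (1995) 33 Cal.App.4th 1572",
]

for cite in fabricated:
    m = CAL_CITE.fullmatch(cite)
    # Both fake citations parse cleanly: right reporter abbreviation,
    # plausible volume and page. Nothing about the string itself
    # reveals that the case does not exist.
    print(cite, "->", "well formed" if m else "malformed")
```

Both strings pass. That is the whole problem: a hallucinated citation is indistinguishable from a real one until you look it up.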
Mata in 2023, Kyra in 2026. Three years apart. Same dollar amount. Same failure. The legal profession learned absolutely nothing.
Six Chances to Stop This With One Search
Let's be very clear about how easy it would have been to prevent the entire Kyra debacle. At any of the six stages, one person needed to do one thing: type the case name into a legal database. That's it. Westlaw, LexisNexis, Google Scholar, even a basic Google search would have revealed that "Marriage of Twigg" and "Marriage of Teegarden" did not exist as cited. The search would have taken less than 30 seconds.
The Reddit blogger didn't check. The client didn't check. Attorney Bonar didn't check. Torres's own attorney didn't check. The trial judge didn't check. And Bonar still didn't check when she filed appellate briefs citing the same fake cases.
Six opportunities. Six failures. One dog named Kyra caught in the middle of a legal system that couldn't be bothered to do the bare minimum.
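The missing checkpoint is, in code terms, a single lookup before a citation is allowed into a filing. The sketch below uses a toy in-memory index as a stand-in for a real legal database (Westlaw, LexisNexis, or a free service like CourtListener); the `checkpoint` function and the index are illustrative, not any real database API. Marvin v. Marvin is included as a real California Supreme Court case for contrast.

```python
# A toy "verify before you cite" checkpoint. VERIFIED_INDEX stands in
# for a real legal database; in practice you would query that service
# instead of a local set.
VERIFIED_INDEX = {
    # A real California Supreme Court case, for contrast.
    "Marvin v. Marvin (1976) 18 Cal.3d 660",
}

def checkpoint(citations: list[str]) -> list[str]:
    """Return the citations that could NOT be verified.

    A brief should not be filed while this list is non-empty.
    """
    return [c for c in citations if c not in VERIFIED_INDEX]

brief_citations = [
    "Marriage of Twigg (1984) 34 Cal.3d 926",            # fabricated
    "Marriage of Teegarden (1995) 33 Cal.App.4th 1572",  # fabricated
    "Marvin v. Marvin (1976) 18 Cal.3d 660",             # real
]

unverified = checkpoint(brief_citations)
for cite in unverified:
    print("UNVERIFIED:", cite)
```

Both fake citations come back unverified and the real one passes. That is the entire 30-second check that six checkpoints skipped.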
What This Means for Every Court Case in America
If a completely fabricated citation can survive six stages of legal review in a California appellate case, how many fake citations are sitting in court orders right now, in cases where nobody ever challenged them? How many trial court rulings are built on legal authority that a chatbot invented? How many people have won or lost their cases, their custody battles, their freedom, based on law that does not exist? The Kyra case was caught. How many weren't?
A Dog, a Bot, and a Broken System
Kyra is a dog. She doesn't know that her custody battle became a landmark case in the ongoing collapse of AI-assisted legal practice. She doesn't know that her name is now attached to a published appellate opinion documenting how two attorneys, a trial judge, and a blogger conspired through sheer laziness to contaminate a court proceeding with fake law.
But the rest of us should know. Because the Kyra case is not about a dog. It's about a legal system that has become so reliant on copying and pasting that nobody reads what they're pasting anymore. It's about attorneys who treat citations like decoration instead of authority. It's about judges who sign orders without independently verifying the legal basis. And it's about AI systems that generate perfectly formatted lies with the confidence of absolute certainty.
The citation was born on Reddit. It traveled through a client's declaration. It was laundered through an attorney's brief. It was adopted by the opposing attorney's own draft. It was signed by a judge. It was filed with an appellate court. At no point did the system work as designed. At no point did anyone do the one thing that lawyers are supposed to do better than anyone else: check whether the law is real.
The dog's name was Kyra. The citation's name was "Marriage of Twigg." One of them actually exists.
AI Is Contaminating the Legal System
From Mata v. Avianca in 2023 to the Kyra dog custody case in 2026, lawyers keep submitting AI-fabricated citations to courts. The sanctions aren't working. The warnings aren't working. The system is failing.