Kim Kardashian, the billionaire media mogul who has spent the better part of a decade pursuing a law career, has publicly blamed ChatGPT for making her fail her exams. Not once. Not twice. Multiple times. And her response to the AI that keeps feeding her wrong answers? She yells at it. Like it's a misbehaving employee at one of her businesses. Like it can hear her. Like it cares.
"I'll get mad and I'll yell at it," Kardashian told interviewers, describing her relationship with OpenAI's chatbot. "You made me fail. Why did you do this?" She then described ChatGPT as her "toxic friend" and "frenemy," which is perhaps the most accurate description of the technology anyone has ever given.
The Study Method That Backfired
Kardashian's approach to AI-assisted studying is straightforward and, as it turns out, deeply flawed. She takes pictures of exam questions and feeds them directly into ChatGPT, expecting accurate legal analysis in return. The problem? The answers, in her own words, are "always wrong."
Let that sink in for a moment. One of the most famous women on the planet, with access to literally any tutor, any law professor, any legal mind in the country, chose to study with a chatbot that confidently generates incorrect legal analysis. And she kept doing it. Repeatedly. Even after it made her fail.
"They're always wrong." - Kim Kardashian, on ChatGPT's legal answers, after using it to study for multiple exams
The Long Road to the Bar
Kardashian's legal journey started in 2018, when she enrolled in California's Law Office Study Program, an alternative path to the bar that doesn't require traditional law school. It took her until December 2021 to pass California's "baby bar" exam, the First-Year Law Students' Examination, and she needed four attempts to get there. She took the full California bar exam in July 2025 and announced later that year that she had not passed.
The fact that she struggled with the baby bar before ChatGPT even existed suggests the AI isn't her only obstacle. But the fact that she turned to a known hallucination machine for help with one of the hardest professional exams in the country is a case study in everything that's wrong with how people are using these tools.
Why This Matters Beyond Celebrity Gossip
It's easy to laugh at a Kardashian yelling at a chatbot. It's a funny image. But underneath the celebrity spectacle is a genuinely alarming trend that experts have been warning about for years.
ChatGPT doesn't know the law. It doesn't understand legal precedent. It can't reason through statutory interpretation. What it can do is generate text that sounds like a competent legal analysis while being subtly, sometimes catastrophically, wrong. It presents fabricated case citations with the same confidence as real ones. It misapplies legal standards while maintaining a tone of absolute authority.
Kardashian at least has the resources to recover from a failed exam. She has tutors, time, and money. But students across the country are doing the exact same thing right now, feeding their homework, their study materials, and their practice exams into ChatGPT and trusting the output. Many of them don't have the safety net of being a billionaire.
The Hallucination Problem Nobody Solved
OpenAI has known about the hallucination problem since GPT-3. They acknowledged it with GPT-4. They promised improvements. And yet, in 2026, ChatGPT still generates plausible-sounding but factually incorrect information with alarming regularity.
In the legal domain, this is particularly dangerous. Courts have already sanctioned multiple attorneys for submitting briefs containing fabricated case citations generated by ChatGPT. One U.S. appeals court ordered a lawyer to pay $2,500 for filing a brief riddled with AI hallucinations. These aren't edge cases anymore. This is a pattern.
The technology is fundamentally designed to produce text that sounds correct, not text that is correct. It's a prediction engine, not a knowledge engine. And when people treat it like an oracle, whether they're celebrities studying for the bar or students preparing for finals, the results are predictable: confident wrongness at scale.
The "Toxic Friend" Metaphor Is More Accurate Than She Knows
When Kardashian calls ChatGPT her "toxic friend," she's accidentally nailing the most important critique of consumer AI tools. A toxic friend tells you what you want to hear. A toxic friend agrees with your worst ideas. A toxic friend sounds supportive while leading you in the wrong direction.
That's exactly what ChatGPT does. It validates your framing. It builds on your assumptions. If you feed it a legal question and your phrasing implies a certain answer, it will often agree with your implied conclusion, even if it's dead wrong. It's the ultimate yes-man, and in educational contexts, that's the opposite of what you need.
Good education challenges your thinking. Good tutoring exposes your blind spots. ChatGPT does neither. It reinforces your existing understanding, right or wrong, and wraps it in grammatically perfect prose that makes you feel like you're learning when you're actually just getting more confident about things that aren't true.
What Should Students Actually Do?
The answer isn't complicated, but it's unglamorous: study with real resources. Use verified legal databases. Work with human tutors. Join study groups. Read actual casebooks. These methods have produced lawyers for centuries, and they work because they involve accountability, correction, and genuine understanding.
AI tools can be useful for brainstorming, organizing notes, or getting a rough overview of a topic. But the moment you start relying on them for factual accuracy in high-stakes professional contexts, you're playing a game you will eventually lose.
Kim Kardashian learned this the hard way. The question is whether the rest of the world will learn from her example, or whether millions of students will have to fail their own exams first before the message sinks in: ChatGPT is not your study partner. It's a text generator that doesn't know what it doesn't know. And it will never, ever tell you it's wrong.
It'll just keep talking, sounding confident, while you fail another exam.