The Quiet Admission Nobody Was Supposed to Notice
On October 29, 2025, OpenAI did something it had resisted for years. Without a press conference, without a blog post splashed across the front page of its website, and without a single apology to the users who had already been harmed, the company officially banned ChatGPT from providing specific medical, legal, or financial advice. The AI was reclassified as an "educational tool" and nothing more.
Think about what that means. For nearly three years, hundreds of millions of people used ChatGPT to diagnose symptoms, draft legal documents, and make financial decisions. OpenAI watched this happen. OpenAI marketed the tool's versatility. And then, after a cascade of lawsuits, regulatory complaints, and documented cases of real people being hurt by AI-generated recommendations, the company quietly flipped a switch and pretended it had always been this way.
The restriction did not come with a retroactive warning to users who had already followed ChatGPT's medical guidance. It did not come with compensation for the people who trusted an AI chatbot with life-altering decisions. It came with a vague policy update and the hope that nobody would ask too many questions about why it took so long.
The Nonprofit That Called It: ECRI Names AI Chatbots the #1 Health Technology Hazard
ECRI, one of the most respected nonprofit patient safety organizations in the world, does not make alarmist proclamations. For over fifty years, the organization has published an annual list of the top health technology hazards facing hospitals, clinics, and patients. Their assessments are methodical, evidence-based, and taken seriously by every major healthcare system in the country.
For 2026, ECRI named the misuse of AI chatbots as the number one health technology hazard. Not a cybersecurity threat. Not a malfunctioning ventilator. Not a drug interaction database error. AI chatbots. The tools that millions of people use every day to ask about their symptoms, their medications, and whether that chest pain is something to worry about.
Why ECRI Sounded the Alarm
ECRI's assessment was not based on theoretical risk. It was based on a growing body of documented cases where patients followed AI-generated medical advice and experienced adverse outcomes. The organization warned that chatbots present medical information with an authoritative tone that users mistake for clinical expertise, when in reality the models have no understanding of individual medical history, drug interactions, contraindications, or the nuances that separate a benign symptom from a medical emergency.
The most dangerous aspect, according to ECRI, is that users cannot distinguish between accurate medical information and confidently stated hallucinations. The chatbot delivers both with the same polished certainty.
When the most authoritative patient safety organization in America names your product the single biggest health technology hazard of the year, that is not a PR problem. That is an indictment.
NPR Documents ChatGPT Giving Bad Medical Advice, Including Inventing Body Parts
In March 2026, NPR published an investigation into ChatGPT's medical advice capabilities that should have been front-page news everywhere. Experts documented case after case of the chatbot providing incorrect diagnoses, suggesting inappropriate treatments, and, in one of the most surreal findings, literally inventing anatomical structures that do not exist in the human body.
Let that sink in. A tool that hundreds of millions of people use to ask about their health was caught fabricating body parts. Not getting a diagnosis slightly wrong. Not recommending a treatment that was outdated. Inventing anatomy. Creating fictional medical structures and presenting them to users as established fact.
The NPR report also documented instances where ChatGPT suggested diagnoses that contradicted standard medical guidelines, recommended drug dosages without accounting for potential interactions, and provided reassurance about symptoms that medical professionals said warranted immediate emergency care. In each case, the chatbot delivered its response with the same calm, confident, authoritative tone that made it sound like a board-certified physician rather than a statistical language model predicting the next most likely word in a sequence.
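To make that last phrase concrete, here is a minimal toy sketch of what "predicting the next most likely word" means. This is a crude bigram model written for illustration; the corpus and function names are invented, and it resembles ChatGPT's actual architecture in scale and sophistication not at all. But the core failure mode it demonstrates is the same: the model emits whichever continuation is statistically most common in its training data, with no mechanism anywhere for checking whether the resulting sentence is true.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real model trains on trillions of tokens.
corpus = (
    "chest pain is usually harmless . "
    "chest pain is an emergency . "
    "chest pain is usually harmless"
).split()

# Bigram counts: for each word, tally which word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent continuation, true or not."""
    return following[word].most_common(1)[0][0]

# "usually" follows "is" twice in this corpus, "an" only once, so the
# model completes with the reassuring phrasing purely on frequency.
print(most_likely_next("is"))  # -> "usually"
```

Real large language models are vastly more capable than this sketch, but the training objective is the same frequency-driven prediction, which is why a confident, fluent tone carries no information about accuracy.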
By the time OpenAI restricted medical advice in October 2025, users had been receiving this caliber of guidance for nearly three years. How many people delayed seeking real medical attention because a chatbot told them they were fine? That is a question OpenAI has never answered and, based on its track record, never will.
Air Canada Ordered to Pay After Its Chatbot Lied to a Passenger
If you want to understand why companies should never let a chatbot make promises on their behalf, look no further than what happened to Jake Moffatt and Air Canada. Moffatt's grandmother had passed away, and he needed to book a flight. Before purchasing his ticket, he used Air Canada's website chatbot to ask about the airline's bereavement fare policy. The chatbot told him he could book a full-fare ticket now and apply for a bereavement discount retroactively within 90 days.
That was wrong. Air Canada's actual bereavement policy explicitly stated that the discounted fare had to be requested before the flight, not after. The chatbot fabricated a policy that did not exist and presented it to a grieving customer as fact.
The Tribunal's Ruling
When Moffatt tried to claim the refund the chatbot had promised, Air Canada denied it, pointing to the actual written policy on its website. Moffatt took the case to British Columbia's Civil Resolution Tribunal. Air Canada's defense was remarkable in its audacity: the airline argued it could not be held responsible for information provided by its own chatbot, essentially claiming the AI was a separate entity that Air Canada had no obligation to stand behind.
The tribunal rejected that argument entirely. The ruling stated that Air Canada was responsible for all information on its website, whether it came from a static page or a chatbot. The airline was ordered to pay Moffatt roughly CA$812 in damages, interest, and fees. The message was clear: if you deploy an AI chatbot on your website, you own every word it says.
This case became a landmark in AI liability law. And yet, thousands of companies continue to deploy chatbots that give advice on complex topics, from insurance claims to medical billing to legal rights, without adequate safeguards to prevent the exact kind of fabrication that cost Air Canada this case.
ChatGPT Told the World a Norwegian Man Killed His Children. It Was Completely False.
In one of the most disturbing documented cases of AI defamation, ChatGPT generated text claiming that a Norwegian man had murdered two of his children and was serving a 21-year prison sentence. The man had done nothing of the sort. He was a real person, living his life, who discovered that an AI used by hundreds of millions of people was telling anyone who asked that he was a convicted child killer.
The privacy advocacy organization NOYB (None of Your Business), founded by Austrian lawyer Max Schrems, filed a formal GDPR complaint against OpenAI over this incident. The complaint argued that ChatGPT's generation of false and defamatory personal information violated the European Union's data protection regulations, which guarantee individuals the right to have inaccurate personal data corrected.
OpenAI's response to these defamation issues has been, to put it charitably, inadequate. The company has acknowledged that ChatGPT can generate incorrect information about real people but has not implemented any mechanism that reliably prevents it. Users can request corrections, but the model may generate the same false information again in the next conversation. There is no persistent fix. There is no recall system. There is just a language model that might tell someone you committed murder, and the best OpenAI can offer is that it probably will not say that exact thing to the next person who asks.
"I Can't Help With That": The New ChatGPT Experience
If you have used ChatGPT recently and tried to ask about a medical symptom, a legal question, or a financial strategy, you have probably noticed the change. Where the chatbot once provided detailed, specific, and confidently wrong answers, it now delivers something arguably worse: vague generalities wrapped in an endless loop of disclaimers and deflections.
Ask ChatGPT whether your symptoms suggest a specific condition, and you will get a paragraph about how symptoms can have many causes and you should consult a healthcare professional. Ask it to draft a legal document, and it will explain that legal documents require professional expertise and suggest you contact an attorney. Ask it for a specific investment strategy, and it will give you a lecture about risk tolerance and recommend speaking with a financial advisor.
The Worst of Both Worlds
The irony is devastating. For years, ChatGPT gave specific, detailed, and often dangerously wrong advice on medical, legal, and financial matters. Users relied on that advice. Some were harmed by it. Now, the same tool that was once too confident is too cautious, refusing to provide any specific guidance at all. Users who came to depend on ChatGPT for these tasks are left with a tool that can generate a sonnet about their symptoms but will not tell them whether to go to the emergency room.
OpenAI managed to create a product that was dangerous when it was helpful and is useless now that it is safe. The people who were harmed during the "helpful" phase got no warning, no compensation, and no apology. The people trying to use it now get a chatbot that functions as the world's most expensive "consult a professional" sign.
The restriction also raises a fundamental question about what users are actually paying for. OpenAI charges $20 per month for ChatGPT Plus. That subscription once gave users a tool that would attempt to answer any question, however recklessly. Now it gives them a tool that declines to answer entire categories of questions. The price has not changed. The refund policy has not changed. Only the capability has changed, and only because the liability became too obvious to ignore.
The Timeline of Negligence
What makes OpenAI's October 2025 policy shift so damning is not the restriction itself. Restricting a chatbot from giving medical, legal, and financial advice is genuinely the right thing to do. What is damning is the timeline. OpenAI knew, for years, that users were treating ChatGPT as a doctor, a lawyer, and a financial advisor. The company had access to usage data showing exactly how people were using the tool. It received reports of adverse outcomes. It watched ECRI build the case for naming AI chatbots the top health hazard. And it did nothing until the legal and regulatory pressure became unavoidable.
November 2022: ChatGPT launches. Users immediately begin asking medical, legal, and financial questions. OpenAI includes a disclaimer in the terms of service but does nothing to prevent the tool from providing specific advice in these categories.
2023-2024: Reports of harmful AI-generated advice accumulate. The Air Canada chatbot ruling establishes that companies are liable for their chatbots' statements. Medical professionals begin publicly warning patients not to trust AI diagnoses.
2025-2026: NOYB files its GDPR complaint over the fabricated murder conviction. ECRI names AI chatbot misuse the number one health technology hazard for 2026. Lawsuits pile up. On October 29, 2025, OpenAI finally restricts ChatGPT from giving specific advice in these categories. In March 2026, NPR's investigation documents ChatGPT inventing body parts and providing incorrect diagnoses.
The gap between "we know this is dangerous" and "we will stop doing it" was roughly three years. Three years of users following bad medical advice. Three years of people drafting legal documents with fabricated case citations. Three years of financial recommendations from a tool that cannot do basic arithmetic reliably. Three years of profit before responsibility.
What OpenAI Is Not Telling You
OpenAI has framed the restriction as a responsible evolution of the product. The company positions ChatGPT as an "educational tool" that helps users understand concepts rather than providing actionable advice. This framing omits several inconvenient truths.
First, OpenAI never warned existing users that the advice they had previously received might be wrong. There was no mass notification. No email campaign. No banner in the chat interface saying "previous medical, legal, or financial advice you received from this tool may have been inaccurate and you should verify it with a professional." Users who followed ChatGPT's guidance in 2023 or 2024 have no idea they were using a tool that the company itself now considers unfit for that purpose.
Second, the "educational tool" label does not solve the underlying problem. ChatGPT still generates text with an authoritative tone. It still presents information as fact. The only difference is that it now adds a disclaimer suggesting users consult a professional. But decades of research on user behavior show that disclaimers are largely ignored, especially when they follow detailed, confident-sounding information. The disclaimer is a legal shield, not a safety measure.
Third, and most critically, OpenAI has provided no mechanism for accountability for the advice already given. If ChatGPT told you in 2024 that your symptoms were probably nothing and you delayed seeking care, there is no record, no recourse, and no responsibility. OpenAI's terms of service explicitly disclaim liability for the accuracy of ChatGPT's outputs. The company built a tool that gave medical advice to millions, profited from it, restricted it only when forced to, and shielded itself from every consequence.
That is not a policy shift. That is damage control disguised as responsibility.
Were You Harmed by ChatGPT's Medical, Legal, or Financial Advice?
We are documenting cases where users followed AI-generated recommendations and experienced real consequences. Your story could help hold AI companies accountable.