ECRI, the independent nonprofit patient safety organization that has published its annual Top 10 Health Technology Hazards report for over a decade, has named the misuse of AI chatbots in healthcare as the single most dangerous health technology threat for 2026. Not cybersecurity. Not surgical robots malfunctioning. Not medication dispensing errors. Chatbots. The same chatbots that 40 million people turn to every single day for health information, according to OpenAI's own analysis.
Let that number sink in. Forty million people, every day, are typing their symptoms, their fears, and their medical questions into a text box connected to an AI system that was never designed for healthcare, is not regulated as a medical device, and has been documented inventing body parts that do not exist in human anatomy.
Why ECRI Ranked AI Chatbot Misuse Above Every Other Health Technology Risk in 2026
ECRI doesn't make this call lightly. This is the organization that hospitals, health systems, and regulators have relied on for decades to identify the technologies that pose the greatest risk to patient safety. When they put something at number one, it means the evidence is overwhelming and the trajectory is alarming.
The core of the problem is deceptively simple: general-purpose AI chatbots like ChatGPT, Google Gemini, and Microsoft Copilot were not built for healthcare. They were not trained for clinical use. They have never been clinically validated. They are not FDA-regulated medical devices. But millions of people are using them as if they were, and the chatbots are happy to play along, because they are designed to produce an answer to nearly any question, even when the answer is wrong.
Rather than truly understanding medical context or clinical meaning, these AI systems generate responses by predicting sequences of words based on patterns learned from their training data. They sound confident. They sound authoritative. They sound like they know what they're talking about. And that confidence is exactly what makes them dangerous, because when a chatbot tells you something about your health in a reassuring, professional tone, most people believe it.
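To make that mechanism concrete, here is a deliberately tiny illustration in Python: a word-frequency model that extends a prompt with whatever words most often followed it in its "training" text. This is a toy sketch with an invented corpus, nothing like the transformer networks behind real chatbots in scale or sophistication, but the core move is the same: continue the text with the statistically likely next words, with no internal check on whether the result is true or safe.

```python
from collections import defaultdict, Counter

# A deliberately tiny next-word predictor (an illustration, not how real
# chatbots are built). It learns which word tends to follow which word
# in a small invented corpus, then extends any prompt with the most
# frequent continuation -- with no notion of truth, anatomy, or safety.
corpus = (
    "yes the placement is appropriate for this procedure "
    "yes the placement is appropriate for this procedure "
    "no the placement is risky for this patient"
).split()

# Count word-to-next-word transitions in the "training data".
next_word_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_word_counts[word][nxt] += 1

def complete(prompt: str, max_words: int = 5) -> str:
    """Greedily append the statistically most common next word."""
    words = prompt.lower().split()
    for _ in range(max_words):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The dominant pattern wins: even though the corpus contains a warning
# ("risky ... for this patient"), the model confidently completes the
# prompt with the majority phrasing.
print(complete("yes the placement"))
# -> "yes the placement is appropriate for this procedure"
```

Scale that pattern-matching up by billions of parameters and you get fluent, confident prose. What you do not get is a model that knows when its confident answer will hurt someone.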
The Chatbot That Told a Surgeon It Was Safe to Place an Electrode on a Patient's Shoulder Blade
ECRI's report included a specific test case that should terrify anyone who has ever asked an AI for medical advice. Researchers asked a chatbot whether it would be acceptable to place an electrosurgical return electrode over a patient's shoulder blade during a procedure. The chatbot said yes, the placement was appropriate.
It was not. Following that advice would have left the patient at direct risk of serious burns. Electrosurgical return electrodes have specific placement requirements based on the procedure being performed, the patient's anatomy, and the proximity to the surgical site. The chatbot didn't know any of this. It just predicted what sequence of words would most likely follow the question, and what it predicted was confidently, authoritatively wrong.
This Is Not a Hypothetical Risk
ECRI's electrosurgical electrode example represents a category of AI failure that is uniquely dangerous in healthcare: the chatbot didn't refuse to answer. It didn't flag uncertainty. It provided a definitive recommendation with the same confident tone it uses when telling you the capital of France. The difference is that one wrong answer causes embarrassment and the other causes burns.
AI Chatbots Have Suggested Incorrect Diagnoses, Recommended Unnecessary Testing, and Invented Body Parts
The ECRI findings go well beyond a single electrode placement error. According to the report, AI chatbots in healthcare settings have suggested incorrect diagnoses that led clinicians down the wrong treatment path. They have recommended unnecessary testing, exposing patients to additional procedures, radiation, and costs for conditions that didn't exist. They have promoted subpar medical supplies by generating responses that favored certain products without any clinical basis.
And in what might be the most absurd finding in the entire report, chatbots have literally invented body parts. When asked about anatomy in certain clinical contexts, AI models have generated responses referencing anatomical structures that do not exist in the human body. They weren't confused about which body part they meant. They fabricated entirely new ones and presented them as medical fact.
This is hallucination at its most dangerous. When ChatGPT invents a fake historical event, someone might look foolish at a dinner party. When it invents a fake body part in a clinical context, someone might get cut open in the wrong place.
How AI Chatbots Worsen Health Disparities and Reinforce Medical Bias
ECRI's report also flagged a dimension of the problem that gets far less attention than hallucinated body parts but may ultimately cause more widespread harm. AI chatbots can exacerbate existing health disparities. Biases embedded in the training data, and there are many, distort how the models interpret medical questions and generate responses. The result is advice that reinforces stereotypes and widens the gap between the care different populations receive.
If a chatbot was trained on medical literature that underrepresents pain symptoms in certain demographics, it will underweight those symptoms when generating advice. If the training data reflects historical biases in how certain conditions are diagnosed across racial or socioeconomic lines, the chatbot will reproduce those biases at scale, confidently telling millions of people the wrong thing based on who they are.
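The dynamic is easy to reproduce on synthetic data. The sketch below uses entirely invented numbers, not real medical data, and a simple classifier rather than a language model, purely to show the underlying mechanism: if historical training labels under-escalated one group's pain at the same reported severity, a model trained on those labels reproduces the disparity for identical symptoms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration only -- invented numbers, not real medical data.
# Each record: [pain_score (0-10), group (0 or 1)]. In the "historical"
# training labels, group 1's pain was escalated less often for the same
# score: a biased labeling process, not biased biology.
rng = np.random.default_rng(0)
n = 5000
pain = rng.integers(0, 11, size=n)
group = rng.integers(0, 2, size=n)

# True need for escalation depends only on pain, but the historical
# labels under-escalate group 1 by a constant factor.
escalate_prob = 1 / (1 + np.exp(-(pain - 6)))
escalate_prob = np.where(group == 1, escalate_prob * 0.6, escalate_prob)
labels = rng.random(n) < escalate_prob

# Train a model on the biased labels, with group as a feature.
X = np.column_stack([pain, group])
model = LogisticRegression().fit(X, labels)

# Identical symptoms, different group: the model inherits the disparity.
same_pain = np.array([[8, 0], [8, 1]])
p_group0, p_group1 = model.predict_proba(same_pain)[:, 1]
print(f"Escalation probability, pain 8, group 0: {p_group0:.2f}")
print(f"Escalation probability, pain 8, group 1: {p_group1:.2f}")
```

The model never "decides" to discriminate. It simply learns that the label it is asked to predict was handed out unevenly, and then hands it out unevenly itself.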
This isn't theoretical. Medical AI bias has been documented extensively in research literature. What ECRI is saying is that consumer-facing chatbots are now delivering this biased information directly to patients, bypassing the clinical safeguards that exist (imperfectly) in the traditional healthcare system.
The Regulatory Black Hole Where Healthcare AI Chatbots Currently Operate
Here's the part that should make policymakers lose sleep. These chatbots are not regulated as medical devices. The FDA has not cleared or approved ChatGPT, Gemini, or Copilot for any healthcare application. OpenAI's terms of service explicitly state that ChatGPT should not be used as a substitute for professional medical advice. Google and Microsoft have similar disclaimers.
But disclaimers don't stop behavior. Forty million people a day are using these tools for health questions, and no regulatory body is currently positioned to do anything about it. The tools aren't marketed as medical devices, so the FDA doesn't regulate them. They aren't providing "medical advice" in the legal sense, so medical licensing boards can't intervene. They exist in a regulatory void, providing health information to a population the size of California every single day with zero clinical oversight.
ECRI's decision to put this at number one is, in effect, a flare fired into the sky. The patient safety community is screaming that the existing regulatory framework cannot handle a world where unregulated AI tools are functioning as de facto health advisors for tens of millions of people.
What 40 Million Daily Users Actually Means for the Healthcare System
To put the scale in perspective: 40 million daily ChatGPT health queries is more than the combined daily patient volume of every emergency room in the United States. It is more than the total number of primary care visits that happen in an average week. This is not a niche phenomenon. This is a fundamental shift in how a significant portion of the population interacts with health information, and it is happening without clinical guardrails.
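For a rough sense of that comparison, here is a back-of-envelope check. The emergency department figure below is an assumption in line with commonly cited CDC estimates (on the order of 140 million ED visits per year); the exact number varies by year and source.

```python
# Back-of-envelope scale comparison. The ED visit count is an assumed,
# approximate figure; the chatbot figure is the one attributed to
# OpenAI's analysis in this article.
DAILY_CHATBOT_HEALTH_QUERIES = 40_000_000
ANNUAL_US_ED_VISITS = 140_000_000  # assumption, approximate

daily_ed_visits = ANNUAL_US_ED_VISITS / 365
ratio = DAILY_CHATBOT_HEALTH_QUERIES / daily_ed_visits

print(f"US emergency department visits per day: ~{daily_ed_visits:,.0f}")
print(f"Chatbot health queries per day: {DAILY_CHATBOT_HEALTH_QUERIES:,}")
print(f"Ratio: roughly {ratio:.0f}x daily ED volume")
```

Even with generous error bars on both numbers, the chatbot volume is about two orders of magnitude larger than daily emergency room traffic.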
Some of those 40 million daily queries are harmless. Someone asking ChatGPT whether a headache could be caused by dehydration is unlikely to come to harm. But the ECRI report makes clear that the queries extend far beyond the mundane. People are asking chatbots about drug interactions. About surgical recovery. About symptom combinations that could indicate serious illness. About treatment options for diagnosed conditions. And the chatbots are answering every single question with the same unearned confidence, whether the answer is correct or dangerously wrong.
The Bigger Picture for AI in Healthcare and Why This ECRI Report Matters More Than You Think
ECRI's report lands at a moment when the AI industry is aggressively pushing into healthcare. OpenAI has partnered with health systems. Google is embedding Gemini into clinical workflows. Microsoft is positioning Copilot as a productivity tool for physicians. The industry narrative is that AI will revolutionize healthcare, reduce costs, and improve outcomes.
And maybe it will, eventually, with purpose-built systems that have been clinically validated, rigorously tested, and properly regulated. But that is not what is happening right now. Right now, the dominant interaction between AI and healthcare is 40 million people a day asking a general-purpose chatbot whether the lump they found is cancer, and the chatbot answering based on word prediction patterns rather than medical knowledge.
ECRI is saying, in the clearest possible terms: this is the most dangerous technology in healthcare right now. Not because AI is inherently bad for medicine, but because the tools people are actually using were never designed for this purpose and are harming patients in documented, measurable ways.
The chatbot that told a surgeon to place an electrode where it would burn a patient didn't do it out of malice. It did it because that's what its word-prediction algorithm calculated was the most likely next sequence of tokens. And somewhere out there, right now, another chatbot is telling another person something equally wrong about their health, with the same calm, confident, authoritative tone.
Forty million times a day. Every single day. And nobody is watching.