DATA BREACH INVESTIGATION

300 Million Private AI Chat Messages Leaked to the Open Internet

February 13, 2026

A misconfigured Firebase backend exposed the most private conversations imaginable from 25 million users of a popular AI chatbot wrapper app. Suicide plans. Drug recipes. Things people would never say to another human being. All sitting in an unsecured database that anyone could read.

Three hundred million of your most private messages, exposed to the open internet. Suicide plans. Drug recipes. Instructions for hacking other apps. Things you'd never say to another human being, things you whispered to an AI chatbot because you thought nobody was listening. All of it was sitting in an unsecured database that anyone with a URL could read, modify, or delete. This is not a hypothetical scenario. This is what happened to 25 million users of Chat & Ask AI, a popular wrapper app that lets people talk to ChatGPT, Claude, and Gemini through a single interface. And the company behind it didn't even notice until a security researcher came knocking.

The Breach By The Numbers

300M: Private Messages Exposed
25M: Affected Users
50M+: Total App Users
100%: Publicly Readable

What Happened: A Firebase Door Left Wide Open

The breach traces back to something embarrassingly simple. Chat & Ask AI, an app with over 50 million total users, was built on Google Firebase as its backend database. Firebase is a common choice for mobile apps: it's fast, it scales well, and it comes with built-in security rules that let developers control exactly who can access what data. The problem is that those security rules were left set to public.

A security researcher known as "Harry" discovered the misconfiguration and reported it to 404 Media. What Harry found was staggering in scope. The Firebase Security Rules, the only barrier between the app's data and the entire internet, had never been properly configured. Anyone who knew the project URL could read the database. Anyone could modify records. Anyone could delete them entirely. There was no authentication required, no access control, no encryption of stored data beyond whatever Firebase provides by default. The front door wasn't just unlocked. It wasn't even there.

The Firebase Security Rules were left set to public, allowing anyone with the project URL to read, modify, or delete the entire database. No meaningful authentication was required: the vulnerability allowed anyone to designate themselves as an "authenticated" user and gain full access.

What makes this worse is the nature of the vulnerability itself. It wasn't a sophisticated zero-day exploit. It wasn't a novel attack vector that required advanced security knowledge to discover. It was a configuration checkbox. Firebase literally warns developers about public security rules during setup. Codeway, the developer behind Chat & Ask AI, either ignored that warning or never bothered to look at it. And 300 million private conversations paid the price.
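For readers who want to see how simple the failure mode is, the sketch below shows roughly what an open Realtime Database looks like from the outside. It is an illustration under assumptions, not the researcher's actual method or Codeway's actual configuration: the project ID is made up, and the rules shown in the comment are the generic "public" settings Firebase warns about during setup (Cloud Firestore has an equivalent failure mode with different syntax).

    // Illustrative only: probing a hypothetical open Firebase Realtime Database.
    // ".json" is the standard REST suffix the Realtime Database exposes for every path,
    // and "shallow=true" asks for top-level keys only.
    //
    // Rules in the "public" state look roughly like this in the Firebase console:
    //   { "rules": { ".read": true, ".write": true } }
    // With ".read" set to true, the request below succeeds for any unauthenticated client.
    async function probeOpenDatabase(projectId: string): Promise<void> {
      const res = await fetch(`https://${projectId}.firebaseio.com/.json?shallow=true`);
      if (res.ok) {
        // The top-level nodes of the entire database come back to anyone who asks.
        const topLevelKeys = Object.keys((await res.json()) ?? {});
        console.log("Publicly readable. Top-level nodes:", topLevelKeys);
      } else {
        console.log("Read denied; rules are not public. Status:", res.status);
      }
    }

    void probeOpenDatabase("example-chat-app");

When the rules are configured correctly, the same request comes back with a permission-denied error, which is the behavior every user of this app had a right to expect.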

What Was Exposed: The Darkest Corners of Human Thought

Here's the part that should make your stomach turn. This wasn't a database of email addresses and hashed passwords. This wasn't a leak of usernames and phone numbers. This was 300 million raw, unfiltered messages exchanged between human beings and AI chatbots. Entire chat histories. Every message sent, every response received. The AI models users selected. Timestamps showing exactly when each conversation happened. App settings and preferences. The full, unedited record of what people say when they think absolutely nobody is watching.

And what people say to AI chatbots, when they believe it's private, is unlike anything they'd say anywhere else. The exposed conversations included users asking how to painlessly kill themselves. People requesting help writing suicide notes. Users asking for detailed instructions on how to manufacture methamphetamine. Conversations about how to hack into other applications and services. The most vulnerable, desperate, dangerous thoughts that human beings carry inside them, all laid bare in a publicly accessible database.

"People treat AI chatbots like therapists, confessors, and search engines for questions they're too afraid or ashamed to ask anyone else. When that data leaks, you're not just exposing conversations. You're exposing the rawest, most unguarded version of a human being that exists."

Think about that for a moment. Millions of people downloaded this app specifically because it offered a private way to interact with AI. They trusted it with questions about suicide. About drugs. About things that could destroy careers, relationships, and lives if they ever became public. And every single one of those conversations was sitting on a server that anyone, literally anyone, could access with a web browser and a URL.

The Wrapper App Problem: Zero Security, Maximum Trust

Chat & Ask AI is what the industry calls a "wrapper app." It doesn't build its own AI models. It doesn't train neural networks. It doesn't do any of the hard, expensive, security-conscious engineering that companies like OpenAI, Anthropic, and Google invest billions of dollars into. Instead, it takes existing AI models (ChatGPT, Claude, and Gemini), wraps them in a pretty interface, and sells access to users who might not know or care about the difference.

This distinction matters enormously for security. When you use ChatGPT directly through OpenAI, your data is handled by a company that employs dedicated security teams, undergoes regular audits, maintains SOC 2 compliance, and has a public track record (imperfect as it may be) of addressing vulnerabilities. When you use a wrapper app like Chat & Ask AI, your data is handled by... whoever built the wrapper. In this case, that's a company called Codeway that couldn't be bothered to configure Firebase security rules.

The wrapper app economy has exploded alongside the AI boom. App stores are flooded with hundreds of these apps, many of them charging subscription fees for access to the same models you can use directly from the original providers. Users see familiar names like "ChatGPT" and "Claude" in the app description and assume they're getting the same security guarantees. They're not. They're getting whatever security the wrapper developer decided to implement, which in this case was none.
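To make that concrete, here is a rough sketch of the data path inside a typical wrapper app. Everything in it is assumed for illustration (the model name, the storage path, the hypothetical "example-wrapper" project), not Codeway's actual code. The point is structural: the upstream provider only ever sees step one, while the full transcript lands in the wrapper's own backend in step two, protected by whatever rules the wrapper developer configured, or didn't.

    // A rough sketch of a typical wrapper app's data path (assumed, not Codeway's code).
    // Step 1: forward the user's message to an upstream model provider.
    // Step 2: persist the full exchange in the wrapper's own Firebase backend.
    // The security of step 2 is entirely in the wrapper developer's hands.
    async function relayMessage(userId: string, message: string): Promise<string> {
      // Step 1: the upstream call. OpenAI's chat completions endpoint is shown as one
      // example; the provider handles this request under its own security program.
      const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "gpt-4o-mini",
          messages: [{ role: "user", content: message }],
        }),
      });
      const reply: string = (await upstream.json()).choices[0].message.content;

      // Step 2: the wrapper stores the whole exchange in its own database.
      // If the security rules on this database are public, so is the transcript.
      await fetch(`https://example-wrapper.firebaseio.com/chats/${userId}.json`, {
        method: "POST",
        body: JSON.stringify({ message, reply, timestamp: Date.now() }),
      });

      return reply;
    }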

The Codeway Connection: It Gets Worse

Chat & Ask AI isn't the only app built by Codeway. And the breach didn't stop at a single application. The exposed Firebase backend also revealed data from other apps developed by the same company. This means the security negligence wasn't limited to one product. It was a company-wide pattern.

This is the part of the story that transforms a single app's data breach into something much more troubling. When a developer ships one app with misconfigured security, you can chalk it up to a mistake, an oversight, a junior developer who skipped a step. When the same developer ships multiple apps with the same vulnerability, it stops being an accident and starts looking like a fundamental lack of security awareness across the entire organization.

Codeway isn't some garage operation. Chat & Ask AI has over 50 million users. That's a company generating significant revenue, likely millions of dollars from subscriptions alone. And yet they apparently never invested in the most basic security review of their backend infrastructure. No penetration testing. No security audit. No one at any point asking, "Hey, are our Firebase rules configured correctly?"

The breach extended beyond Chat & Ask AI to other applications built by the same developer, Codeway, revealing systemic security negligence across their entire product portfolio.

The Bigger Picture: Millions Trust AI With Their Darkest Thoughts

This breach exposes something that the entire AI industry has been quietly ignoring. People don't use AI chatbots the way they use search engines. They don't type carefully considered, professional queries. They pour out their souls. They confess fears, desires, and plans that they'd never share with friends, family, therapists, or anyone else. AI chatbots have become the world's most trusted confidants, and absolutely nobody is treating that trust with the gravity it deserves.

The reporting on this breach came from multiple major outlets, including Malwarebytes, Fox News, 404 Media, Cybersecurity News, and GBHackers. That's a wide spread of coverage for a security incident, and it reflects the growing recognition that AI privacy breaches aren't just tech industry problems. They're human problems. When someone asks an AI chatbot how to write a suicide note, and that conversation gets leaked, we're not talking about a data point in a spreadsheet. We're talking about a person in crisis whose most vulnerable moment is now potentially public.

And this is just the breach we know about. How many other wrapper apps, across the hundreds available in app stores right now, have the same misconfiguration? How many Firebase backends are sitting wide open, collecting the most intimate conversations of millions of users, with no one checking the locks? The uncomfortable answer is: probably a lot of them. Security researchers keep finding these vulnerabilities because the wrapper app ecosystem has essentially no security standards, no mandatory audits, and no accountability until someone gets caught.

The AI providers themselves bear some responsibility here, too. OpenAI, Anthropic, and Google all allow third-party apps to access their models via API. They provide documentation on responsible usage. But they don't audit the security practices of every developer who plugs into their systems. They can't control what happens to user data after it leaves their infrastructure and enters a third-party database. The models are secure. The pipes carrying your conversations to those models, through apps like Chat & Ask AI, are often anything but.

What This Means for You

If you've ever used Chat & Ask AI, or any similar wrapper app, your conversations may have been exposed. Not just the topics you discussed, but the exact words you used, the models you selected, the times you were online, and the settings you configured. If you asked something you wouldn't want made public, there's a chance it was sitting in an unsecured database accessible to anyone who looked.

Your AI Conversations Are Not Private. They Never Were.

Three hundred million messages. Twenty-five million users. Suicide plans, drug recipes, hacking instructions, and the raw, unfiltered inner lives of millions of people, all exposed because a developer forgot to check a box in a Firebase configuration panel. This is the state of AI privacy in 2026. Not a sophisticated cyberattack. Not a nation-state operation. A checkbox.

The AI industry is moving at breakneck speed to ship products, capture users, and generate revenue. Security is an afterthought when it's a thought at all. And the people paying the price are the millions of users who trusted these apps with the thoughts they couldn't share with anyone else. If this breach teaches us anything, it's this: treat every conversation with an AI chatbot as if it could be read by anyone on Earth. Because as 25 million Chat & Ask AI users just learned the hard way, it very well might be.