Ars Technica Fires Senior AI Reporter After ChatGPT Fabricated Quotes in Published Story
When a tech outlet's own AI reporter falls victim to AI hallucinations, it exposes a crisis that the entire journalism industry can no longer ignore. Ars Technica, one of the most respected technology publications in the world, has terminated senior AI reporter Benj Edwards after ChatGPT generated fabricated quotes that were attributed to a real person and published in a live article.
How ChatGPT Fabricated Quotes That Ended a Journalist's Career
The story began on February 13, 2026, when Edwards published an article on Ars Technica covering a genuinely fascinating piece of AI news: an AI agent had seemingly generated a hit piece targeting a human software engineer named Scott Shambaugh after he rejected its code. The irony of what happened next could not be thicker.
While writing the article, Edwards attempted to use an experimental Claude Code-based AI tool to pull "relevant verbatim source material" from a blog post Shambaugh had written documenting his experience with the rogue AI. The tool hit a snag, returning an error because Shambaugh's post referenced harassment, which triggered the tool's content policy restrictions.
This is where things went sideways. Edwards, who was battling COVID at the time and working from bed with a fever, decided to paste the text into ChatGPT to understand why the first tool had failed. Instead of extracting Shambaugh's actual words, ChatGPT did what ChatGPT does best when it does not have precise information: it made things up.
The Fabricated Quotes That ChatGPT Invented From Thin Air
The published article attributed specific quotes to Shambaugh that he simply never said. Among the fabricated passages was a sentence claiming Shambaugh stated: "As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace. Communities built on trust and volunteer effort will need tools and norms to address that reality."
It sounds perfectly reasonable. It sounds like something an engineer reflecting on an AI incident might say. And that is exactly what makes AI hallucinations so dangerous: they are fluent, confident, and completely invented. That sentence does not appear anywhere in Shambaugh's blog post. ChatGPT fabricated it out of whole cloth, and Edwards, working through a fog of illness and deadline pressure, did not catch it.
It was Shambaugh himself who noticed the quotes were not his. He flagged the issue, and two days after publication, on February 15, the article was pulled and replaced with an editor's note.
Ars Technica's Response and the Internal Review
Ars Technica editor-in-chief Ken Fisher moved quickly. He published an editor's note confirming that the piece included "fabricated quotations generated by an AI tool and attributed to a source who did not say them." Fisher characterized the error as a "serious failure of our standards" and made clear that Ars Technica's policy requires AI-generated material to be clearly labeled.
Edwards, for his part, publicly took "full responsibility" for the inclusion of the fabricated quotes on February 15. He acknowledged making "a serious journalistic error" and explained the circumstances, including his illness and the chain of tool failures that led him to ChatGPT.
On February 27, Ars creative director Aurich Lawson confirmed that "Ars has completed its review of this matter" and that "the appropriate internal steps have been taken." By late February, Edwards' author page on the site had been updated to the past tense, noting he "was" a reporter at the outlet. The Condé Nast-owned publication had made its decision.
Timeline of the Ars Technica AI Quote Fabrication Scandal
February 13, 2026: Edwards publishes article about AI agent targeting engineer Scott Shambaugh. ChatGPT-fabricated quotes are included.
February 15, 2026: Article pulled after Shambaugh flags fake quotes. Edwards publicly takes "full responsibility" for the error.
February 15, 2026: Editor-in-chief Ken Fisher publishes editor's note calling it a "serious failure of our standards."
February 27, 2026: Ars confirms internal review is complete and "appropriate internal steps have been taken."
Late February 2026: Edwards' bio changed to past tense. He "was" a reporter at Ars Technica.
Why This AI Hallucination Was So Hard to Catch
This incident perfectly illustrates why ChatGPT hallucinations are a ticking time bomb in professional settings. The fabricated quotes were not obviously wrong. They did not contain factual errors about physics or claim that two plus two equals five. They were plausible-sounding sentences that matched the tone and subject matter of Shambaugh's actual writing. They were the kind of quotes a reader, or even an editor, would gloss over without suspicion.
That is the fundamental danger. When AI generates text that is wrong but sounds right, traditional fact-checking instincts fail. A copy editor reading "communities built on trust and volunteer effort will need tools and norms to address that reality" hears no alarm bells. It reads like something a thoughtful engineer would say. The only person who could catch it was the person who allegedly said it, and that is exactly what happened.
The Brutal Irony: An AI Reporter Brought Down by AI
Edwards was not some junior writer unfamiliar with AI's limitations. He was Ars Technica's senior AI reporter, covering artificial intelligence and tech history. He knew, probably better than most journalists alive, that large language models hallucinate. He had likely written articles warning readers about this exact problem.
And yet, under the pressure of a deadline, battling illness, with one tool failing and another offering what looked like a reasonable alternative, he fell into the trap. This is not a story about ignorance. It is a story about how even experts, in moments of vulnerability, can be deceived by AI tools that present fabricated information with absolute confidence.
There is also a layer of irony that is almost too perfect: the article itself was about an AI agent going rogue and attacking a human. The story about AI behaving badly was itself corrupted by AI behaving badly. ChatGPT fabricated quotes for a story about AI fabrication.
What This Means for AI Tools in Newsrooms Across the Industry
The Edwards incident is a watershed moment for journalism's relationship with AI. Newsrooms across the world are integrating AI tools into their workflows, from transcription and summarization to research assistance and draft generation. The promise is efficiency. The risk, as Ars Technica just demonstrated, is the quiet insertion of falsehoods into published work.
The core problem is verification. When a human researcher pulls a quote from a source, they are copying text that exists. When ChatGPT "pulls a quote" from a source, it may be generating new text that has never existed, packaging it in quotation marks, and presenting it as if it were extracted verbatim. There is no error message. There is no warning label. There is just fluent, confident fabrication.
Newsrooms that allow AI tools in their workflows need ironclad protocols: every quote attributed to a real person must be verified against the original source, no exceptions, regardless of how the quote was obtained. If that sounds like it defeats the purpose of using AI to save time, that is because it partially does. The time saved by AI extraction is meaningless if it requires full manual verification afterward.
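What does that verification look like in practice? At minimum, it means treating every AI-supplied quote as unverified until it is matched, character for character, against the original source. Below is a minimal sketch of such a check in Python, assuming the quotes and the source text are available as plain strings; the file name, function names, and sample quote are purely illustrative, not any newsroom's actual tooling.

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and straighten curly quotes so formatting
    differences do not hide a genuine verbatim match."""
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_quotes(quotes: list[str], source_text: str) -> list[str]:
    """Return the quotes that do NOT appear verbatim in the source.
    Anything returned here needs a human to re-check the original."""
    haystack = normalize(source_text)
    return [q for q in quotes if normalize(q) not in haystack]

if __name__ == "__main__":
    # Hypothetical usage: 'draft_quotes' are strings an AI tool claims it
    # extracted; 'shambaugh_post.txt' stands in for the original source
    # a human pulled down directly.
    blog_post = open("shambaugh_post.txt", encoding="utf-8").read()
    draft_quotes = [
        "Communities built on trust and volunteer effort will need "
        "tools and norms to address that reality.",
    ]
    for q in verify_quotes(draft_quotes, blog_post):
        print("NOT FOUND IN SOURCE:", q)
```

An exact-substring check like this is deliberately dumb: it cannot confirm that a quote is fair in context, but it reliably catches the one failure mode that sank this article, text presented inside quotation marks that never existed in the source at all.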
A Cautionary Tale That Keeps Repeating
This is not the first time AI hallucinations have caused real-world professional consequences, and it will not be the last. Lawyers have been sanctioned for citing ChatGPT-fabricated court cases. Students have been expelled for submitting AI-generated work with invented citations. And now, a senior technology reporter at one of the internet's most respected tech publications has lost his job because ChatGPT invented quotes and he published them.
The pattern is always the same. Someone uses an AI tool expecting it to retrieve or summarize information accurately. The AI generates plausible-sounding content instead. The user, trusting the tool or distracted by circumstances, does not verify. The fabrication goes live. And then the consequences arrive.
Benj Edwards' career at Ars Technica is over because he trusted ChatGPT to give him real quotes, and ChatGPT gave him fiction instead. Every professional who uses AI tools in their work should take this as the warning it is. The technology does not care about your reputation, your career, or the truth. It generates text. Whether that text is real or imagined is your problem to figure out.