Marcel Bucher's ChatGPT Data Loss: Two Years Wiped at the University of Cologne

Marcel Bucher is a professor of plant molecular physiology at the University of Cologne. In August 2025, he toggled a single setting in ChatGPT — the "data consent" option — to test whether the model would still function without sharing his data with OpenAI. That single click deleted his entire workspace: two years of grant applications, lecture material, manuscript drafts, and exam content. There was no warning. There was no recovery option. OpenAI's "Privacy by Design" architecture treats disabling data sharing as an instruction to wipe the user's history, and Bucher's history went with it. He published the account in Nature on January 22, 2026.

What Was Lost

Bucher used ChatGPT daily as a workflow companion. According to his own account in Nature and the follow-up coverage in PC Gamer, Inc., and Windows Central, the deleted content included:

Grant applications: Two years of in-progress and submitted research-funding documents.
Teaching material: Lecture preparation, course notes, slide drafts.
Manuscript drafts: Publication drafts and revision histories for peer-reviewed papers.
Exam content: Composed exam questions and grading rubrics.
Project folders: Every project folder in his account — emptied, not selectively deleted.

Bucher did retain partial copies of some conversations and materials that he had exported or pasted elsewhere over the two-year period. The structured, in-platform record — the version that linked the documents into a coherent research workflow — was gone.

The Single Click

The action that triggered the deletion was straightforward: Bucher disabled the "data consent" toggle in ChatGPT's settings. His stated intent, in the Nature account, was experimental. He wanted to know whether the model would still be useful to him without OpenAI continuing to use his content for training. He did not expect, and was not warned at the moment of clicking, that disabling that setting would permanently delete every chat and project folder in his account.

The mechanism is documented in OpenAI's privacy architecture: when a user opts out of data sharing, ChatGPT does not retain the conversation history. The system is designed not to keep what the user has not consented to share. This is a defensible privacy posture in isolation. The problem, as Bucher's case illustrates, is that the same architecture treats two years of professional work product as falling under the privacy-deletion rule, with no gradient between "do not train on this" and "do not retain this for me."
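The missing gradient can be stated more concretely. The sketch below is purely illustrative and uses hypothetical names; it is not OpenAI's code or configuration. It models training consent and user-facing retention as two separate flags, which is the distinction the current toggle collapses.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class WorkspacePolicy:
    """Hypothetical model of two consent dimensions a workspace could track."""
    allow_training: bool    # "the provider may use my content to improve its models"
    retain_for_user: bool   # "keep my chats and project folders available to me"

def conflated_opt_out(policy: WorkspacePolicy) -> WorkspacePolicy:
    # The behaviour Bucher describes: opting out of training also drops retention,
    # so the user's own workspace history is wiped.
    return WorkspacePolicy(allow_training=False, retain_for_user=False)

def separated_opt_out(policy: WorkspacePolicy) -> WorkspacePolicy:
    # The alternative: opting out of training leaves the user's workspace intact.
    return replace(policy, allow_training=False)
```

Under the separated design, the privacy question ("may you train on my content?") and the persistence question ("will you keep my work for me?") are answered independently.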

Bucher's Own Words

"If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use." — Marcel Bucher, Nature, January 22, 2026

OpenAI's Response

OpenAI's reply, reported in the follow-up coverage, made two points. First, the deleted chats "cannot be recovered" — consistent with the company's Privacy by Design framing. Second, OpenAI disputed Bucher's claim that there had been no warning, saying that the platform "provides a confirmation prompt before a user permanently deletes a chat." Bucher's account in Nature, by contrast, is that the confirmation OpenAI describes covers the individual delete-chat action; the data-consent toggle presented no warning of equivalent specificity about the cascading deletion of his project folders.

Whether a confirmation prompt was technically present at the toggle level, and whether it adequately communicated the consequence (permanent loss of all project history), is the substantive disagreement between Bucher and OpenAI. The Nature article and the subsequent reporting suggest the prompt, where it appeared, was not framed in a way an academic user would interpret as "this will delete two years of your work."

The Public Reaction

The response to Bucher's column was, as Nature itself noted in its coverage, polarized. One camp argued he was a victim of a poorly designed cloud product whose deletion semantics were not adequately disclosed at the moment of the destructive click. The other camp argued he had relied on a cloud-based generative tool for two years of professional research output without maintaining local backups, and that the loss was the foreseeable consequence of that choice. Both positions have substantive support.

The most useful synthesis, which has emerged in academic-tech-policy discussion since January 2026, is that both can be true: Bucher could be at fault for the lack of local backups and ChatGPT could be at fault for an architecture that treats a privacy-toggle change as a destructive workspace-wipe without a clear, plain-language confirmation. The Nature column, in this reading, is a case study in why generative-AI products marketed for professional workflows need fundamentally different deletion semantics than consumer chat products.

Timeline

~September 2023: Bucher begins using ChatGPT as a daily research-workflow companion.
2023–2025: Two years of grant applications, lecture material, manuscript drafts, and exams accumulate inside his ChatGPT workspace.
August 2025: Bucher disables the "data consent" toggle. Project folders are emptied, chat history disappears, and there is no undo.
August–December 2025: Bucher contacts OpenAI support; the company confirms the deletion is permanent under its Privacy by Design policy.
January 22, 2026: Bucher publishes "When two years of academic work vanished with a single click" in Nature.
January–April 2026: Follow-up coverage appears in Windows Central, PC Gamer, Inc., Gizmodo, Notebookcheck, Futurism, and Slashdot.
April 2026: The case is now the canonical reference in academic-IT discussions of LLM data persistence and is cited in university procurement reviews of generative-AI tooling.

Why This Matters Beyond One Professor

Bucher's case has become the reference point in academic-IT and university-procurement discussions of generative-AI tooling for two reasons. First, it is documented at a level of specificity (Nature publication, named institution, named user, OpenAI on-the-record response) that internal university committees can cite without scope-of-evidence concerns. Second, it cleanly isolates a single failure mode — cloud-product deletion semantics that conflate "stop training on me" with "delete everything I have ever stored" — that other users had been encountering in less-documented form for months.

Several universities have, since January 2026, updated their AI-tooling guidance to recommend that ChatGPT not serve as the canonical record for academic work product. The recommended pattern, where ChatGPT is used at all, is treat-as-ephemeral: copy substantive output to local storage at the time of generation. That is the de facto post-Bucher standard.

The pattern Bucher's case illustrates is this: cloud-native AI tools, when they double as workflow systems, accumulate professional output that the user does not realize is being held in a structurally fragile location. The deletion semantics, the export options, the backup paths, and the recovery procedures are typically treated as secondary product surface. A privacy toggle that doubles as a workspace wipe is the load-bearing example. There are others.

Related Documentation on This Site

For the broader pattern this case sits inside, see the ChatGPT User Complaints Library 2026 testimonial corpus, the Performance Decline tracker, the Why ChatGPT Is Getting Worse hub, and the 2026 Lawsuits Index. The Bucher case is also referenced in our coverage of OpenAI's broader 2026 privacy and product-architecture decisions.

If You Use ChatGPT for Professional Work

The post-Bucher consensus, in academic IT and corporate procurement, is straightforward: do not let a generative-AI chat product be the canonical store for any work product you cannot afford to lose. Export substantive output to local storage at time of generation. Treat the ChatGPT workspace as ephemeral. If you need a model that does not bury workspace persistence under a privacy toggle, see the ChatGPT Alternatives 2026 comparison.
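One low-friction way to follow that advice is to save output locally at the moment you copy it out of the chat. The helper below is a minimal sketch of that habit, not an official ChatGPT export mechanism; the backup directory and function name are hypothetical and should be adapted to institutional storage policies.

```python
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical local backup location; point this at institutional storage if required.
BACKUP_DIR = Path.home() / "chatgpt-local-backup"

def save_locally(title: str, prompt: str, response: str) -> Path:
    """Append a prompt/response pair to a dated plain-text file so the local
    copy, not the ChatGPT workspace, becomes the canonical record."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    out = BACKUP_DIR / f"{day}.txt"
    entry = (
        f"=== {title} ===\n"
        f"Prompt:\n{prompt}\n\n"
        f"Response:\n{response}\n\n"
    )
    with out.open("a", encoding="utf-8") as f:
        f.write(entry)
    return out
```

The point is the habit rather than the tooling: the local copy exists before the workspace can be wiped.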

Alternatives to ChatGPT in 2026

Readers who arrive on this page from search are typically asking the same question Bucher had to ask himself: what should I be using instead? The 2026 alternatives are real; several have different deletion semantics by default, and several have invested in workspace-persistence options that the Bucher pattern would not have triggered. Anthropic's Claude line, Google's Gemini line, and the open-weights tier (Llama, Mistral, DeepSeek, Qwen) each have a different posture, pricing model, and policy footprint. The full comparison, including the verified hallucination-rate and pricing data published since the GPT-5.5 launch on April 23, 2026, is on the ChatGPT Alternatives 2026 page.