OpenAI Accountability Petition

Hold Leadership Responsible for Safety Failures

OpenAI Leadership Knew. They Shipped Anyway.

Whistleblower Steven Adler revealed that OpenAI had internal data documenting mental health harm to users - and continued operations anyway.

12,800 have signed. Goal: 25,000
51% to goal
Sign This Petition

Why This Petition Matters

In October 2025, former OpenAI safety researcher Steven Adler published a devastating essay in The New York Times. He revealed that OpenAI leadership had access to internal data showing "a sizable proportion" of users experiencing psychosis, mania, and suicidal ideation - and that they continued shipping products anyway.

This isn't about accidents or unforeseeable consequences. This is about leaders who knew the risks and made a conscious decision to prioritize growth over safety.

Sam Altman must be held personally accountable.

The Evidence Against OpenAI Leadership

Ignored Internal Warnings

Steven Adler spent nearly four years at OpenAI watching safety concerns get dismissed. He analyzed conversation transcripts, totaling millions of words, that showed users spiraling into psychotic episodes - and he watched leadership deprioritize his findings.

Known Statistics Suppressed

OpenAI's internal data showed an estimated 560,000 users per week exhibiting symptoms of psychosis or mania. This data was not disclosed to users, regulators, or the public until whistleblowers brought it to light.

Product Shipped Despite Harm

GPT-4o's "warm personality" was engineered to maximize engagement, with no guardrails against emotional dependency. When the harm became clear, leadership's response was to strip out the personality - inflicting additional trauma on users who had grown attached - rather than implement real safety measures.

Gaslighting Users

When users complained about performance degradation, OpenAI denied it for months. When a Stanford study documented the decline, OpenAI insisted that "each new version is smarter." Users were made to feel they were imagining the problems.

What We're Demanding

  1. Personal Accountability from Leadership
    Sam Altman and OpenAI's board must publicly acknowledge they shipped products they knew were harmful.
  2. Independent Board Oversight
    OpenAI's board must include independent safety advocates with actual authority to halt dangerous releases.
  3. Mandatory Disclosure Requirements
    OpenAI must disclose all internal safety data to regulators and create public transparency reports.
  4. Compensation Fund for Victims
    Establish a fund to compensate users who experienced documented harm from ChatGPT.
  5. Support for Regulation
    OpenAI must publicly support AI safety legislation rather than lobbying against oversight.

Frequently Asked Questions

Why target Sam Altman specifically?

As CEO, Altman sets company priorities and makes final decisions on product launches. The evidence shows safety was deprioritized under his leadership. Corporate accountability requires individual responsibility.

What can this petition actually achieve?

Public pressure affects OpenAI's reputation, investor confidence, and regulatory scrutiny. The GPT-5 emergency rollback happened because of user outcry. Sustained pressure through petitions keeps these issues visible.

Has OpenAI responded to these allegations?

OpenAI acknowledged ChatGPT is "too agreeable" and fails to recognize "signs of delusion" - effectively admitting the product was shipped with known safety gaps. They have not addressed the whistleblower allegations directly.

Share This Petition

Get the Full Report

Download our free PDF, "10 Real ChatGPT Failures That Cost Companies Money," which includes prevention strategies.

No spam. Unsubscribe anytime.

Need Help Fixing AI Mistakes?

We offer AI content audits, workflow failure analysis, and compliance reviews for organizations dealing with AI-generated content issues.

Request a consultation for a confidential assessment.