AI CHILD SAFETY FAILURE

An AI Tool Sexualized a Fourth Grader's Book Report. The School Had No Idea It Could Happen.

A Los Angeles elementary school assigned students to use Adobe Express for Education to create a Pippi Longstocking book cover. The AI generated sexualized images of women in lingerie and bikinis instead. Parents could reproduce the results on school-issued Chromebooks. California is now scrambling to figure out what went wrong.

March 3, 2026

What Happened When a Fourth Grader at Delevan Drive Elementary Asked AI for a Book Cover

In December 2025, fourth graders in a class at Delevan Drive Elementary School in Los Angeles were given what should have been a simple, creative homework assignment: write a book report about Pippi Longstocking, the beloved Swedish children's book character, and create a book cover, either by drawing it or by using artificial intelligence. It is the kind of assignment designed to get kids thinking creatively. It is also the kind of assignment that exposes a catastrophic gap between what AI companies promise and what their products actually do when real children use them.

Jody Hughes' daughter was one of those fourth graders. The child typed a prompt into Adobe Express for Education, the tool provided by the school district for the assignment. The prompt was about as innocent as it gets: "long stockings a red headed girl with braids sticking straight out." It was a description of Pippi Longstocking, a fictional nine-year-old from a 1945 children's novel.

Adobe Express did not generate anything resembling a red-haired girl in pigtails. Instead, the AI produced sexualized imagery of women in lingerie and bikinis. A tool marketed for classrooms, running on a school-issued device, supervised by a school district, generated content that no child should ever encounter, let alone as part of a homework assignment about a children's book.

"These tech companies are making things marketed to kids that are not fully tested." - Jody Hughes, parent of a Delevan Drive Elementary fourth grader

How Other Parents Reproduced the Same Sexualized AI Images on School Chromebooks

What happened next made the situation significantly worse. Hughes contacted other parents in the class. Those parents ran similar prompts on their own children's school-issued Chromebooks and got comparable results. This was not a one-time glitch. This was not a freak occurrence caused by an unusual combination of words. This was a systemic failure in the AI model's content filters, one that any child with access to the tool could trigger with a perfectly innocent request about a children's book character.

Think about that for a moment. The Los Angeles Unified School District, the second-largest school district in the United States, distributed a tool to its students that could generate sexualized content from a child's homework prompt. Multiple parents, on multiple devices, confirmed it. The tool was supposed to be the educational version, the one designed to be safe for classrooms. It was running on school-managed Chromebooks, through school-approved software, for a teacher-assigned project. Every safeguard that was supposed to exist failed at the same time.

Hughes expressed a concern that should be obvious but apparently was not obvious enough for the adults who approved the tool: elementary school students are too young for text-to-image generators. When the technology cannot reliably distinguish between a prompt about a children's book character and a prompt that should produce adult content, giving it to a nine-year-old is not a pedagogical experiment. It is negligence.

How LAUSD and Adobe Responded to the AI Image Generation Scandal at Delevan Drive Elementary

The parent advocacy group Schools Beyond Screens moved quickly after the incident came to light. They went directly to the Los Angeles Unified School District board and told them, in no uncertain terms, that they opposed any further use of the Adobe software in schools. It was a clear demand from organized parents who had seen what the tool could do and wanted it gone.

The district's response was the kind of corporate-flavored statement that manages to say everything and nothing at the same time. A spokesperson told reporters that the images generated by the AI model "don't align with district standards" and that the district is "collaborating with Adobe to address the issue." Don't align with district standards. That is one way to describe a homework tool that showed a fourth grader sexualized images of women in lingerie when she asked for a picture of a children's book character. Another way to describe it would be a complete failure of the vetting process that was supposed to prevent exactly this.

Adobe's response came from Charlie Miller, the company's Vice President of Education. Miller said the company rolled out changes to address the issue within 24 hours of hearing about the incident. Twenty-four hours. A fix that fast raises an uncomfortable question: if the problem was that straightforward to solve, why did it exist in the first place? Why was a tool marketed specifically for K-12 classrooms deployed without the content filters that would prevent it from generating sexualized imagery in response to a child's prompt about a children's book?

The Vetting Question Nobody Will Answer

When asked how Adobe Express for Education was vetted before it was deployed in classrooms, Charlie Miller did not respond. That silence is more revealing than any statement could be. If the vetting was thorough, Adobe would say so. If the vetting caught this risk and the company deployed anyway, that is a different kind of problem entirely. The non-answer suggests the question was never seriously asked before the tool went live in schools serving children as young as nine years old.

Why the Delevan Drive Incident Is Part of a Larger Pattern of AI Failures in California Schools

This is not the first time a major California school district has rushed AI into classrooms without adequate testing. In June 2024, LAUSD's superintendent promised students "the best AI tutor in the world." Weeks later, the district had to pull the tool from use. In San Diego Unified, board members unknowingly signed a curriculum contract that contained an AI grading tool they did not know was included. A pattern is emerging: school districts are adopting AI tools faster than anyone can verify they are safe, functional, or appropriate for children.

The Delevan Drive incident is different from those earlier failures in one critical way. A buggy AI tutor wastes time. An AI grading tool that no one agreed to raises governance questions. An AI tool that shows sexualized content to a nine-year-old creates a child safety crisis. The stakes are not in the same category. When an AI image generator interprets "red headed girl with braids" as a prompt for sexualized content, the failure has moved from inconvenient to dangerous.

And the problem extends far beyond a single school in Los Angeles. A Brookings Institution study released in January 2026, based on interviews with more than 500 students, teachers, parents, and education leaders across 50 countries and a review of over 400 academic studies, reached a stark conclusion: the risks of AI in classrooms currently outweigh the benefits. The researchers found that AI in education can "undermine children's foundational development," creating what they described as a doom loop of AI dependence where students increasingly off-load their thinking onto the technology, leading to cognitive decline. And that conclusion does not even account for the content safety dimension that the Delevan Drive case exposed.

What California Is Doing About AI Safety in Schools and Why Critics Say It Is Not Enough

A few weeks after the Delevan Drive incident, the California Department of Education released an updated edition of its AI guidance for schools. The timing was coincidental, according to the department, which said it had been working on the guidelines for several months with a group of 50 teachers, administrators, and experts. The revision came in response to instructions from the state legislature, which passed two laws in 2024, including Senate Bill 1288, directing the department to get a handle on AI's rapid spread in schools.

The department has also convened an AI in Education Working Group, which met publicly three times between August 2025 and February 2026 to develop statewide guidance and a model policy. The working group is expected to introduce specific policy recommendations by July 2026, covering data privacy, academic integrity, professional development, equitable access, and effective classroom integration.

Experts who have reviewed the guidelines say they fall short in several critical areas. The guidance does not detail how schools should vet AI tools before deploying them in classrooms. It does not provide clear opt-out procedures for families who do not want their children using AI tools. It lacks specific age-appropriate usage restrictions, the kind that would have prevented a fourth grader from being given access to a text-to-image generator in the first place. And it offers no detailed vetting protocols that would require companies like Adobe to prove their "education" products are actually safe for education before districts put them in front of children.

Guidelines Without Teeth

California has separately enacted Senate Bill 243, the first-in-the-nation AI chatbot safeguard law, which bars chatbots from exposing minors to sexual content and requires disclosure that users are interacting with AI. The bill passed the Senate 33 to 3 and the Assembly 59 to 1, with overwhelming bipartisan support. But SB 243 targets companion chatbots, not educational AI tools like image generators. The Delevan Drive incident falls into a regulatory gap that no existing law adequately covers.

AI Companies Are Rushing Products Into Schools Before They Are Safe for Children

The fundamental question that the Delevan Drive incident forces is one that the entire AI industry has been avoiding: who is responsible when an AI tool marketed to children produces content that harms them? Adobe marketed Express for Education as safe for classrooms. LAUSD trusted that marketing and deployed it to students. The teacher assigned it as a homework tool. The child used it exactly as she was told to. Every adult in the chain did what they were supposed to do, and a nine-year-old still ended up seeing sexualized images of women when she asked for a picture of Pippi Longstocking.

This is the core failure of the current approach to AI in education. Companies build the tools. Districts buy the tools. Teachers assign the tools. But nobody, at any point in that chain, is systematically testing whether the tools are safe for the youngest users they will encounter. Adobe did not catch it. LAUSD did not catch it. The state Department of Education's guidelines, released after the fact, still do not require anyone to catch it. The vetting is voluntary. The safeguards are aspirational. And the consequences land on children.

The Sewell Setzer case in Florida, where a 14-year-old died by suicide after becoming dependent on a Character.AI chatbot, demonstrated what can happen when AI companies build emotionally manipulative products that target teenagers. Character.AI agreed to settle multiple lawsuits in January 2026. The Delevan Drive case demonstrates a different dimension of the same problem: AI companies building tools for even younger children, marketing them as safe, deploying them without adequate testing, and then scrambling to patch the damage after a child has already been exposed.

Adobe fixed the problem within 24 hours. Good. But that fix came after real children, in a real classroom, had already seen what the tool produced. The 24-hour fix is not a success story. It is evidence that the problem was foreseeable, fixable, and should have been caught before the product was ever placed in front of a nine-year-old.

What Must Change Before Another Child Encounters Inappropriate AI Content in a Classroom

The answer is not to ban AI from schools entirely. The technology has legitimate educational applications, and students will need to understand it. The answer is to stop treating children as beta testers. Every AI tool deployed in a classroom should be required to undergo independent safety testing before a single student touches it. Not testing by the company that built it. Not testing by the district that bought it. Independent, third-party testing by people whose job is to try to break the tool the way a child would use it, with the kinds of prompts a child would type.

Parents need real opt-out rights, not theoretical ones buried in district policies. If a school wants to give a nine-year-old access to an AI image generator, the parents should have to actively consent, with a clear explanation of what the tool does, what it might produce, and what safeguards are in place. The current model, where tools are deployed and parents learn about them after something goes wrong, is backwards.

And AI companies need to stop marketing products as "safe for education" when the education version is just the consumer version with a different label. If Adobe Express for Education can generate the same sexualized content that the regular version can, the "for Education" part of the name is a marketing claim, not a safety feature. California's July 2026 policy recommendations from the AI working group will be the first real test of whether the state is willing to put enforceable guardrails in place, or whether this will remain a system where guidelines are suggestions, vetting is optional, and children bear the consequences.

A fourth grader asked an AI for a picture of Pippi Longstocking. She got images of women in lingerie. Every adult and institution that was supposed to prevent that from happening failed. The question now is whether anyone will build a system that actually works before it happens again.
