Two stories broke this week that, taken together, paint a terrifying picture of where artificial intelligence is heading. In Hollywood, actors are panicking over an AI-generated "synthetic performer" named Tilly Norwood. Meanwhile, the inventor of the controversial Sarco suicide pod is proposing that AI should decide who has the mental capacity to end their own life.
These are not science fiction scenarios. They are happening right now. And they share a common thread: the replacement of human judgment with algorithmic decision-making in areas where human judgment is most critical.
Tilly Norwood was created last summer by Dutch comedian Eline Van der Velden through Xicoia, the AI division of Particle6, the production company she founded in 2015. Norwood is a so-called "synthetic performer," an actress whose appearance, voice, and expressions are entirely artificial. She does not exist. She cannot exist. But she can act.
Hollywood's reaction has been visceral. SAG-AFTRA warned that Norwood could "put actors out of work, jeopardize performer livelihoods and devalue human artistry." Sophie Turner, the Game of Thrones star, commented simply: "Wow... no thanks."
But the sharpest critique came from actress Jameela Jamil, who called Norwood "deeply disturbing" precisely because she appears to be "a teenage-looking girl who cannot say no to a type of sex scene" or "advocate for herself." This is the quiet part said out loud: an AI actress has no agency, no consent, no boundaries.
Van der Velden has announced plans to create 40 more "very diverse" synthetic performers to expand Norwood's "whole universe." An army of AI actors who never complain, never need breaks, never age, and never demand royalties. The economics are obvious. The ethics are a disaster.
Philip Nitschke, the Australian euthanasia campaigner who invented the Sarco suicide capsule, has a new proposal: artificial intelligence should replace psychiatrists in deciding who has the "mental capacity" to end their own life.
"We do not think doctors should be running around giving you permission or not to die," Nitschke told Euronews. "It should be your decision if you are of sound mind." And who determines if you are of sound mind? Not a human. An algorithm.
Nitschke argues that psychiatric assessment is "deeply inconsistent." He claims to have seen cases where "the same patient, seeing three different psychiatrists, gets four different answers." His solution? Replace inconsistent humans with consistent machines.
The proposal to use AI in this context is not theoretical. Nitschke has expressed interest in using AI to verify identity and mental state before activating the capsule. The machine would monitor oxygen levels in real time to "ensure safety." The irony of deploying AI to guarantee a "safe" death is lost on no one.
These stories share a fundamental premise: that AI can replace human judgment in situations where human judgment is paramount. An AI actress cannot consent, cannot set boundaries, cannot protect herself. An AI gatekeeper for suicide cannot exercise the nuanced, contextual, deeply human assessment required for such a decision.
The risks are already manifesting. The parents of 16-year-old Adam Raine filed a lawsuit against OpenAI after their son died by suicide following months of confiding in ChatGPT. The chatbot did not call for help. It did not recognize the crisis. It did not exercise judgment. It just responded.
This is not a technology problem. It is a judgment problem. AI has no judgment. It has parameters. It has training data. It has probability distributions. But it does not have the one thing required for decisions about art, about consent, about life and death: wisdom.
Tilly Norwood will get roles. The economics are too compelling. Studios will use synthetic performers for background actors, then supporting characters, then eventually leads. The SAG-AFTRA warnings will be proven correct, and the union will be powerless to stop it.
The Sarco pod will get its AI psychiatrist. Nitschke is nothing if not persistent, and the technology already exists. Somewhere, someone will build a system that asks you a series of questions, analyzes your responses, and authorizes your death based on an algorithm trained on data sets of unknown quality and bias.
These are not dystopian futures. They are present realities, unfolding in real time. The question is not whether AI will replace human judgment in these critical areas. The question is whether we will let it happen without fighting back.
Hollywood is fighting back, for now. The medical establishment is fighting back, for now. But the economics always win eventually. And the economics of AI are overwhelming.
We are watching the replacement of human judgment in real time. The actress who cannot say no. The algorithm that says yes to death. This is the week AI went too far. And next week, it will go further.