Manipulated, sexualized images of German actress Collien Fernandes circulated online for years. Far from being an isolated incident, the case points to a structural problem: Artificial intelligence (AI) now makes it possible to create highly realistic sexualized depictions of real people without their consent.
The case also raises a question European law has yet to answer: Does harm require a witness, or does violating someone’s dignity in private already cause injury?
European law has so far treated deepfakes as a content moderation problem — wait for something harmful to appear, then remove it. The law’s fixation on distribution is a decision to look away until the damage becomes visible to others.
But that assumption does not match the lived reality of those affected. A manipulated image of a real person exists the moment it is generated. The victim’s loss of control over her own image — over her own body — is immediate and complete. Research consistently documents depression, anxiety, suicidal ideation, reputational damage, and forced career or housing changes among victims.
If harm can arise at the point of creation, responsibility cannot begin only at publication. It begins upstream with the platforms that host the tools to generate fake, sexualized images; the app stores that distribute them; and the companies that build them knowing exactly what they will be used for.
Women are disproportionately harmed. Sensity AI’s landmark 2019 mapping of the deepfake landscape, cited by the U.S. Department of Homeland Security, found that more than 95 percent of all deepfake content online was nonconsensual intimate imagery and that virtually all of its victims were women.
Given how easily someone can now use AI to violate the privacy and personhood of another, European law should be updated to recognize that harm can arise at the point of creation and to assign responsibility across the ecosystem accordingly, not only at the stage of distribution. Moderation alone does not address the original harm created the moment the image is generated.
Profiting From the Tools of Abuse
The app stores of Google and Apple still host dozens of so-called nudifier apps, which are designed to generate realistic, manipulated sexualized depictions of real people.
Research by the Tech Transparency Project found that search functions, autocomplete tools, and in some cases even advertisements actively direct users toward such products. According to the findings, the identified apps were downloaded hundreds of millions of times and generated revenues in the hundreds of millions.
Abuse is not merely a possible side effect; it is often part of the business model. Anyone who distributes, lists, or monetizes such apps shares responsibility.
Existing European law has only partially caught up with this reality. The current European Union AI Act mainly contains transparency obligations, such as labeling requirements for AI-generated content. An explicit ban on such applications is still missing.
This is where the ongoing legislative process on the so-called European Union AI Omnibus, a package updating the AI Act, comes in. The European Parliament and the Council of the European Union have, in principle, agreed on a ban on nudifier apps.
Under current drafts, however, that ban would not apply if providers implement “effective safeguards” that prevent users from generating such images. For most AI tools, that logic might be defensible. For apps whose entire purpose is to sexualize real people without their consent, it is not: No safeguard redeems a function that should not exist.
In practice, providers themselves would therefore decide whether their own safeguards are sufficient. That is roughly equivalent to allowing car manufacturers to determine for themselves whether their brakes work.
The ongoing negotiations between the Commission, the Council, and the Parliament offer a chance to close this loophole. A credible ban requires clear criteria, independent verification, and meaningful sanctions for violations.
A Question of Principle
The Fernandes case did not settle whether harm occurs only upon distribution or may already arise upon creation. As long as the law focuses only on publication, it intervenes after the harm has already begun.
Europe now has an opportunity, through the ongoing AI Omnibus negotiations, to decide whether dignity is protected once an audience is watching or at the moment it is violated. That answer will determine not only what platforms must remove, but what they must never enable in the first place.
A person’s dignity does not end where no one is watching. My body must not become someone else’s image without my consent.


