Deepfakes, Fiction, and the Future of CSAM Law

Originally published at c4osl.org.

On February 10, 2026, Australian author Lauren Ashley Matrosa was convicted of possessing and distributing child abuse material. However, no children were involved or harmed. The conviction was based on an erotic novel that she had written, in which the only sexual activity takes place between adults play-acting a “Daddy Dom/little girl” (DD/lg) fantasy. Nevertheless, the judge ruled that Australia’s definition of “child abuse material” (which in other jurisdictions is known as CSAM, CSAEM, or child pornography) is broad enough to include even fictional representations of such roleplay.

A kinky canary in the coal mine

It may seem paradoxical that representations of DD/lg (or ageplay, the broader term for age-based BDSM roleplay) should be illegal while the actual practice of ageplay remains legal. BDSM-related fantasies are widely practiced as a means of exploring power and identity, or even of processing past experiences. Despite this, ageplay communities are frequently stigmatised as inherently abusive or as proxies for child sexual abuse (CSA), even though no minors are involved.

A prominent UK child safety advisor who supports banning DD/lg content has described it as the currency of predatory paedophiles, a characterisation that prompted Facebook to ban the topic. Today, ageplay-related search terms are shadowbanned on major platforms or trigger deterrence messages warning users that they may be seeking illegal content.

My argument has always been that this is the wrong approach. It’s true that BDSM content is offensive to many and that people should not be exposed to it without their consent. But that is precisely why “trigger warnings” in a book, or equivalent tags and filters for online content, are a better approach than blanket censorship or criminalization. Matrosa’s book carried numerous trigger warnings, but the court judged this irrelevant, assessing the legality of the book not by the standards of the BDSM community it was written for, but by the standards of a hypothetical offended observer.

UN moves to broaden the CSAM definition

The Matrosa case is not an anomaly. It is the domestic manifestation of a broader shift in how institutions define and regulate “child abuse material.” Just one week prior to Matrosa’s conviction, UNICEF, the United Nations children’s agency, issued a statement urging states to broaden their legal definitions of CSAM to include AI-generated content “even without an identifiable victim”.

While this statement does not expressly address the case of novels, a 2019 UN draft proposal had recommended that states extend the definition of CSAM to include “written materials in print or online”. When this draft recommendation encountered significant opposition, including objections from over 17,000 signatories to a petition, the final document settled on a less specific recommendation that “representations of non-existing children or of persons appearing to be children” should be covered.

The problem is this: CSAM is a term that doesn’t get stronger the more you pack into it. Rather, it gets weaker. The term “child sexual abuse material” was expressly coined to replace “child pornography” because the latter fails to convey that those depicted in such material are victims of child sexual abuse. The moral gravity of the term, and the justification for attaching lengthy terms of imprisonment to it, are weakened when its scope is broadened to include victimless materials.

Deepfakes are a problem we can solve

With all that said, the UNICEF statement does call attention to a very real and legitimate concern: generative AI is being used to create sexually explicit deepfake images and videos of real victims. Across 11 countries studied by UNICEF, ECPAT, and Interpol, as many as 1.2 million children had their images manipulated into deepfakes in the past year, most commonly by their own peers. Despite the virtual nature of the images, this constitutes a form of image-based sexual abuse, causing direct and profound harm.

Clearly, something must be done. But loosening the definition of CSAM to include victimless content is neither necessary nor sufficient to address this problem. The harm of deepfakes arises from the non-consensual use of a real person’s image — not from the abstract existence of offensive synthetic imagery. Expanding criminal categories to cover all AI-generated content risks diverting attention and resources away from identifying victims, removing abusive material, and holding perpetrators accountable.

Here are three better solutions:

  • Education: If, as UNICEF reports, most deepfakes are created by peers within schools, then our response must account for the reality that many perpetrators are themselves minors. Indiscriminate prosecution can exacerbate harm, entrench stigma, and disrupt rehabilitation among minors who may not fully appreciate the consequences. Instead, policy should prioritize education, digital literacy programs, age-appropriate interventions, and restorative justice approaches. Criminal penalties may be justified in egregious cases involving intent to harass or exploit, but proportionality must prevail: prevention and education outperform punishment in addressing youth-driven behavior.
  • NCII laws: Targeted, victim-centered alternatives to CSAM laws already exist and work better in many contexts. Numerous jurisdictions have enacted laws on non-consensual intimate imagery (popularly called revenge porn, and a subset of tech-facilitated gender-based violence or TFGBV), some explicitly extending protections to AI-generated content featuring identifiable individuals. These frameworks offer practical tools: rapid content takedown mechanisms, civil damages for victims, and platform liability for failing to remove clearly unlawful material.
  • Data protection laws: In other jurisdictions, particularly within Europe, existing data protection frameworks offer another, more targeted solution to the problem of AI deepfakes. The EU’s General Data Protection Regulation (GDPR) treats biometric data — including facial recognition data and image embeddings used in AI systems — as a special category of personal data that may be processed only under narrow exceptions. The scraping, storage, and manipulation of children’s images to generate deepfakes may already violate these provisions. Rather than expanding criminal definitions of CSAM to encompass all synthetic imagery, enforcement efforts could focus on unlawful data processing and misappropriation of likeness.

Conclusion

When it comes to generated or artistic content, the decisive question is simple: does it exploit and harm a real, non-consenting person? If it does — as in the case of deepfakes of real children — it is a form of abuse and demands a firm, targeted response. If it does not, then however offensive it may be, it does not belong in the same criminal category.

Edge cases like AI-generated deepfakes have led some to argue for collapsing all depictions of minors, real or imagined, into the definition of CSAM. But conflating fiction with victimization weakens both enforcement and principle. Criminal law loses clarity. Resources are misdirected. And the moral gravity of the term “child sexual abuse material” is diluted. 

The case of Lauren Matrosa illustrates where this path leads: criminal liability imposed not for harm, but for offense. A free society does not protect only inoffensive art. It protects even offensive art and literature, because criminal penalties must be necessary and proportionate, imposed only to prevent or punish conduct that causes real and identifiable harm. This is not a radical proposition; it is a cornerstone of international human rights law. Offensive art shouldn’t be distributed without safeguards, but it should be allowed to exist.

Real CSAM, on the other hand — including deepfakes of real children — is not merely offensive. It is abusive. Our response to it demands precision, enforcement, and support for victims. The solution, therefore, is to target the harm directly, through measures such as preventative education, data protection frameworks, and targeted image-based abuse laws, not to expand the existing criminal category of CSAM until it loses its meaning.

This May, the Center for Online Safety and Liberty (COSL) will convene a session at RightsCon 2026 to examine these sensitive and complex issues directly. RightsCon is the world’s leading summit on human rights in the Internet age, attended by Internet companies, intergovernmental bodies such as UNICEF, and civil society activists.

Our session will explore how policymakers can address AI-enabled abuse without collapsing fiction into criminality, how existing tools and frameworks can be deployed more effectively, and how freedom of expression principles can be preserved while strengthening protection for real victims. The discussion will include lawyer and activist Mar Diez, Professor K S Park of Open Net Korea, Shambhawi Paudel of ILGA Asia, and Emma Shapiro of the Don’t Delete Art project, each bringing expertise on digital rights, platform governance, gender-based violence, and artistic censorship. We welcome you to join us there.


