Jeremy Malcolm

Generative AI and Children: Prioritizing Harm Prevention

Among the many hot policy issues around generative AI, one that has gained increasing attention in recent months is the potential (and, increasingly, actual) use of this technology to create, as the New York Times puts it, “explicit imagery of children who do not exist.” The concerns expressed are mostly pragmatic rather than moral; for example, the Washington Post warns that law enforcement officials “will be forced to spend time determining whether the images are real or fake.”

Generative AI vendors such as OpenAI, Midjourney, and Stability AI have responded proactively by building in safety features to prevent their products from being used to produce content depicting naked minors. However, open source implementations of generative AI software without such safeguards are already being actively used to create such images, some of which are visually near-indistinguishable from real photographs. Indeed, there is no way – at least, no technical way – to prevent digital artists from using such software to depict anything at all.

Is the criminal law needed here?

There are certainly cases where criminal prosecution of those engaged in the production of these images will be appropriate; for example, in cases where a perpetrator used CSAM (illegal sexual images of real children) as training inputs, or used generated outputs to groom real minors for abuse.

The American Enterprise Institute’s Daniel Lyons would go further. He suggests that Congress should step in and reintroduce a law, struck down as unconstitutional 21 years ago, that criminalized the possession or distribution of “any visual depiction that is or appears to be of a minor engaged in sexually explicit conduct,” or that is “advertised, promoted, presented, described, or distributed in such a manner” as to convey that impression.

That would go too far. Under this standard, too much art, fiction, adult content, and survivor storytelling would also be criminalized. We know this because there’s a record of it happening in countries that already include the written word and artwork in their legal definitions of child pornography or CSAM. Examples that I’ve previously given on this blog include a 17-year-old Costa Rican girl criminalized for posting her own artwork to social media, and a website host threatened with prosecution for hosting an incest survivor’s graphic novel memoir.

In another such case from 2020, a horror novelist was prosecuted under Canadian child pornography law for including a short scene of incest in a retelling of Hansel and Gretel. The Canadian court ultimately acquitted him and ruled the provision under which he was charged unconstitutionally overbroad. Last month, the author announced that he would be retiring due to the stigma that he continues to suffer from these charges – even as his horror book series has been picked up for television adaptation.

In this context, we should be very hesitant to embrace the narrative that a new wave of speech regulation is needed, at a moment when morality groups are explicitly seeking to use child protection and obscenity laws to target the speech of marginalized groups such as LGBTQ+ communities.

Legal solutions

Thankfully, a broad ban on fantasy sexual materials is unnecessary to address the narrow, practical concern about AI-generated images being mistaken for real ones in law enforcement operations. Existing legal frameworks can be leveraged to address these specific concerns.

First, photorealistic computer-generated images that resemble CSAM are already treated as child pornography under federal law. 18 U.S. Code § 2256(8)(B) defines child pornography to include a “computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct.” The Justice Department’s view is that this already covers artificially generated images.

Additionally, nothing currently prevents prosecutors who are in any doubt about whether an image is real or generated from adding obscenity charges to their indictment, since obscenity law already covers fantasy sexual materials that a jury finds to be obscene. Computer forensics expert Aaron Weiss, who has over 15 years of experience in CSAM cases, told me:

Even if a defense attorney presents the question to a jury as to whether the graphics/videos shown to them could be AI-generated, I don’t see it having a significant impact. Those items being displayed in court are so powerful, and if it looks real enough, I do not see the possibility of it being AI-generated making a difference.

If such cases aren’t being brought, this may have less to do with deficiencies in our current legal framework (as Senators Blackburn and Ossoff have suggested in an inquiry to the Department of Justice), and more to do with the prioritization of enforcement resources towards cases where real children can be saved from ongoing abuse. Only about 3.5% of reports of CSAM are ever investigated at all, and law enforcement agencies are forced to prioritize only the most serious cases. Adding virtual images to this backlog would be misguided.

Weiss told me, “I do believe that an over-focus on AI-generated CSAM is a distraction from helping child victims who are being sex trafficked or abused in their homes by friends and relatives.” Other commentators reached the same conclusion long ago, arguing that by concentrating enforcement resources on image offenses, we show too little concern for current and future victims of hands-on abuse.

Operational solutions

The concern about AI-generated images being mistaken for real ones can also be effectively addressed through technological solutions. There is promising research into technologies that can reliably and automatically distinguish AI-generated content from real content. This could ensure that only real images are presented to law enforcement for investigation.
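To make that idea concrete, here is a minimal sketch (in Python) of how such a detector might be used to triage incoming reports so that investigators see likely-real imagery first. The synthetic_likelihood function is a hypothetical placeholder for whatever detection model that research produces, not an existing API, and the 0.9 threshold is purely illustrative.

```python
from dataclasses import dataclass
from typing import List, Tuple

def synthetic_likelihood(image_bytes: bytes) -> float:
    """Hypothetical detector score: 0.0 means almost certainly a real
    photograph, 1.0 means almost certainly AI-generated. This sketch
    assumes no particular model; plug one in here."""
    raise NotImplementedError

@dataclass
class Report:
    report_id: str
    image_bytes: bytes

def triage(reports: List[Report],
           threshold: float = 0.9) -> Tuple[List[Report], List[Report]]:
    """Split reports into an investigative queue (likely real) and a
    secondary review queue (likely synthetic), rather than discarding
    anything outright."""
    investigate, secondary = [], []
    for report in reports:
        if synthetic_likelihood(report.image_bytes) >= threshold:
            secondary.append(report)    # probably AI-generated
        else:
            investigate.append(report)  # probably depicts a real child
    return investigate, secondary
```

The reason for routing likely-synthetic items to a secondary queue instead of dropping them is that no detector will be perfect; borderline items can still receive human review without clogging the primary investigative pipeline.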

While that in itself wouldn’t stop the circulation of the images, most platforms already disallow realistic or even unrealistic sexualized images of minors from being posted, and mainstream image classification tools – including some that I have recommended – have no trouble intercepting such content for human analysis. Fears of an influx of offensive AI-generated content onto major platforms are unfounded.

A minority of platforms do permit fantasy sexual materials without regard to the apparent age of the characters depicted. Many of these are based in Japan, where there is a surprisingly powerful political constituency dedicated to opposing the censorship of taboo “lolicon” and “shotacon” cartoons. Partly on account of this, a recent attempt to prohibit such content in a new Cybercrime Convention has been tempered by allowing parties to limit the prohibition to visual representations of a real child.

Importantly, even among platforms that allow such fantasy sexual materials, harm reduction safeguards can still be put in place. For example, one of my clients permits AI-generated content in its terms of service, but requires it to be tagged or watermarked to indicate clearly that it is AI-generated, prohibits content that depicts real persons or is indistinguishable from an actual minor, and does not permit content derived from unlawful training materials.
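For illustration only, here is a minimal sketch of how rules like those could be encoded as an automated pre-check in a moderation pipeline. The field and function names are hypothetical, and in practice each flag would be set upstream by watermark or metadata checks, classifiers, or human reviewers rather than being taken on the uploader’s word.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AiSubmission:
    """An upload identified as AI-generated. All flags are assumed to be
    set upstream (watermark/metadata checks, classifiers, human review);
    the names here are hypothetical."""
    carries_ai_tag_or_watermark: bool
    depicts_real_person: bool
    indistinguishable_from_real_minor: bool
    derived_from_unlawful_training_data: bool

def policy_violations(s: AiSubmission) -> List[str]:
    """Return the rules a submission breaks under a policy like the one
    described above; an empty list means it may be posted."""
    reasons = []
    if not s.carries_ai_tag_or_watermark:
        reasons.append("missing AI-generated tag or watermark")
    if s.depicts_real_person:
        reasons.append("depicts a real person")
    if s.indistinguishable_from_real_minor:
        reasons.append("indistinguishable from an actual minor")
    if s.derived_from_unlawful_training_data:
        reasons.append("derived from unlawful training materials")
    return reasons
```

A check like this only enforces the letter of such a policy; the hard part remains the upstream classification and review work that determines each flag in the first place.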

Where is the harm?

Beyond the narrow concern about generated images being mistaken for real ones, a separate concern is sometimes raised about the effects that fantasy sexual materials, whether or not they are indistinguishable from real, could have on rates of real child sexual abuse. On the one hand, advocates of criminalizing such materials argue that they may spur would-be child sex offenders to enact their fantasies in real life, although there is currently no empirical support for this contention.

But on the other hand, researchers at the Stanford Internet Observatory note the possibility that “the use of [fantasy sexual materials] in place of CSAM produced from the non-virtual abuse of living children could serve a preventative purpose—potentially for treatment/impulse management of those identifying with a sexual attraction to minors,” while cautioning that, “neither the viability nor efficacy of such a practice has been sufficiently studied.”

A team of researchers from Nottingham Trent University and the State University of New York at Oswego is currently investigating that very question. The team’s first paper, published in Current Psychiatry Reports in July 2023, outlines a research program to gather evidence on the use of fantasy sexual materials (FSM) by people who experience attractions to children. The paper references an existing small-scale study suggesting that usage of such materials could indeed serve a preventative purpose. In that study, FSM usage was seen to have “a potentially cathartic effect” on the user, reducing their sexual frustration and thereby making them “less likely to express a proclivity for sexual abuse than a comparison group.” Co-author Dr Gilian Tenbergen told me:

While research investigating the creators and users of this content is still in its infancy, the first studies suggest that most users of fantasy sexual materials are using it to fulfil needs for emotional intimacy and connection. While it may feel logical to ban these materials because we assume they are harmful, without the empirical data to support that, we may very well be inflicting the harm ourselves.

Conclusion

A recent flurry of attention towards AI-generated content depicting children should not overshadow the urgent need to prioritize resources for combating actual instances of child exploitation and abuse. The allocation of enforcement efforts must be guided by a clear understanding of the most effective strategies to safeguard real children from harm.

There are legitimate concerns that the use of generative AI tools to produce fantasy sexual materials could interfere with the prosecution of sex crimes against real victims. But those who describe this challenge as terrifying or a worst-case scenario are at best overstating their case, and at worst providing a foundation for bad-faith arguments for broader censorship, which would affect legitimate creative expression and the narratives of child sexual abuse survivors.

Laws, policies, and technologies already exist, or are in development, to address the potential deleterious impacts of generative AI technologies on real world investigations and victims. In particular, there is every indication that distinguishing generated content from real CSAM and preventing it from being widely distributed online are both solvable problems.

Beyond this, the potential impacts of fantasy sexual materials on the prevalence of child sexual abuse remain an area of active investigation. While caution is warranted, advocating for blanket criminalization without robust empirical support may inadvertently hinder efforts to comprehensively protect vulnerable children.

In the realm of generative AI, as with any transformative technology, there is a need for measured, evidence-based responses that strike a balance between safeguarding against potential harm and preserving essential freedoms. In this instance, premature calls for regulation are a Trojan horse for overbroad censorship, and a distraction from our core responsibility to protect real children from harm.


