Jeremy Malcolm

AI and Victimless Content under Europe’s CSA Regulation

On November 14 the European Parliament presented its compromise take on the European Commission’s controversial draft CSA Regulation. Commissioner Ylva Johansson’s proposal mandates that Internet platforms operating in Europe conduct surveillance of their users. It effectively requires them to use AI classifiers to sift through a vast stream of private communications to identify suspected child grooming and exploitation. The compromise is a more measured instrument that sidelines the role of AI classifiers, instead focusing on material that has been actively used to exploit real victims.

While the messaging of those driving the CSA Regulation focuses on coercively produced child abuse content, AI classifiers would also surface a great deal of other material that law enforcement currently doesn’t see. This includes nudes that teens exchange with each other, innocent bathtime or medical photos, and even fantasy content such as cartoons and stories. While tools to surface such content do have a place in platforms’ trust and safety workflows, they have no place in a mandatory reporting regime.

By ruling such content out of scope, the Parliament’s compromise makes the proposal more victim-focused. It also puts the brake on a decades-long push by conservative groups to extend the harsh legal treatment that we reserve for real CSAM to victimless content. While the compromise may be bad news for AI tech vendors such as the beleaguered Thorn, it is good news for already-overwhelmed prosecutors, for parents, for the arts and entertainment sector, and most of all for children.

The European Parliament’s CSA Regulation proposal

Here are three of the most significant changes that the European Parliament’s compromise makes to the surveillance mandate of the original CSA Regulation proposal:

  1. Rather than requiring providers to surveil their users on a blanket basis, monitoring is to be performed by a new EU Centre on Child Sexual Abuse only on content that is publicly accessible. To search private messages or groups, a search warrant would be required, which would only be issued if there were already some evidence of criminal behavior.
  2. There will be no mandate for providers to limit or to circumvent end-to-end encryption. While this is a tough pill for child safety groups to swallow, there was never really any question that mandating the circumvention of encryption would be a bridge too far; the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, recently suggested that doing so would mark a point of no return for online privacy.
  3. The scope of the EU Centre’s proactive scanning is to be limited to known images of actual child sexual abuse. This is perhaps the most significant compromise of them all. It means in a nutshell that AI classifiers are not to be used to trawl through chat messages, nor through family photos and selfies, nor through artistic images, to surface new material of possible law enforcement interest.

On all three counts, the Parliament’s position on the CSA Regulation is a rejection of authoritarian law enforcement powers, and a reaffirmation of the human rights that constrain governments from using such powers against the people – for any reason at all. It lays down a very clear and principled red line that the Commission will find it difficult to justify crossing.

Seeking out new sexual images of minors is abuse

In a previous article on the UK Online Safety Act, I explained how over three-quarters of CSAM now detected by analysts online is self-generated, mostly by teenagers for whom taking and sharing nudes is developmentally normal. When this content is leaked, it is rightly treated as CSAM. But such content is only the tip of a much larger iceberg of digital images that are kept private.

Under the surveillance mandate of Ylva Johansson’s CSA Regulation, platforms and apps that teenagers use to take, store, or share nude selfies or sexting materials would be required to proactively seek that content out and report it to authorities. This unnecessarily puts such teenagers in harm’s way, which will have inevitable tragic results.

But more than that, for the law to require platforms to deliberately search through private cloud storage to seek out new sexual images of minors that would otherwise never see the light of day is at the very least a serious privacy violation. Requiring that strangers look at those images before possibly reporting them to the police is arguably a form of sexual abuse in itself. Seeking out private sexual images of minors should never be OK, even if the person doing so is under government orders.

CSAM and obscenity

Just as worrying is that the CSA Regulation would require AI systems to classify all artwork that users create or upload online. A number of European countries (and the United Kingdom) criminalize fictional representations of child sexuality or abuse. These laws aren’t consistently enforced, because such representations are found throughout popular culture and history: in movies like American Beauty and Cuties, books like Lolita and The Color Purple, and comics like Lost Girls and A Girl on the Shore, among innumerable other examples.

But while Hollywood typically gets a pass, when similar works are published non-commercially by teenage fans or by LGBTQ+ people, they are much more likely to be flagged as criminal. In countries that criminalize such content, teenagers are already being arrested for distributing their own artwork. Platforms are being forced to remove comics written by incest abuse survivors. Even sharing a sexual Simpsons meme can land you on a register.

The United States gets a lot wrong in its response to sexual violence. But one thing that it gets right is to maintain a hard distinction between CSAM and obscenity. In the 2002 case of Ashcroft v. Free Speech Coalition, the Supreme Court found that the constitutional basis for the categorical illegality of CSAM (or “child pornography” under American law) is that it involves the sexual abuse of a real child. Since virtual or artistic images involve no such abuse, they do not warrant the extreme measures, such as mandated reporting, that are justified in fighting CSAM.

Criminalizing victimless content does not prevent abuse

This also raises a much bigger and recurring question about whether we should be treating artwork as potentially criminal at all. The most confounding thing about even addressing this question is that proponents of criminalization seldom honestly state their motivations. Rather than admitting that they simply have a moral objection to such content existing, they will obscure this with a fig leaf of pseudo-scientific speculation that such content normalizes sexual violence against real victims.

Although proponents of criminalization themselves acknowledge the dearth of research to back up these expansive claims (which are really just a rehashing of earlier second-wave feminist claims about pornography), what research we do have suggests the opposite. Both historical data and more recent phenomenological research suggest that the broad availability of materials that represent underage sexuality and abuse in a fictional or fantasy context could help to reduce real-world offending rather than fueling it.

The remaining moral argument, that such content shouldn’t be allowed to exist simply because it can be offensive, does not rise to the level that would make it necessary and proportionate to treat it as CSAM. As part of my job I have had to view real, harrowing abuse images, and there is simply no comparison between these and any form of artwork, no matter how offensive. Frankly, it is insulting to victims, and a manifestation of rape culture, to equate them in any way.

How trust and safety teams should handle victimless content

So how should trust and safety teams handle victimless content? Taking first the case of self-produced nude or sexual content that teenagers may inappropriately share, safety by design is a platform’s first line of defense. In a previous article on the UK’s Online Safety Bill I gave some specific examples of technologies that are being used now to introduce friction and guardrails that can prevent teens from making the impulsive decision to share such content in the first place.

For materials that are posted publicly and subsequently re-shared as CSAM, hash-based scanning technologies are able to identify known examples with a high degree of accuracy. AI classifiers can also play a role in identifying new content that surfaces in public feeds. But the hype over AI’s potential should not detract from the tried and true method of user reporting, which is frequently the first method by which a platform becomes aware of new CSAM. In every case, a responsive and well-trained professional trust and safety team is needed to ensure that timely action is taken.
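
To make the hash-matching step concrete, here is a minimal sketch in Python. The hash set, function names, and routing labels are hypothetical; real deployments use perceptual hashes such as PhotoDNA so that resized or re-encoded copies still match, whereas a plain cryptographic digest is used here only to keep the example self-contained.

```python
import hashlib

# Hypothetical hash set of confirmed abuse imagery, as would be supplied by a
# hotline or the proposed EU Centre. Real deployments use perceptual hashes
# (e.g. PhotoDNA) so that resized or re-encoded copies also match; SHA-256 is
# used here only to keep the sketch self-contained.
KNOWN_HASHES: set[str] = set()


def is_known_csam(image_bytes: bytes) -> bool:
    """Return True only if the image exactly matches a hash-listed item."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES


def route_public_upload(image_bytes: bytes) -> str:
    """Confirmed hash matches are auto-actioned; anything else goes to the
    ordinary moderation queue fed by user reports and classifier flags."""
    if is_known_csam(image_bytes):
        return "block_and_escalate"
    return "queue_for_human_review"
```

The design point worth noting is that only a confirmed hash match triggers automatic escalation; classifier flags and user reports feed a human review queue rather than an automated report to authorities.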

As for fictional and fantasy content, platforms can of course develop and enforce their own policies. However, I have recommended that, due to the inherent subjectivity of artwork and the fact that over-censorship disproportionately affects LGBTQ+ communities and CSA survivors, it is usually far better for platforms to use tagging and filtering to allow users to decide for themselves what content they wish to see. This is something that few platforms do very well now, which means that there is much room for learning and improvement.
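
As a rough illustration of what user-controlled tagging and filtering can look like, here is a minimal sketch; the tag names and data structures are hypothetical and not modelled on any particular platform.

```python
from dataclasses import dataclass, field


@dataclass
class Post:
    text: str
    tags: set[str] = field(default_factory=set)          # creator-applied content tags


@dataclass
class ViewerPreferences:
    hidden_tags: set[str] = field(default_factory=set)   # tags this user has opted out of


def is_visible(post: Post, prefs: ViewerPreferences) -> bool:
    """Hide a post only when it carries a tag the viewer has chosen not to see;
    the decision rests with the reader, not with a central censor."""
    return not (post.tags & prefs.hidden_tags)


# Example: a reader who has opted out of mature fictional content
prefs = ViewerPreferences(hidden_tags={"mature-themes"})
post = Post(text="fan comic, chapter 3", tags={"fiction", "mature-themes"})
assert is_visible(post, prefs) is False
```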

A great resource for platforms to consult in this context is the Best Practices Principles for Sexual Content Moderation and Child Protection. This document was published in 2019 following a multi-stakeholder consultation process that I led, involving platforms, content creators, and child protection experts. Among other recommendations, it provides that automated systems should only act on confirmed illegal materials as identified by a content hash, and that lawful sexual content, even if it infringes platform policies, should never result in a referral to law enforcement.
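
Expressed as code, those two recommendations might look something like the sketch below; the category names and return fields are my own illustrative shorthand, not taken from the Principles document itself.

```python
from enum import Enum, auto


class Category(Enum):
    HASH_CONFIRMED_CSAM = auto()      # matched a vetted hash list of known abuse imagery
    LAWFUL_POLICY_VIOLATION = auto()  # legal content that breaks platform rules
    PERMITTED = auto()


def automated_action(category: Category) -> dict:
    """Automated removal only for hash-confirmed material; no law enforcement
    referral for lawful content, however much it violates platform policy."""
    if category is Category.HASH_CONFIRMED_CSAM:
        return {"remove": True, "refer_to_law_enforcement": True}
    if category is Category.LAWFUL_POLICY_VIOLATION:
        # Platform-level remedies only: removal or labelling, with a right of appeal.
        return {"remove": True, "refer_to_law_enforcement": False}
    return {"remove": False, "refer_to_law_enforcement": False}
```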

Conclusion

As a trust and safety professional, I refuse to be forced into the position where I am required to snitch on sexting teenagers, or to become an official censor of art or fiction. I would decline to work for any company that chose, of its own accord, to report such content to the government. But all the more vehemently do I object to being required to report it to the government by law, as I would be under the European Commission’s proposed CSA Regulation.

It’s true that most new content legally classified as CSAM originates with teenagers themselves, and that hash-based scanning cannot detect this content. However, to completely eliminate new sexting content from the Internet would require not a war on child abusers, but a war on puberty. Mandatory AI-based spying on the content that teenagers self-produce would only put them at risk. Instead, platforms are already deploying privacy-protective methods to deter teens from sharing intimate materials in the first place, and to detect and report those that are shared in public channels.

Similarly, more proportionate methods exist to manage the visibility of sensitive artistic content on public platforms. Although often framed as such, enforcement of public morality is not synonymous with the protection of children, and in fact can often undermine that aim. Personally, I don’t think that policing such content ought to be a function of government at all. A better approach is for platforms to empower their users to control their own exposure to potentially offensive content, using robust tagging and filtering mechanisms.

The European Parliament is wisely steering the European Union away from requiring platforms to employ AI classifiers to report victimless content to authorities. Its compromise proposal, which authorizes targeted action against known images of actual abuse rather than speculative trawling through personal content, limits authorities to their rightful role of investigating crimes against real victims.

This compromise is not the end of the negotiation over the CSA Regulation, but the beginning. It is likely that some elements of the original draft, probably including the legalization of voluntary platform scanning, will return in the final instrument. But for now, the Parliament’s intervention has applied a welcome correction to the European Commission’s authoritarian law enforcement agenda, and created a more rights-respecting baseline for negotiations going forward.

Join me for a discussion on November 17

Join me, Dr. Gilian Tenbergen (Executive Director, Prostasia Foundation) and Aaron Weiss (Owner and Director, Forensic Recovery, LLC) at a free webinar to discuss the CSA Regulation and other proposals to limit end-to-end encryption in the name of child protection. Will this actually work or are there better ways to make the internet a safer place for our children? Find out at Child Protection in an Age of End-to-End Encryption.
