Jeremy Malcolm

Why the EU will Lose Its Battle for Chat Control

Nobody wants to have their private communications vetted by AI robots. That’s the message that rings loud and clear from the backlash against European Commissioner Ylva Johansson’s proposal for a Child Sexual Abuse (CSA) Regulation. Division over the proposal, dubbed “Chat Control 2.0” by its critics, has only deepened following recent revelations about the extent of surveillance tech company lobbying in its favor, both directly and through newly minted nonprofits claiming to represent CSA survivors.

Public opinion isn’t the only obstacle for those who have pinned their hopes on a legislative solution to the scourge of child sexual abuse online. Faced with the reality of an Internet that natively supports private and secure communications, and a legal framework that explicitly outlaws the kind of general monitoring that would be required to detect CSAM and child grooming within such communications, there is no clear legal path forward for the proposal. Indeed, the EU’s own legal advisors have delivered the frank assessment that there is no way that the proposal can survive in the form that Johansson insists it must.

Johansson’s response to concerns expressed by European Parliamentarians and others has amounted to little more than repeatedly doubling down on her assertion that the proposal “is about protecting children – and only that.” But this response misses the point. Even if her intentions truly are only to protect children (and not, say, to provide law enforcement agencies with general monitoring capabilities that they could use for a range of other purposes), this doesn’t provide carte blanche for lawmakers to override fundamental human rights.

Indeed, the very reason why human rights are recognized in law at all is because governments, led by the nose by majorities and powerful interest groups, are inclined to advance populist policies that violate the rights of minorities who have little or no domestic political power.

Who controls the Internet?

This problem is particularly acute online, where the inherently transnational nature of the Internet means that actions any one government takes to repress or surveil its citizens’ communications have spillover effects on those outside its borders. For this reason, governance of the Internet must never become the province of governments alone, but must be regarded as a shared responsibility to manage a global resource – analogous in some ways to the governance of outer space.

Although now firmly established, this principle wasn’t always so well recognized. In the early 1990s, when the Internet first exploded from a research and hobbyist community into a truly global communications network, urgent political questions about its ownership and control immediately arose. Some suggested that the International Telecommunication Union (ITU), an arm of the United Nations, should take on this responsibility.

Twenty years ago this year, as governments convened the first phase of a meeting called the World Summit on the Information Society (WSIS) to discuss possible future governance arrangements for the network, I was among those who, on behalf of a broad coalition of global civil society groups, fought against a government power grab. Astonishingly, governments actually listened – perhaps in part because they were unable to agree on such future arrangements by themselves.

They took the unprecedented step of inviting members of civil society, the Internet technical community, and the private sector to join a group called the Working Group on Internet Governance (WGIG) to try to reach consensus on the next steps to be taken. Although differences of approach remained, the group firmly established that the Internet must never be handed over for governments to control. Its final report, published in June 2005, settled on the following definition of Internet governance as a power-sharing arrangement:

Internet governance is the development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.

This in turn led to the establishment of a new, multi-stakeholder organization called the Internet Governance Forum (IGF), in which representatives of civil society, industry, governments, and the Internet technical community could hold discussions on Internet policy issues on an equal footing – with none of them holding ultimate decision-making power. The IGF met for the first time in 2006, and is holding its 18th annual meeting in Kyoto this week.

Although the IGF does not have the power to make treaties or to enforce compliance with the recommendations discussed there, it is intended as a venue for addressing Internet policy issues through a collaborative, multi-stakeholder approach, rather than by handing off authority to governments to address those issues in isolation. (You can read more about the IGF and the circumstances of its formation in my 2008 book on that topic.)

Internet governance and child online safety

Unsurprisingly, online child protection was one of the first issues to be discussed at the IGF. Indeed, I’ve made the case that online child protection debates always boil down to just one of two core Internet governance issues: the surveillance of private communications over the Internet, and the censorship of previously-legal online speech.

For a time, the merits of national laws putatively aimed at protecting children were the subject of fierce contestation on the IGF’s margins, and there was even a minor scandal over an apparent attempt to limit the influence of human rights advocates in an IGF child online safety working group (“dynamic coalition”, in IGF parlance).

But it soon became obvious to discussants that on both counts – censorship and surveillance – proponents of government control will inevitably run up against the limits that international human rights norms impose to protect people from tyranny. On the censorship side, I attended and spoke at an event in Seoul in October 2019 at which a representative of the UN Office of the High Commissioner for Human Rights cautioned that art and fiction can’t be treated as equivalent to real child abuse images, and that criminalizing speech is only permitted as a last resort.

As for blanket communications surveillance, the application of international human rights law is also unambiguous. In 2015, a seminal report from the United Nations Special Rapporteur on Freedom of Expression directly considered the argument from law enforcement that anonymous or encrypted communications make it too difficult to investigate child pornography crimes. Yet the report’s recommendation was clear, and it remains unchallenged today:

States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows.

A dirty fight

Just because governments’ international human rights obligations may be clear doesn’t mean that lawmakers are always inclined to abide by them. And the harder it becomes to square proposed child protection laws with human rights norms, the more likely it is that the proponents of such laws will play dirty. So it has been with Europe’s CSA Regulation.

When the first phase of the regulation (“Chat Control 1.0”) was under negotiation in 2021, Johansson accused those standing up for communications privacy of supporting a “haven for pedophiles”, and European Parliamentarians complained that such rhetoric was being used as a form of “moral blackmail.” This time around, the same divisive rhetoric is still being employed, as in Johansson’s simplistic assessment of her own proposal, written to a Parliamentary committee:

The answer to the question ‘Who benefits’ from my proposal is: children. And who benefits from its rejection? Abusers who can continue their crimes undetected and possibly big tech companies and messaging services who do not want to be regulated.

Yet while the benefits of her regulation for children are strongly debatable, Johansson is hardly in a position to single out tech companies as beneficiaries of its rejection, having relied heavily on alliances with self-interested companies to advance her cause. The close involvement of Thorn, a U.S. surveillance tech company known for using face recognition technology to surveil adult sex workers, has aroused particular concern. Although Thorn’s commercial CSAM detection technology was developed using a share of $280 million in child protection grants, the company charges a hefty licensing fee for its use and stands to receive a windfall if the final regulation requires Internet platforms to use AI classification tools.

Thorn’s luster was somewhat tarnished last month when its founder Ashton Kutcher stepped down in the wake of controversy over his support for a convicted rapist. But the surveillance industry continues to exert influence on the European legislative process, both through links with child safety nonprofits aimed at publicly associating CSA survivors with its cause, and through the UK government-established WeProtect Global Alliance, which has sidelined civil liberties advocates.

Do what we can, not what we can’t

Put starkly, any approach to the elimination of online child sexual abuse that turns upon governments’ ability to perform mass surveillance and censorship is doomed to fail – both legally and technically. I’ve even gone so far as to argue that the war on child sexual abuse material as waged through these approaches has failed completely, that technologies to circumvent them are here to stay, and that the use of such circumventions can be a morally imperative safety valve for citizens facing government repression.

Thankfully, resisting a surveillance capitalist approach to child protection doesn’t mean doing nothing. Leaving aside both the technical and legal obstacles, even if police could detect CSAM and grooming as it happened, the limited range of responses that they can provide is simply not a comprehensive solution. Our society’s dominant framing of child sexual abuse as a criminal justice issue centers investigative powers, while blinding us to alternative approaches that foreground public health and prevention.

At a workshop that I organized at RightsCon (something of a spiritual successor to the IGF) in June 2021, a multi-stakeholder group of participants resolved, “We encourage policymakers to adopt a comprehensive approach to combating CSA that is guided by public health principles and human rights standards.” I’ve previously written an overview of what such a public health-based approach to the prevention of child sexual abuse means, and how much of it lies outside the purview of tech companies.

Even if we limit our focus to what companies can do directly, in my last article on the UK’s Online Safety Bill (now Act), I examined how tech companies are already putting in place measures for abuse prevention and early intervention on their platforms. This does include scanning for CSAM – but never across encrypted channels, and only for verified images of actual child abuse, assessed as such by an expert analyst, rather than for an open-ended category including artwork, selfies, and text conversations that AI systems deem suspicious.
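To make that distinction concrete, here is a minimal sketch of what matching against a verified database looks like, as opposed to open-ended AI classification. It is illustrative only: the names (VERIFIED_HASHES, report_match) are hypothetical, and real deployments use perceptual hashes such as PhotoDNA so that resized or re-encoded copies still match, whereas a plain cryptographic hash keeps the sketch self-contained.

```python
import hashlib

# Hypothetical database of hashes of images that expert human analysts
# have verified as depicting actual child abuse. It is curated and
# auditable, not generated by a machine-learning classifier.
VERIFIED_HASHES: set[str] = set()

def report_match(digest: str) -> None:
    # Placeholder for a privacy-protective reporting channel, with an
    # appeal procedure in case a database entry is a false positive.
    print(f"Reported match for hash {digest}")

def handle_upload(image_bytes: bytes) -> None:
    """Flag an upload only if it is a copy of known, verified material."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in VERIFIED_HASHES:
        report_match(digest)
    # Anything absent from the database passes untouched: no model
    # scores artwork, selfies, or private conversations as "suspicious".
```

The design point is that false positives can arise only from errors in the curated database itself, which is exactly why the appeal and review procedures described below matter.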

A better version of the CSA Regulation for Europe would limit itself to establishing a formal, transparent, and accountable structure for platforms to review content against a verified database of known abuse materials, including procedures for appeal in case of false positives and a privacy-protective process for reporting matches to authorities. As Johansson has correctly argued, current arrangements for reporting and analysis rely too heavily on a US government-affiliated nonprofit, NCMEC, which has no accountability to the public and next to no transparency. This has led to a lack of clarity about important issues, not least of which is the true scale of the problem after accounting for duplicate reports and false positives.

A plan for delivering an improved structure for the reporting and analysis of suspected CSAM does in fact already exist in the original Chat Control proposal, which proposes the establishment of a new EU Centre to prevent and combat child sexual abuse, and a transparent and accountable framework for it to carry out these functions. Restricted to the scope defined above, this proposal has merit and, with luck, will be the one part of the regulation that survives the inevitable excision of the provisions that overreach.

Conclusion

We can make the Internet safer for young people without resorting to measures that infringe human rights, such as mandating that tech companies conduct mass surveillance of their users. While the overwhelming likelihood is that Ylva Johansson will have her wings clipped when a much narrower Chat Control regulation is finally passed, her cavalier use of moral shaming and her willingness to offer up privacy as a bargaining chip have already set an agenda for repression that extends beyond Europe.

I’ve previously examined how the UK’s new Online Safety Act could mandate a similar mass surveillance regime there. Similar plans are also brewing in Australia, and in the United States, where a raft of state laws that undermine privacy rights is being challenged in the courts. The same toxic rhetoric is being deployed in support of these measures elsewhere too; in perhaps the worst example to date, Common Sense Media accused tech lobby group TechNet of “lobbying to allow pedophilia, bestiality, trafficking, and child sex abuse” for opposing possibly unconstitutional provisions of California bill AB 1394.

Such rhetoric only widens the divide between those derided as tech libertarians on one side, and as disingenuous authoritarians on the other. At least as far as mass surveillance proposals are concerned – even those that are earnestly advanced in the hope of protecting children – there can be no question that the “libertarians” are right. Human rights as established in international law are one of the few safeguards that the people have against law enforcement overreach, and an essential bulwark against encroaching fascism masked as concern for children.

Once the interception and surveillance of private communications has been normalized, scope creep is inevitable. There is no example of a criminal investigation power that has ever been used only against child abusers, and it is hardly plausible that the new powers proposed in the Chat Control regulation would be the first to remain confined to their stated target. Indeed, Europol couldn’t even wait for the regulation to pass before lobbying for its extension.

There is no dispute that more must be done to keep children safe online. But mass surveillance is not, and will never be, a viable solution, and it’s time for the European Union and other governments to drop the pretense that it ever will be. Governments alone do not and must not control the Internet. As such, online safety for children is not something that governments can deliver alone, or by strong-arming tech companies to act in their stead.

Rather, like governance of the Internet more broadly, working towards a safer Internet for children is a multi-stakeholder endeavour that can only succeed to the extent that it is undertaken within a framework of human rights and public health principles.


