This is the week of the RightsCon conference that wasn’t. RightsCon, the leading international Internet and human rights conference organised by civil society group Access Now, was cancelled by the Zambian government just five days before it had been due to commence. The roundtable that my Center for Online Safety and Liberty (COSL) had organised to present the Drawing the Line Watchlist—part of the conference’s Futures, Fictions, and Creativity track—was just one among hundreds of sessions affected.
The cancellation has been attributed to pressure from China over the participation of Taiwanese delegates at the event. To that extent, both China and Zambia share responsibility for this deplorable act of censorship. But focusing too narrowly on these actors risks missing the larger story. Around the world, governments—including those that present themselves as liberal democracies—are converging on a common approach: expanding state power over online expression in the name of safety, particularly child protection, with consequences that increasingly resemble the very authoritarian practices they claim to oppose.
Why RightsCon, and why child safety?
My first RightsCon was held in Manila in 2015, when the event was a tenth of its present size. My advocacy then focused on intellectual property (IP) law, because at that time most proposals for laws that would repress online rights and freedoms were squarely framed to promote the interests of IP owners. Many of the participants at early RightsCon events helped bring about a watershed moment when the momentum of such “IP maximalist” policies was brought to an end.
That turning point came just a few years earlier, in 2011–2012, with the collapse of the proposed Stop Online Piracy Act and PROTECT IP Act in the United States. These bills would have enabled sweeping measures against alleged copyright infringement, including site blocking, search de-listing, and cutting off payment and advertising services. What had once seemed like an unstoppable push by rights holders unraveled almost overnight in the face of an unprecedented online backlash. Civil society groups such as the Electronic Frontier Foundation and Public Knowledge, alongside major platforms including Wikipedia and Google, mobilized millions of users to protest—most visibly through a coordinated Internet “blackout” in January 2012. Within days, political support for the bills collapsed, and with it, the broader legitimacy of overtly IP-driven efforts to regulate the Internet.
So governments swiftly executed a pivot: from the mid-2010s, child protection would be the new pretext for introducing laws to repress online rights and freedoms. This was a politically effective shift: few are inclined to question laws that purport to protect children, and those who do are easily besmirched by policymakers as promoting the Internet as a haven for pedophiles. I’ve previously written about how such rhetoric was employed at the 2017 meeting of the Internet Governance Forum (IGF)—RightsCon’s United Nations-hosted cousin—in support of the laws FOSTA-SESTA, which were sold as a necessary tool to combat trafficking but in practice gutted longstanding intermediary protections under Section 230 and pushed platforms into sweeping, risk-averse censorship of lawful speech, often to the detriment of the very communities the laws claimed to protect.
It was this moment that marked the transition of my own advocacy away from IP laws and toward child safety laws, a focus I now carry forward through COSL, which starts from a simple premise: restricting online rights and freedoms is neither sufficient nor necessary to keep children safe; on the contrary, online safety is inseparable from online liberty. In confronting repressive child safety laws, COSL focuses on two fronts where digital rights are under sustained pressure: defending online privacy, and upholding freedom of expression.
Defending online privacy
A major current threat to online privacy is age verification legislation, which requires users to disclose sensitive personal information, such as government identification documents or persistent age credentials tied to their devices or online accounts, before they can access adult content (or, in some cases, any social media content) online. These laws create centralised stores of data that can be breached, abused, subpoenaed, or repurposed for surveillance. Meanwhile, children themselves routinely circumvent these systems using Virtual Private Networks (VPNs), borrowed credentials, or migration to less regulated platforms, often placing them in less safe online environments.
In parallel, lawmakers are seeking enhanced powers to conduct surveillance of private communications, such as through the European Chat Control proposal. Although putatively focused on child sexual abuse material (CSAM), the powers being sought are not limited to known CSAM: they would also authorise the use of AI classifiers to sift through fictional, artistic, and text-based communications and, even more concerningly, would expose intimate images exchanged by dating teens to the prying eyes of platforms and law enforcement.
The European Parliament recently drew a line in the sand here, and was broadly criticised by child safety groups and platforms for compromising children’s safety by taking this stand—but it was right to do so. Blanket surveillance of private communications, coupled with mandatory age verification, is not improving online safety but rather pushing users to less safe online spaces such as the Tor network, as a last-resort form of personal privacy protection. Completing a trifecta of bad policy proposals, regulators are also toying with circumventing end-to-end encryption and regulating VPN services.
The result is a self-reinforcing cycle: the more governments seek to eliminate anonymity and private space online, the more users are pushed toward tools and platforms designed to resist surveillance altogether. Policies introduced in the name of child safety are thus reshaping the architecture of the Internet itself—not into a safer environment, but into one defined by escalating monitoring, shrinking private space, and growing suspicion of ordinary human communication. And once states acquire these powers in the name of protecting children, the pressure to extend them beyond that context becomes difficult to resist.
Upholding freedom of expression
The second front of repression that is reshaping the Internet—and the one that would have been the subject of COSL’s RightsCon session this week—is the expansion of censorship around sexual content. To be clear, this isn’t about abuse content such as CSAM and non-consensual deepfakes—which are already unequivocally illegal and rightly so. Rather, it is about the expansion of censorship to include content depicting (or even just describing) sexual kinks and fetishes, LGBTQ+ self-expression, and other victimless sexual content.
For example, the United Kingdom just passed new amendments criminalising depictions of lawful BDSM activities, beginning with erotic strangulation—the least contentious starting point, given the acknowledged health risks of that activity—then swiftly extending them to faux-incest and ageplay. On April 20 this year, UK police arrested an artist for possessing prohibited images of children, which were nothing other than her own artwork of a character drawn in the childlike Japanese “chibi” style. A week later, an Australian court upheld the conviction of author Lauren Mastrosa over a fictional, ageplay-themed novel.
COSL’s cancelled RightsCon workshop would have critically addressed this trend: the steady collapse of the line between content that documents real abuse and content that is fictional, simulated, stylised, or expressive. That line matters. Once the law ceases to require a real victim, the category of “abuse material” becomes elastic enough to absorb drawings, stories, fantasies, queer self-representation, kink communities, and other forms of sexual expression that lawmakers find distasteful or politically convenient to target.
This is not a marginal problem. It is the same logic that has driven attacks on drag performance, school libraries, sex education, online fandoms, and LGBTQ+ speech: redefine sexual expression as child endangerment, then punish those who defend it as though they were defending abuse. The result is not a safer Internet for children. It is a more censorious Internet for everyone, where the protection of real children is invoked to justify the policing of imagination, identity, and desire.
An attack on civil society
RightsCon is not merely a conference; it is infrastructure for civil society participation in Internet governance. The Net Rights Coalition, in a statement co-signed by COSL and over 130 other digital rights stakeholders, called the cancellation a “[missed] opportunity to demonstrate a strong commitment to preserving the multistakeholder model, a key feature of global digital governance.” As previously explained, it has long been accepted that governance of the Internet is a shared responsibility that includes governments, the private sector, and civil society in their respective roles.
One of the roles performed by civil society—that is, by organizations and grassroots movements that advance public interest objectives—is to act as a watchdog on government and the private sector alike, when their actions threaten international human rights. COSL’s Drawing the Line Watchlist examined and, where appropriate, called out 10 countries for failing to recognise the unique harm done to survivors of actual sexual violence when that harm is conflated with merely offensive online content. Other civil society watchdogs, such as CLARA Standards, hold tech companies to account for misuse of the power they hold over children’s digital lives.
Another of civil society’s unique roles is to articulate norms on emerging issues, as COSL had been planning to do by launching the Drawing the Line Principles following our roundtable. Other groups that would have presented at RightsCon also had useful normative contributions to make on child safety issues, challenging the official narrative that places surveillance and censorship in central positions. For example, digital violence prevention group HateAid would have spoken about its excellent recent report Safety by Design: Pathways to Safer Social Media Platforms, which contains 214 actionable recommendations for privacy-protective platform design choices.
Civil society also plays a unique role in preventing and responding to online abuse. For example, groups like Stop it Now! intervene with those who have sexual thoughts about children to offer them anonymous support that can prevent them from harming others. Groups like Cyber Harassment Support provide safety training, digital dignity advocacy, and end-to-end support for victims of sextortion, online sexual blackmail, deepfake threats, and other forms of online abuse.
Forums such as RightsCon have become an essential platform for these and hundreds of other civil society groups, which lack power and funding compared to other stakeholders. RightsCon’s cancellation therefore operates as a direct attack on civil society’s already limited capacity to call out government overreach, to support those who are marginalised and harmed, and to promulgate better solutions for online safety that are compatible with international human rights.
Conclusion
RightsCon’s cancellation will no doubt be remembered as a diplomatic incident, and rightly so. But it should also be remembered as a warning about the fragility of the spaces in which civil society is still able to contest the direction of Internet governance. When governments can prevent those spaces from convening, they do not merely silence particular speakers or derail particular sessions. They weaken the capacity of civil society to resist the normalization of surveillance, censorship, and control.
The irony is that the cancelled conference has only made its themes more urgent. The same authoritarian impulse that shut down RightsCon is present in the online safety laws now advancing across jurisdictions: the insistence that governments must be trusted with greater powers to monitor private communications, verify identities, restrict access to information, and suppress disfavoured expression. These powers are almost always introduced with reassuring language and sympathetic examples. But once established, they rarely remain confined to their original purpose.
COSL’s roundtable did not take place in Lusaka this week. The Drawing the Line Principles were not launched in the room where we had hoped to launch them. But the work continues. If anything, RightsCon’s cancellation has clarified the stakes. Civil society cannot afford to retreat from these debates merely because governments have learned to dress repression in the language of safety. We must insist on an Internet where children are protected, survivors are heard, privacy is preserved, and freedom of expression is not treated as collateral damage. That is the line we came to RightsCon to draw—and it is the line civil society must keep drawing, whether governments permit us to gather in the room or not.
This article is illustrated by a photograph from RightsCon 2025 in Taipei, taken by Wang Yu Ching / Office of the President.