Four Proposed Child Safety Laws, Four Approaches

The U.S. EARN IT Act, Britain’s Online Safety Bill and its stateside counterpart the Kids Online Safety Act, a forthcoming European regulation combating child sexual abuse online, and a proposed United Nations convention on cybercrime are among the current proposals for new legal instruments addressing child safety online at the national, regional, and global levels. While these proposals share the same general goal on the surface, they take very different approaches to achieving it.

Breaking these approaches down into four categories – criminal justice, intermediary liability, safety by design, and enforcement of public morality – provides an instructive framework for analyzing these and other legislative efforts. It also sheds light on the extent to which these measures are likely to achieve their stated goals, or may instead have adverse consequences for human rights online.

Criminal justice: Europe’s Chat Control 2.0

By far the dominant approach to child safety online is to deal with it as a criminal justice issue. When children are harmed online, a perpetrator must be found and held accountable. It’s easy for lawmakers to communicate this approach to the public, it’s popular with the electorate, and it benefits a politically powerful constituency of law enforcement agencies and contractors.

Among the proposals mentioned above, the criminal justice approach is most closely exemplified by Europe’s draft Regulation combating child sexual abuse online. Activists have termed this regulation Chat Control 2.0, in reference to the existing voluntary regime, known as Chat Control 1.0, established in 2021. A key feature of both measures is to allow (version 1.0) or require (version 2.0) Internet platforms to act as the eyes and ears of the police by scanning private communications for evidence of online child sexual abuse, with a clearinghouse established to receive and process the resulting reports.

A significant limitation of the Chat Control 2.0 regulation is that mandating mass surveillance to uncover perpetrators of child sexual abuse is a human rights minefield. Although flagging known, previously identified images of child sexual abuse is a proportionate step and a well-established best practice for platforms, Chat Control 2.0 goes further and could require platforms to use AI tools to scan and classify private chats and photographs, which will inevitably generate false positive matches that could imperil innocent people. An initial set of amendments proposed in February 2023 would carve encrypted communications out of the scope of the regulation, which on the one hand addresses a key privacy concern – but on the other hand also exposes an obvious limitation of the regime’s potential effectiveness as an investigative tool.
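
To make the false positive concern concrete, here is a minimal Python sketch. It is purely illustrative: the hash values, scanning volumes, prevalence, and error rates below are hypothetical assumptions, not figures from the draft regulation or any real scanning system. It contrasts matching uploads against a list of known, verified image hashes (which can only flag material that has already been identified) with classifier-style scanning of everything, where even a small error rate produces a very large number of innocent items being flagged.

```python
import hashlib

# Hypothetical list of hashes of previously identified and verified images.
# Real systems use perceptual hashes (e.g. PhotoDNA) rather than SHA-256,
# but the principle is the same: a match can only occur against material
# that has already been reviewed by a human.
KNOWN_IMAGE_HASHES = {
    "placeholder-hash-1",
    "placeholder-hash-2",
}

def matches_known_image(image_bytes: bytes) -> bool:
    # Lookup against the known-image list: no match is possible for new content.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_IMAGE_HASHES

def expected_false_positives(items_scanned: float,
                             prevalence: float,
                             false_positive_rate: float) -> float:
    # Classifier-based scanning scores every item and flags anything above a
    # threshold. Even a low error rate flags huge numbers of innocent items
    # when the volume scanned is enormous and genuine abuse is rare.
    innocent_items = items_scanned * (1.0 - prevalence)
    return innocent_items * false_positive_rate

if __name__ == "__main__":
    sample = b"example image bytes"
    print("Matches known image:", matches_known_image(sample))  # False: not in the list

    # Hypothetical scale: one billion items scanned per day, one-in-a-million
    # prevalence of abusive material, and a 0.1% false positive rate.
    flags = expected_false_positives(1e9, 1e-6, 1e-3)
    print(f"Expected innocent items flagged per day: {flags:,.0f}")  # about 1 million
```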

A broader concern with treating child safety online as a criminal justice issue is that it is reactive: it waits for harm to be done before action is taken, and that often means action is never taken. Nor is there always an identifiable perpetrator when harm is done to children online. For example, as my previous article explained, sexualized cyberbullying can have serious consequences for children, often leading to self-harm and, in some tragic cases, to suicide. Yet these cases are not easily addressed through the criminal justice system. So too, when children unwisely share sexualized content online, simply arresting those who further share that content is at best an indirect way of addressing the core problem.

Intermediary liability: the EARN IT Act

An alternative approach to addressing child safety online is to use the instrument of intermediary liability, on the theory that what the criminal justice system can’t adequately address, perhaps the free market might. The best example of this approach in the current slate of proposed laws is the EARN IT Act, which would pierce the liability shield that 47 U.S.C. §230 (colloquially “Section 230”) provides U.S. Internet companies, exposing them to private lawsuits and state criminal prosecutions over child sexual abuse material (CSAM) on their platforms.

Platforms already have a responsibility to remove and report CSAM when they discover it, so the bill’s sponsors’ narrative that platforms currently enjoy immunity for such content is inaccurate. However, it is probable that many would adopt stronger safeguards to prevent such material from being uploaded in the first place if they faced legal liability for recklessly or negligently failing to eliminate it promptly. On the other hand, civil society groups fear that these same incentives will also drive platforms to crack down on much legitimate, constitutionally protected expression.

A previous application of the intermediary liability approach to child safety, FOSTA/SESTA, bears out these concerns. Although originally promoted as a measure to address child sex trafficking, in practice the law has failed to assist in the prosecution of sex trafficking crimes, while stifling the voices of marginalized communities such as LGBTQ+ people, sex workers, and, most ironically, child sexual abuse prevention professionals. Today, FOSTA/SESTA remains under constitutional challenge, and a law to evaluate its impacts on sex workers has been proposed.

Obviously, platforms do have a key role to play in addressing child safety online. But there are better ways to secure that role than the blunt instrument of liability for user-uploaded content. Market mechanisms may work well for balancing supply and demand, but exposing companies to uncertain liability through a multitude of private lawsuits and state laws is a clumsy way to pursue more complex social goals.

Safety by design: the Online Safety Bill and KOSA

Two of the legislative proposals mentioned above take an approach that prioritizes safety by design: Britain’s Online Safety Bill and the U.S. Kids Online Safety Act. Although they differ in detail, both would encourage platforms to proactively assess the risks that their services create for children online, and to develop systems and processes that address those risks. Conceptually, this approach has elements in common with the blanket intermediary liability approach (and can carry some of the same risks), but it is less of a blunt instrument.

In its initial draft, the Online Safety Bill was criticized for imposing liability on platforms for an ill-defined category of legal but harmful content. A January 2023 revision narrowed the bill so that platforms will only be required to act on illegal content and content that violates their own terms of service, and a right of appeal must be provided when content is removed. Larger platforms must also provide users with tools to filter out potentially harmful content (including pornography), and these filters must be enabled for children by default. However, civil liberties groups have expressed continued concern that the law still gives the government too much power to determine what content is harmful to children, and that it effectively mandates age verification, which will reduce online privacy.

Similarly, under the Kids Online Safety Act (KOSA), platforms are required to take “reasonable measures” to prevent and mitigate harms to child users, including mental health disorders, cyberbullying and harassment, and sexual exploitation and abuse. Like the Online Safety Bill, KOSA underwent revision in response to criticisms of an earlier draft, narrowing its scope so that it would apply only to content recommendations and would not require platforms to filter content that users specifically search for. But concerns remain that it will be too difficult for platforms to determine what recommended content may have harmful impacts on young users, and that this creates an incentive for over-censorship.

Enforcement of public morality: the Convention on Cybercrime

If companies are poorly incentivized to protect children, the incentives that drive lawmakers in this space are little better. Because child safety is such an emotive issue, politicians do more to promote their own reelection by passing laws that appeal to public sentiment than by passing laws guided by evidence.

One of the most insidious effects of this is that child protection is often wrongly conflated with the enforcement of public morality. Current examples in the United States include the spate of populist red state laws and district policies that restrict public drag performances, LGBTQ+ inclusive library books, and sex education curricula, without any evidence that these are harmful to children, and often in the face of contrary evidence. Rather than protecting children, these laws are designed to enforce conservative sexual norms that were never truly “normal” – norms that simply appeared to be so because deviations from them were criminalized, stigmatized, and suppressed.

At the global level, a less obvious but similarly pernicious example of a morality-based approach can be found in an obscure point of negotiation around the proposed United Nations Convention on Cybercrime. Although the convention covers a broad range of “cyber-enabled” crimes, the point at issue is whether fiction and art that depict imaginary persons under 18 years of age in a sexual context should be treated as CSAM. The January 2023 draft text does so, while allowing individual countries to reserve the right to opt out of criminalizing fictional materials.

Although this might be assumed to be an arcane point of interest mainly to lawyers, whether criminal laws against CSAM extend to fictional content actually cuts to the very heart of what those laws are for: are they there to protect real children who are victimized in those images, or to enforce public morality? Although some challenge this duality by suggesting that the criminalization of fictional materials serves to prevent real-life crimes, there is currently no empirical evidence that supports this contention, and some that supports the opposite conclusion.

The harms of adopting such an expansive morality-driven approach are more tangible, however. Doing so diverts law enforcement resources away from real child abuse cases and towards the prosecution of victimless obscenity crimes, very often against members of marginalized communities. As the case of drag clearly illustrates, queer self-expression is more often read as sexual than equivalent straight self-expression, and also more often falsely read as pedophilic. For example, human rights organizations have drawn attention to the overzealous prosecutions of a 17-year-old Costa Rican girl and a Russian trans woman simply for posting their artwork online.

To be clear, platforms can and should play a role in mediating the availability of art and fiction that transgresses social norms, much in the same way that they do with other forms of adult content. I have worked with and advised platforms that allow such content with appropriate age gates and warnings, as well as those that disallow it altogether. Both approaches are valid, and both are preferable to the use of criminal law to regulate such content. As popular as laws for the enforcement of public morals may be, these are not child safety laws, and should not be treated by lawmakers as such.

What’s missing: harm prevention and reduction

Notably absent from all of the approaches described above is one that recognizes that child safety online begins with what happens offline. Interventions such as comprehensive sex education for children, and media literacy education for them and their parents, can equip them to navigate the online environment more safely – recognizing that there will always be rogue platforms that aren’t safe for them.

Similarly, stigma-free access to social services, including but not limited to professional and peer-support mental health services for young people and adults alike, can catch potentially problematic behaviors early and avert their harmful manifestation online in forms such as cyberbullying, grooming, and trafficking.

Internet platforms alone cannot be expected to take responsibility for these broader social interventions, which require investment in a holistic, public health-based approach. In the United States, the Invest in Child Safety Act (not to be confused with the Kids Online Safety Act) would have injected $5 billion into both law enforcement and crime prevention measures, but it attracted little attention from lawmakers and has not yet been reintroduced in the 118th Congress.

In Europe, the Chat Control 2.0 legislation has been supplemented by EU funding for projects focused on prevention, including early intervention among populations sexually attracted to children. Unfortunately, this has collided head-on with the public’s preference for a criminal justice approach to child safety, causing a reactionary backlash against the terminology used by the professionals leading such interventions.

Conclusion

Four approaches that underlie current legislative efforts to ensure child safety online have been explored above, and one key omission has been highlighted. To some degree, these approaches can be complementary: safety by design, for example, goes some way towards reducing online risks to children, and can work hand in hand with a criminal justice response in those cases where such safeguards fail.

On the other hand, the approaches highlighted above can also undermine each other. In particular, laws based on public morality, and intermediary liability regimes that incentivize platforms to over-remove content about sex, can make children less safe by making accurate sources of information and support less accessible.

Trust and safety professionals conduct their work within a system of laws, so it’s important for them to involve themselves in public policy debates as lawmakers charge forward, to influence the development of these laws in a positive direction. Key representatives of the profession, such as Farzaneh Badiei of the Digital Trust and Safety Partnership and Andrew Puddephatt of the Internet Watch Foundation, to name just two, have human rights and global governance backgrounds, and have been advocating in appropriate forums for policies that are balanced, evidence-based, and rights-respecting.

My vision for trust and safety as a profession is that we should shift away from a reactive approach centered on hiding and blocking unsafe or toxic content, and towards an approach based on public health principles that proactively addresses the conditions that attract such content to our platforms in the first place. Such an approach identifies and addresses risk factors that can make our platforms unsafe for children, builds strength and resiliency in our user communities, promptly intercepts and mitigates harm when it is detected, and is situated within a human rights framework.


