Jeremy Malcolm

Three Guidelines for Child Exploitation Policies

One of the specialities I have developed as a trust and safety professional over the last five years is assisting platforms in developing policies that accurately and fairly distinguish child exploitation from protected expression. Many platforms find it difficult to draw this line for themselves, or end up drawing it in ways that harm minors or other vulnerable communities.

For example, until 2011 Reddit hosted a community, r/jailbait, that collected sexually suggestive photos of underage girls, and even recommended it to new users. After r/jailbait was shut down under community pressure, Reddit pivoted sharply to a zero-tolerance policy on the sexualization of minors, under which users were suddenly banned for posting tame cartoon images of teenage characters in swimsuits.

Platforms are of course entitled to draw the lines of their content policies wherever they wish. But there are consequences to drawing a line that is either too liberal or too repressive – and those consequences are typically first felt by communities at the margins. There’s no simple recipe for developing the right policy on child exploitation, as this depends on many factors. But here is a non-exhaustive list of three principles that I’ve found to be important in developing a policy that effectively targets child exploitation without over-censoring protected expression.

1. Target violating content directly, not discovery mechanisms

A recent Wall Street Journal article revealed how Instagram’s recommendation algorithm was promoting self-produced explicit content from underage users. More controversially, the article published the hashtags that could be used to surface such material. Although this was only done after Meta had blocked those tags, there is an uncomfortable possibility that the WSJ may have helped to promote the availability of self-produced illegal content on other platforms.

This raises the question: is blocking discovery mechanisms such as search terms and hashtags the right tool to combat the availability of unlawful content? In general, I don’t believe that it is. Trust and safety teams work with limited resources, and those resources should always be prioritized towards removing unlawful content at the source, rather than obfuscating the mechanisms by which it may be discovered.

Blocking discovery mechanisms inevitably results in false positives that restrict protected expression. Like other trust and safety systems, catalogues of search terms reflect stereotypes and biases against marginalized communities. For example, a list of presumptively CSAM-related search terms used by several platforms including Patreon and Pinterest was later revealed to include terms referencing legitimate LGBTQ+ content.

It’s also ineffective. When a morality campaigner took Facebook to task over the kink community hashtag #ddlg, its subsequent banning of that tag only gave rise to a plethora of alternatives such as #ddlgcommunity. Malicious users, too, can and do switch code words and hashtags on a dime. I recently counseled a Mastodon instance on hashtags that were being used to promote commercial child abuse content, and a game of cat-and-mouse ensued for some time as the abusers cycled through a series of variations and misspellings.

As soon as a search term is blocked, it signals to a malicious user that they need to switch to a different search term. But when a search comes up empty because the content simply isn’t there, the user is more likely to give up and move along. That’s the better outcome for trust and safety professionals, and for those they seek to protect.
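To make the contrast concrete, here is a minimal sketch in Python of the two approaches: matching uploads against hashes of known violating media versus maintaining an exact-match blocklist of search terms. It is illustrative only and not drawn from any particular platform’s tooling; the hash set, function names, and example terms are assumptions, and in practice platforms rely on perceptual hash matching through industry hash-sharing programmes rather than the simple cryptographic hash shown here.

```python
# A minimal, purely illustrative sketch (not any specific platform's pipeline)
# contrasting content-level removal with discovery-level blocking. The hash
# set, function names, and example terms are assumptions.

import hashlib

# Content-level approach: match uploads against hashes of known violating
# media, so the material itself is actioned regardless of how it is labelled.
KNOWN_VIOLATING_HASHES = {
    "placeholder_hash_value",  # in practice, sourced from a hash-sharing programme
}

def moderate_upload(media_bytes: bytes) -> str:
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest in KNOWN_VIOLATING_HASHES:
        return "remove_and_report"   # act on the content at its source
    return "allow_pending_review"    # fall through to normal review queues

# Discovery-level approach: an exact-match blocklist of search terms.
BLOCKED_TERMS = {"exampletag"}

def is_search_blocked(query: str) -> bool:
    return query.lower().strip() in BLOCKED_TERMS

if __name__ == "__main__":
    print(is_search_blocked("exampletag"))   # True: the exact term is caught
    print(is_search_blocked("examp1etag"))   # False: a one-character variant slips through
```

Note how the one-character variant sails straight past the blocklist, which is exactly the cat-and-mouse dynamic described above, while a content-level match holds regardless of what the material is tagged or searched as.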

2. Don’t censor art; tag and filter it

Recently I was discussing with a trust and safety colleague what could be done about sexualized cartoons of underage anime characters being posted on their platform. Should moderators have to wade into fandom debates about the canonical ages of the characters being depicted? What about the case where an artist depicts a canonically underage character, but claims to have “aged them up” to 18?

These are clearly impractical questions to ask if image moderation is to be conducted at any scale. Trust and safety teams attempting to enforce such policies end up drawing arbitrary lines between werewolves and bipedal anthropomorphic wolves, and between Pokemon with different bodily proportions. Platforms are better off banning sexualized cartoons altogether than attempting to enforce absurd policies such as these.

But while blanket bans on cartoons are an option, they too have negative impacts. For example, they have resulted in the censorship of art from abuse survivors – such as Ignatz Award-nominated author Debbie Drechsler’s comic Daddy’s Girl, which depicted her own incestuous child sexual abuse. In a 2020 Canadian court case the judge found that censorship of such works “diminishes the right to freedom of expression, particularly that of persons who wish to express in explicit terms the abuse experienced by them.”

LGBTQ+ communities are also more likely to be impacted by bans on sexualized art and fiction. Queer art in general is more likely to be read as sexualized (think of drag bans), and more likely to be falsely read as pedophilic, owing to homophobic and transphobic attitudes that link these communities with child abuse.

None of this is an argument that platforms should permit content that would be judged legally obscene, nor that they should necessarily promote or allow explicit cartoon content to be generally discoverable. But it does mean platforms should take a different approach towards moderating art and fiction than they would take towards real-life content, and err on the side of freedom of expression.

Rather than blanket bans, consider policies that require users to tag potentially offensive or triggering content. The platform can then filter out certain tags from content feeds and discovery mechanisms by default, while allowing users who wish to see them to affirmatively opt in. Twitter allows three such tags – Nudity, Violence, and Sensitive – but fan-run platforms such as Archive of Our Own offer many more, enabling users to curate their own individualized content feeds at a more granular level.
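To illustrate the approach, here is a minimal sketch of a default-off tag filter. It is a hypothetical design rather than how Twitter or Archive of Our Own actually implement theirs: posts carry content tags, posts bearing sensitive tags are hidden from feeds by default, and users can affirmatively opt in per tag.

```python
# Minimal sketch of a tag-and-filter feed for a hypothetical platform.
# Tag names, thresholds, and data structures are illustrative assumptions,
# not taken from any real platform's implementation.

from dataclasses import dataclass, field

SENSITIVE_TAGS = {"nudity", "violence", "sensitive"}  # hidden by default

@dataclass
class Post:
    author: str
    body: str
    tags: set[str] = field(default_factory=set)

@dataclass
class UserPreferences:
    # Tags the user has affirmatively opted in to seeing.
    opted_in: set[str] = field(default_factory=set)

def visible_in_feed(post: Post, prefs: UserPreferences) -> bool:
    """Show a post unless it carries a sensitive tag the user hasn't opted into."""
    hidden = (post.tags & SENSITIVE_TAGS) - prefs.opted_in
    return not hidden

def build_feed(posts: list[Post], prefs: UserPreferences) -> list[Post]:
    return [p for p in posts if visible_in_feed(p, prefs)]

if __name__ == "__main__":
    posts = [
        Post("alice", "landscape sketch"),
        Post("bob", "figure study", {"nudity"}),
    ]
    print(len(build_feed(posts, UserPreferences())))                    # 1: tagged post filtered out
    print(len(build_feed(posts, UserPreferences(opted_in={"nudity"}))))  # 2: opted-in user sees both
```

The design choice here is that the default experience is conservative, while the decision to see more rests with the individual user rather than with a blanket ban.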

3. Ensure policies on bullying, misinformation, and child exploitation reinforce each other

It isn’t only a platform’s child exploitation policy that helps to safeguard its young users. Its bullying and misinformation policies also play important roles. It isn’t widely recognized that the most common perpetrators of sexualized slurs and abuse towards young people online are other young people. This abuse frequently includes false accusations of grooming or pedophilia, which may be accompanied by reports under the platform’s child exploitation policy. As such, trust and safety teams have to be circumspect about accepting such reports at face value, recognizing that the platform’s own policies can be weaponized by warring users against each other.
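As one purely hypothetical illustration of that circumspection, a report triage step might route sudden pile-ons of reports against a single user to trained human reviewers rather than auto-actioning them. The thresholds, field names, and Report structure below are assumptions made for the sake of the sketch, not recommended values.

```python
# Minimal sketch of one way a triage queue might guard against weaponized
# reporting: bursts of reports against a single target are escalated to human
# review instead of triggering automatic enforcement. All values are illustrative.

from dataclasses import dataclass

BURST_THRESHOLD = 5          # distinct reporters
BURST_WINDOW_SECONDS = 3600  # within one hour

@dataclass
class Report:
    reporter_id: str
    target_id: str
    timestamp: float  # unix seconds

def triage(prior_reports: list[Report], new_report: Report) -> str:
    """Decide how to route a new child-safety report against a target."""
    recent = [
        r for r in prior_reports
        if r.target_id == new_report.target_id
        and new_report.timestamp - r.timestamp <= BURST_WINDOW_SECONDS
    ]
    distinct_reporters = {r.reporter_id for r in recent} | {new_report.reporter_id}
    if len(distinct_reporters) >= BURST_THRESHOLD:
        # A sudden pile-on is as consistent with brigading as with a genuine
        # incident, so escalate to a trained reviewer rather than auto-actioning.
        return "escalate_to_human_review"
    return "standard_review_queue"
```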

The same is true of misinformation policies. Trust and safety professionals, especially those who specialize in child sexual abuse prevention, can be targeted by their critics under broadly written child exploitation policies (such as Twitter’s, which prohibits “normalizing sexual attraction to minors”, or Tumblr’s, which prohibits “inappropriate content involving minors”) simply for talking about pedophilia as a mental health condition in the context of preventing child sex offending.

To put that into a real-world context, this very blog site – the one that you are reading right now – was recently blacklisted from Wikipedia on the basis of a false report suggesting that it contained improper “child-related content”. A link to the blog from my author page was removed and my editing privileges on Wikipedia were revoked. (Update September 4, 2023: read this post for an account of what happened next.)

Wikipedia has followed a similar arc to Reddit when it comes to child exploitation. Wikipedia’s laissez-faire period saw the platform hosting a euphemistically titled article on “Adult-child sex”, a catalogue of “Child pornography search terms”, and an article on a commercial Ukrainian child pornography studio that catalogued its illegal works. In 2008 Wikipedia was even blocked by the UK’s Internet Watch Foundation over an article about a classic rock album whose cover art, shown in a thumbnail, included child nudity, and its article on lolicon remains blacklisted from Google search today.

In damage control, Wikipedia adopted a (mostly) sensible child protection policy in 2010 aimed at tightening its standards, but volunteer editors have applied the policy in a naïve and over-zealous manner. The editor responsible for censoring this blog from Wikipedia participates in a forum where self-styled “pedophile hunters” seek to expunge a wide range of content that they view as problematic, and to have the associated editors banned. In addition to articles on child protection, their targets this month have included a Netflix movie poster from the article on the film Cuties, and a series of staid, monochrome medical photographs from the article on Puberty.

Online bullying and information warfare are often justified by the claimed intention of protecting children, and platforms must be alert for malicious users who leverage the platform’s policies in order to provide cover for these activities. Wikipedia is in an unusual position in that it involves untrained community members directly in moderation. But the lesson for other platforms is to recognize the intersections between policies on bullying, misinformation, and child protection, and to ensure that in enforcing one policy, they do not undermine the others.

Conclusion

Child exploitation is not protected expression. Hence there is no conflict for a platform in adopting a tough, zero-tolerance approach to child exploitation while also upholding a high level of freedom of expression for its users. With that said, developing effective policies to address child exploitation on online platforms requires careful consideration and a balanced approach. Three key guidelines can help platforms navigate this complex issue.

Firstly, targeting violating content directly rather than blocking discovery mechanisms is crucial. Trust and safety teams should prioritize removing unlawful content at its source, rather than attempting to obscure the ways in which it can be found. Blocking search terms and hashtags can lead to false positives, restrict protected expression, and perpetuate stereotypes and biases against marginalized communities. By focusing on eliminating the content itself, platforms can better protect their users.

Secondly, platforms should adopt a nuanced approach to moderating art and fiction, rather than implementing blanket bans. Arbitrary lines between different types of content can be impractical to enforce and may inadvertently censor important expressions, such as art from abuse survivors or LGBTQ+ communities. Instead, implementing tagging and filtering systems can allow users to curate their own content feeds while maintaining freedom of expression. This approach strikes a balance between protecting users and respecting their diverse creative expressions.

Lastly, platforms must recognize the interconnected nature of policies on bullying, misinformation, and child exploitation. Misuse of these policies can occur when they are weaponized by individuals engaged in online conflicts, undermining the intended goals of child protection. By ensuring that these policies reinforce each other and being vigilant against malicious users who exploit them for their own agendas, platforms can create safer and more inclusive online environments for young users.

Developing effective child exploitation policies requires ongoing evaluation, adaptation, and collaboration between platforms, trust and safety professionals, and the wider community. By following these guidelines and considering the broader implications of their policies, platforms can work towards creating online spaces that prioritize safety, freedom of expression, and the well-being of all users, particularly young people and other vulnerable communities.

