How Your Platform Can Protect Young People from Online Harms

As any parent knows, raising children can be equal parts fulfilling, daunting, and frustrating. While Internet platforms aren’t parents, their relationships with their younger users can be similarly complex. Young people may be early tech adopters, but they frequently place a greater load on trust and safety teams than adults do, and their usage is difficult to monetize. On the other side of the coin, while platforms provide young people with opportunities for creative, educational, and personal advancement, they too often do too little to protect them from obvious potential online harms.

Much ink has been spilled on the perceived dereliction of duty of major platforms towards young people, and many (often misguided) laws have been written in an attempt to nudge them into being better guardians of their youngest users’ interests. Sustained public pressure and media attention on this front have now led to the development of an emerging high-level transnational framework of expectations around how platforms should manage these online harms. For Silicon Valley tech firms, this takes the form of the California Age Appropriate Design Code Act.

But even with this guidance, it remains for platforms to develop their own operational-level policies and strategies for recognizing, preventing, and reducing foreseeable harms to young people. This article contains some high-level guidance on how to approach that endeavor, after first setting straight some popular misconceptions.

A diversity of harms and risks

There are many kinds of harm to minors that social media platforms might be held responsible for, including damage to teens’ self-esteem, bullying, self-harm, and sexual exploitation. Since my first article on this blog was a guide for platforms on detecting and removing sexually exploitative images of minors (CSAM), I’ll continue focusing here on sexual exploitation and harm to young people online. After all, that’s the usual end state of debates around tech responsibility for harm to minors, and the topic that spurs most legislative efforts.

Such harms don’t manifest in the abstract. They are the result of unmanaged risks. The 5Rights Foundation has identified a typology of risks that young Internet users face online, abbreviated as the four “C”s – content, contact, conduct, and contract. In the context of sexual harms, content risks include exposure to age-inappropriate sexual content, contact risks include unsolicited sexualized contact from adults, and conduct risks include sexualized cyberbullying, harassment and abuse. Since this article focuses on the management of user behavior, contract risks (e.g. invasive marketing) are considered out of scope here, and content risks are covered only in passing.

Preventing adult perpetration

Traditionally, most attention from trust and safety teams has been focused on adult perpetrators of abuse, and for good reason: when unrelated adults interact with young people online, especially in private channels, the intention is quite often sexual. Although sexual interest from adults in young children (“pedophilia” in the true clinical sense) is uncommon (at around 1% of the male population), sexual interest in post-pubescent teens, despite being socially taboo, is extremely common, possibly as high as 27%.

It’s important to note that interest does not translate directly into action; most child abusers are not preferentially attracted to children, and many of those who are never act on that attraction. Thus, segregating user groups based purely on signals of sexual interest is not a viable prevention strategy. Monitoring of private conversations, being a violation of privacy rights, is also not an option that most platforms should consider.

However, there are other signals of inappropriate sexual contact or conduct that platforms can use to intervene and prevent harm caused by adult perpetrators. One of the companies that has been most public about its efforts in this regard is Meta, which acts on signals of risk such as when an adult follows and attempts to message multiple unrelated young people. Some other potential strategies will be highlighted below.
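
For illustration only – and not a description of Meta’s actual systems – a minimal version of this kind of risk signal might look like the sketch below. The event format, the thresholds, and the `social_graph` helper are all hypothetical assumptions.

```python
from collections import defaultdict

# Hypothetical thresholds – a real system would tune these empirically.
FOLLOW_THRESHOLD = 3    # distinct unrelated minors followed in the window
MESSAGE_THRESHOLD = 2   # distinct unrelated minors messaged in the window


def flag_risky_adult_accounts(events, social_graph):
    """Return adult account IDs that both follow and attempt to message
    multiple unrelated minors within the observation window.

    `events` is an iterable of dicts such as:
        {"actor": "adult_123", "target": "teen_456", "type": "follow"}
        {"actor": "adult_123", "target": "teen_789", "type": "message_request"}
    and is assumed to be pre-filtered to adult-to-minor interactions within
    the window. `social_graph.are_connected(a, b)` is a hypothetical helper
    indicating an established relationship (e.g. family), which is skipped.
    """
    follows = defaultdict(set)
    messages = defaultdict(set)

    for event in events:
        if social_graph.are_connected(event["actor"], event["target"]):
            continue  # ignore contacts with an established relationship
        if event["type"] == "follow":
            follows[event["actor"]].add(event["target"])
        elif event["type"] == "message_request":
            messages[event["actor"]].add(event["target"])

    return {
        actor
        for actor in follows
        if len(follows[actor]) >= FOLLOW_THRESHOLD
        and len(messages.get(actor, set())) >= MESSAGE_THRESHOLD
    }
```

Accounts surfaced by a heuristic like this would normally be rate-limited or routed to human review, rather than actioned automatically.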

Preventing peer on peer perpetration

It is often assumed, even by professionals, that predatory adults are the prime perpetrators of sexual harm against young people. But in reality, the peak age for sexual offending is just 14, and over a third of cases of sexual violence against children are perpetrated by other children. Although these statistics relate to hands-on abuse, other types of sexual harms experienced by minors, such as sexualized bullying and harassment, are also predominantly perpetrated by peers rather than by adults.

A young person’s own behavior can also raise a risk of future harm. According to the most recent statistics from the Internet Watch Foundation, over 70% of known sexualized content of minors found online is self-produced, typically by girls aged between 11 and 13. When puberty hits and a young person begins to experience a sense of sexual selfhood, seeking external validation of these feelings is extremely common. Young people also frequently misrepresent their age in order to seek out sexual interactions with those who are older.

Trust and safety professionals therefore need to be conscious that their responsibility extends beyond protecting their platform’s young users from adult predators: it also includes protecting them from their own risky behaviors, and from harmful interactions with peers. Apple’s introduction of a feature that detects attempts to share nude selfies and diverts the user to a warning screen is one example of an intervention geared at addressing risky behaviors by young people. Other examples will be given below.
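
To make the shape of such a friction-based intervention concrete, here is a minimal “detect and warn” sketch. It is not Apple’s implementation; the classifier and warning screen are hypothetical placeholders, and the threshold is an assumption.

```python
NUDITY_THRESHOLD = 0.9  # assumed cutoff for the illustrative classifier


def nudity_score(image_bytes: bytes) -> float:
    """Placeholder for an on-device classifier returning a 0–1 nudity likelihood."""
    raise NotImplementedError


def show_warning_and_confirm() -> bool:
    """Placeholder UI step: explain the risks, offer help resources, and ask
    the young person to confirm whether they still want to send the image."""
    raise NotImplementedError


def handle_outgoing_image(sender_is_minor: bool, image_bytes: bytes) -> bool:
    """Return True if the image should be sent, False if it was withheld."""
    if sender_is_minor and nudity_score(image_bytes) >= NUDITY_THRESHOLD:
        # Divert to a warning screen instead of sending silently.
        return show_warning_and_confirm()
    return True
```

In this sketch the decision stays with the user: the flow adds friction and information rather than blocking or reporting.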

Managing misinformation harms

As my previous article described, the online child safety sector is absolutely rife with misinformation, which has enabled the widespread weaponization of false smears of “grooming” and “pedophilia.” Although not widely recognized as such, this is also a form of sexualized harassment and abuse. Young people themselves are frequently its targets, creating a risk of harms such as obsessive-compulsive disorders and suicidality. Platforms need to be just as intentional about how they manage these risks as they are about managing risks of real sexual abuse.

The first wake-up call for platforms about the need to crack down on the weaponization of false child exploitation rhetoric came with the rise of QAnon commencing in 2017, to which they belatedly but eventually responded. Since then, platforms such as Reddit, TikTok, and Meta have again responded to misinformation about child exploitation by classifying the use of the term “groomer” in reference to LGBTQ+ people as hate speech.

Conduct a risk assessment

Three distinct categories of harmful behaviors have been described above – those perpetrated by adults on young people, those perpetrated by young people on each other, and harms caused by the abusive weaponization of child exploitation rhetoric. The exposure of a given platform to risks of each of these kinds of harms depends on many factors, and preparing risk assessments based on a platform’s goals, its individual market niche, and its risk profile is one of the specialist services that I offer as a consultant.

With that said, there are some good, openly available starting points for any platform that wishes to engage in this process itself. The Digital Trust & Safety Partnership’s Best Practices Framework comes especially recommended. The California Age Appropriate Design Code Act also contains a template for engaging in this sort of risk analysis.

The Act requires a platform to write a data protection impact assessment (DPIA) for each of its products – that is, a systematic survey to assess and mitigate risks to children that arise from its data management practices. The platform is also required to “create a timed plan to mitigate or eliminate the risks” identified in the DPIA. Factors that it is required to consider are whether the design of its products could:

  • Expose children to harmful, or potentially harmful, content.
  • Lead to children experiencing or being targeted by harmful, or potentially harmful, contacts.
  • Permit children to witness, participate in, or be subject to harmful, or potentially harmful, conduct.
  • Allow children to be party to or exploited by a harmful, or potentially harmful, contract.
  • Include algorithms or targeted advertising systems that could harm children.
  • Increase, sustain, or extend use of the online product, service, or feature by children, including the automatic playing of media, rewards for time spent, and notifications.
  • Over-collect or process sensitive personal information of children.
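
One lightweight way to operationalize this checklist is to encode the statutory factors as a structured template that every product review must complete. The sketch below is a hypothetical scaffold for tracking coverage and open risks, not a compliance tool; the keys and fields are my own paraphrases.

```python
from dataclasses import dataclass, field

# The DPIA factors from the list above, paraphrased as short keys.
DPIA_FACTORS = [
    "harmful_content_exposure",
    "harmful_contacts",
    "harmful_conduct",
    "harmful_contracts",
    "harmful_algorithms_or_targeted_ads",
    "extended_use_features",      # autoplay, rewards for time spent, notifications
    "overcollection_of_personal_information",
]


@dataclass
class FactorAssessment:
    risk_identified: bool
    description: str
    mitigation: str = ""          # the "timed plan" step for this factor
    deadline: str = ""            # e.g. "2025-Q4"; format is illustrative


@dataclass
class DPIA:
    product: str
    assessments: dict = field(default_factory=dict)  # factor key -> FactorAssessment

    def unassessed_factors(self) -> list:
        """Factors the review has not yet covered."""
        return [f for f in DPIA_FACTORS if f not in self.assessments]

    def open_risks(self) -> list:
        """Factors where a risk was identified but no timed mitigation recorded."""
        return [
            key for key, a in self.assessments.items()
            if a.risk_identified and not (a.mitigation and a.deadline)
        ]
```

In this illustrative workflow, a review would instantiate a `DPIA` for each product, fill in an assessment per factor, and treat a non-empty `unassessed_factors()` or `open_risks()` list as a blocker before launch.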

Although the law has been robustly criticized by some experts as a paternalistic imposition on platforms, the questions that it poses are the right ones for platforms to be asking anyway, and developing solid answers to them is a good discipline for a platform wishing to be mindful of the safety of its youngest users.

Policies support strategies, strategies support goals

With its risk assessment in place, a platform must consider the strategies it wishes to adopt to reconcile those risks with its goals and objectives as an organization. Once again, this is an individualized determination that will depend on factors such as the platform’s size, its risk profile, and the jurisdictions in which it operates.

Although space does not permit an exhaustive treatment of all possible strategies that a platform might adopt for protecting young users from harm, here are some high-level points to consider:

  • Product design. Safety by design is the concept that user behaviors can be most effectively shaped not through penalties and rewards, but by designing the product in such a way that safe choices are the default. Lawrence Lessig’s notion of code as law is a similar concept. A relevant example of this is the choice of whether it should be technically possible on a platform for an adult user to send a direct message to a young person – and if so, under what conditions (a minimal sketch of this kind of gating appears after this list).
  • Policy design. Policies are decision-oriented documents that align with and advance the platform’s strategies. An example is the platform’s written policies on what amounts to child exploitation – Meta’s policies providing perhaps the most exhaustive example of any major platform. Another example of a platform policy to reduce online harms to minors would be whether age assurance is required in order for a user to access certain platform features.
  • Education and support. Some platforms, such as Pornhub (link is safe for work!), provide their users with information on sexual health and wellness, including information on practicing consent. Taking this a step further, Pornhub, along with other platforms such as Facebook, Twitter, and Pinterest, proactively surfaces links to external support organizations such as Stop It Now in response to signals that a user may be considering harmful behaviors.
  • Enforcement. One of the lowest-hanging fruits of online safety for young people is to make it easy for them to report unwelcome contact and conduct. A best practice is to ensure that a reporting mechanism is directly accessible as an option in the message or content container itself – rather than requiring the user to navigate to a support page on the platform’s website.
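
As a concrete (and purely hypothetical) illustration of the product design and enforcement points above, the sketch below combines a default-deny rule for adult-to-minor direct messages with an inline report action surfaced on every message a minor receives. The field names, the specific rule, and the action list are assumptions, not any platform’s actual policy.

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    user_id: str
    age_assured_minor: bool   # outcome of whatever age assurance process the platform uses
    accepted_connections: set = field(default_factory=set)  # user IDs this account has accepted


def may_send_direct_message(sender: Account, recipient: Account) -> bool:
    """Default-deny rule: an adult may only message a minor who has already
    accepted a connection with them. All other messaging is allowed."""
    if recipient.age_assured_minor and not sender.age_assured_minor:
        return sender.user_id in recipient.accepted_connections
    return True


def message_actions(recipient: Account) -> list:
    """Actions rendered directly on each received message, so that reporting
    is one tap away rather than buried in a help center."""
    if recipient.age_assured_minor:
        return ["report", "reply", "block"]   # reporting surfaced first for minors
    return ["reply", "report", "block"]
```

In safety-by-design terms, the important property of this sketch is the default: adults cannot initiate contact with minors unless the minor has opted in, and reporting never requires leaving the conversation.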

Conclusion

For a topic that is often sensationalized, it can be difficult to bring a level head to the prevention of online harms to young people, particularly when those harms are sexual in nature. But in order to effectively address the problem, it is necessary to understand it accurately, and to respond proportionately and advisedly. This can mean “unlearning” a host of false assumptions that drive public discourse on the sexual abuse of young people, including as to the very identity of its perpetrators and victims.

The steps that I recommend for a platform seeking to develop strategies to protect young people from online harms, and that I follow in my own work, have been laid out above. First, develop a basic understanding of the landscape that is dispassionate and evidence-informed. Second, conduct a risk assessment that addresses the range of possible risks of harm to which young people may be exposed through the use of the platform. Third, develop strategies that address those risks and support the platform’s goals – taking care to consider product and policy design, education and support, and effective enforcement.

As a closing word, platforms should be especially mindful that even high-profile actors in the child safety field often come to it with complex motivations that can excessively emphasize carceral and regulatory approaches, while deemphasizing and even stigmatizing the work of prevention professionals. Having developed a deep understanding of this space and the actors who frequent it, I specialize in cutting through misinformation and providing impartial, evidence-based advice as part of my consulting service. Please feel free to reach out for a consultation.
