Jeremy Malcolm

How Your Platform Can Find And Report CSAM

Sooner or later, every Internet platform that allows user-generated content will have to deal with child sexual abuse material (CSAM) being uploaded.

Being faced with this problem doesn’t necessarily mean that your platform has a seedy or abusive community. Research conducted by Facebook in 2021 revealed that most of the CSAM found on its platform had not been shared with malicious intent. But whatever the cause, CSAM isn’t something that you want on your platform for any length of time, due to the legal and reputational risks that it creates for your business – and more importantly, the harm that it causes to those it depicts.

So, how do you find and remove CSAM quickly – and who should you report it to? This post describes my experiences in answering these questions for several U.S.-based platforms. If you are based outside of the United States, your mileage may vary, and you should watch for upcoming articles on laws and practices in Canada, Europe, and Australia. Non-image-based forms of online child exploitation, such as live-streamed abuse and sexual grooming, are also out of scope for this post.

Finding CSAM

The best way to locate CSAM on your platform depends on a number of factors including its size, its user demographics, its policies on adult content, and the virality and discoverability of user content.

User reporting is the default mechanism that many platforms rely upon to surface unwanted content. Legally, that approach works for CSAM also – platforms aren’t obligated to proactively search for it (18 U.S. Code § 2258A(f)), and aren’t liable for content that they didn’t know about. User reporting may therefore be the right approach for a platform that is small, has a predominantly professional or technical user base, and doesn’t publish a stream of user content. But this option only works if your platform has an engaged team of moderators who will act on reports swiftly – because once you know that CSAM exists on your platform, your obligation to take action arises (as discussed below).
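
If you do rely on user reporting, make sure that a report of suspected CSAM can never languish behind routine spam or harassment reports. As a minimal sketch (with hypothetical category names and priorities), a moderation queue might always surface those reports first:

```python
# Minimal sketch: a priority queue that always surfaces suspected-CSAM
# reports before everything else. Category names and priority values are
# hypothetical examples, not a recommended taxonomy.
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timezone

PRIORITY = {"csam": 0, "violence": 1, "spam": 2, "other": 3}

@dataclass(order=True)
class Report:
    priority: int
    submitted_at: datetime = field(compare=False)
    content_id: str = field(compare=False)
    category: str = field(compare=False)

queue: list[Report] = []

def submit_report(content_id: str, category: str) -> None:
    heapq.heappush(queue, Report(
        priority=PRIORITY.get(category, PRIORITY["other"]),
        submitted_at=datetime.now(timezone.utc),
        content_id=content_id,
        category=category,
    ))

def next_report() -> Report | None:
    # Moderators always pull the highest-priority report first.
    return heapq.heappop(queue) if queue else None
```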

General social media, gaming, and dating platforms are among those that have a higher risk profile, and are advised to do more to proactively surface CSAM, especially as they begin to scale. There are two complementary technologies that can be employed to do this – machine learning (AI) and hash matching. Once again, the best choice between these options depends on a number of factors, and in particular on whether the platform allows nudity.

For platforms that don’t allow nudity

For platforms that don’t generally allow nudity, the simplest and most reliable option for discovering such content on your platform is to utilize an image moderation service that can surface both adult sexual content and CSAM along with it. These can be integrated so that scanning is automatically performed when an image is uploaded, and in the event of a likely match, the upload can be blocked or routed to a human moderator.
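
As a rough illustration of what such an integration looks like, here is a minimal sketch of an upload handler that sends each image to a moderation service before it is published. The endpoint, response fields, and score thresholds are hypothetical placeholders; your vendor's actual API and recommended thresholds will differ.

```python
# Minimal sketch of scanning an image at upload time. The URL, response
# fields, and thresholds below are hypothetical placeholders.
import requests

MODERATION_URL = "https://api.moderation-vendor.example/v1/scan"  # placeholder
API_KEY = "your-api-key"

def handle_image_upload(image_bytes: bytes, filename: str) -> str:
    resp = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": (filename, image_bytes)},
        timeout=10,
    )
    resp.raise_for_status()
    scores = resp.json()  # e.g. {"nudity": 0.97, "csam": 0.85}

    if scores.get("csam", 0) >= 0.8:
        return "block_and_escalate"  # hold the upload; send to a trained reviewer
    if scores.get("nudity", 0) >= 0.9:
        return "route_to_moderator"  # queue for ordinary human review
    return "publish"
```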

Image moderation service providers break down into those that offer machine learning systems with pretrained models that can identify nudity – including Google Cloud, Microsoft Azure, Amazon Rekognition, Hive AI, Vue AI, SentiSight, and Sight Engine – and those that offer human moderation, such as Bunch, Telus International, and Pure Moderation. One vendor, WebPurify, is notable in that it offers a package of machine and human moderation, which provides the best of both worlds. (I have no commercial relationship with any of these companies.)

It’s worth noting that pretrained models for machine learning systems typically do not distinguish between real and illustrated nudity. Therefore, if your platform does distinguish between these types of content, it’s all the more important to keep human moderators in the loop and to train them on this distinction.

For platforms that do allow nudity

There are a number of vendors with products that claim to be able to programmatically identify previously unidentified CSAM. These include Google’s Content Safety API, L1ght, Thorn’s Safer, VigilAI, and Two Hat’s Cease.ai. However, these tools will never be completely reliable at distinguishing between depictions of adults and children. One reason is that the category of child pornography (which continues to be the legal term used for CSAM in the United States) extends to images of teenagers aged 16 and 17, who may be physically fully developed, legally having sex, and difficult to distinguish from adults.

A better option for identifying illegal content on platforms that allow nudity is to use hash matching as a first line of defense against images that have previously been affirmatively identified as CSAM by expert human reviewers. Hash matching is a technique that allows image content to be compared against a database of previously catalogued images. The best known hashing algorithm used for CSAM is PhotoDNA, which was developed by Microsoft in 2009. It allows “fuzzy” matching, which can indicate a likely match even if an image has been altered through cropping, resizing, or other minor deformations.
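
PhotoDNA itself is proprietary and only available under license, but the idea of fuzzy matching can be illustrated with an open-source perceptual hash such as pHash, via the Python imagehash package. The threshold below is purely illustrative; a real deployment would use PhotoDNA or another vetted algorithm against a licensed hash list.

```python
# Illustration of "fuzzy" hash matching with a generic perceptual hash
# (not PhotoDNA, which is licensed separately). A cropped or resized copy
# of an image still produces a nearby hash value.
from PIL import Image
import imagehash

HAMMING_THRESHOLD = 8  # smaller distance = more similar; illustrative value

def is_likely_match(candidate_path: str, known_hashes: list[imagehash.ImageHash]) -> bool:
    candidate = imagehash.phash(Image.open(candidate_path))
    return any(candidate - known <= HAMMING_THRESHOLD for known in known_hashes)
```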

Databases of CSAM image hashes are maintained by several organizations, most prominently the National Center for Missing and Exploited Children (NCMEC) and the Internet Watch Foundation (IWF). Since 2019, the two organizations have shared their hash data, so access to either organization’s database provides similar coverage. Direct access to these hashes can be obtained through membership in the IWF or by special arrangement with NCMEC (which is not granted except to the largest tech companies). One good reason why access to hash lists is limited is the relatively recent discovery that hashes can be reversed – revealing actual thumbnails of the content they represent.

Even without direct access to CSAM hash databases, platforms can still obtain the same benefits by subscribing to a web service that utilizes this data. Web services that scan images against CSAM hash databases are provided by a number of vendors including Microsoft (which has a free Azure-based PhotoDNA cloud service) and Cloudflare (with its excellent CSAM Scanning Tool). This article contains a more in-depth review of these and other CSAM detection products.
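
To give a sense of how lightweight such an integration can be, here is a sketch of a call to Microsoft's PhotoDNA cloud service. The endpoint, header, and payload shape follow Microsoft's published samples but may have changed; treat them as illustrative and verify against the current documentation once you have a subscription key.

```python
# Hedged sketch of checking an image URL against a hosted PhotoDNA match
# service. Endpoint and payload shape are based on Microsoft's published
# samples and should be verified against the current documentation.
import requests

PHOTODNA_MATCH_URL = "https://api.microsoftmoderator.com/photodna/v1.0/Match"
SUBSCRIPTION_KEY = "your-subscription-key"

def check_against_hash_list(image_url: str) -> bool:
    resp = requests.post(
        PHOTODNA_MATCH_URL,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"DataRepresentation": "URL", "Value": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("IsMatch", False))
```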

Reporting CSAM

Apparent CSAM is defined under U.S. law as:

  • An image of a real child, or one that is indistinguishable from a real child
  • Showing sexually explicit conduct, or nudity that focuses on the genital region

Once an image of apparent CSAM comes to the attention of an Internet platform, it is required to be reported to the CyberTipline of NCMEC, which is the U.S. government contractor mandated by law to receive these reports. Failure to comply can expose a provider to a fine of up to $150,000 for a first offense, and up to $300,000 for subsequent offenses (18 U.S. Code § 2258A).

Importantly, reviewing and reporting CSAM cannot be fully outsourced or entrusted to machine learning systems. Under a 2021 decision of the United States Ninth Circuit Court of Appeals, evidence of a user possessing CSAM is not admissible in court unless a human being at the reporting Internet platform – not a representative of NCMEC – first reviewed it. This is another reason why human beings can never be completely removed from the CSAM moderation and reporting workflow.

As a first step in executing its reporting obligations, a platform must register with NCMEC by emailing espteam@ncmec.org. This will give the reporter access to a reporting dashboard, which can be used directly for small volumes of reports, or to an API which can be used to integrate reporting into the platform’s moderation infrastructure. Semi-automated reporting is also integrated into some of the specialized CSAM detection tools mentioned above, including Microsoft’s PhotoDNA cloud service, Cloudflare’s CSAM Scanning Tool, and Thorn’s Safer.

The contents of a CyberTipline report are broadly within a platform’s discretion, but generally include whatever identifying information the platform has about the user who uploaded the content (such as username, email address, and IP address), details about the content (such as URL, date and time uploaded, and EXIF data embedded in the file), and a copy of the uploaded content itself.
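
As a rough sketch, the data gathered for a report might be collected into a structure like the following before being submitted through the dashboard or API. The field names here are hypothetical; the authoritative schema is the one NCMEC supplies with its reporting API after registration.

```python
# Hypothetical structure for assembling CyberTipline report data. Field
# names are illustrative; NCMEC's API defines the real schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CyberTiplineReport:
    # Identifying information about the uploader held by the platform
    username: str
    email_address: str
    ip_address: str
    # Details about the content
    content_url: str
    uploaded_at: datetime
    exif_data: dict = field(default_factory=dict)
    # Reference to the platform's retained copy of the file
    evidence_file_ref: str = ""
```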

The platform is also obliged to retain its own copy of the illegal content and associated data for a period of 90 days, after which it must be deleted. This requirement has not been well observed, according to recent reports that moderation contractors have been retaining illegal content indefinitely, and even circulating it for training purposes. I cannot caution against this too strongly, given that possession of such content except in the narrow circumstances permitted by law is a serious criminal offense.
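
A simple way to honor the 90-day limit is a scheduled job that permanently purges preserved evidence once it ages out. The sketch below assumes a hypothetical access-restricted directory where retained copies are stored, using each file's modification time as the preservation date.

```python
# Minimal sketch of enforcing the 90-day preservation window. The storage
# location is a hypothetical access-restricted directory; retention is
# keyed off each file's modification time.
from datetime import datetime, timedelta, timezone
from pathlib import Path

EVIDENCE_DIR = Path("/secure/csam-evidence")  # hypothetical restricted storage
RETENTION = timedelta(days=90)

def purge_expired_evidence() -> None:
    cutoff = datetime.now(timezone.utc) - RETENTION
    for item in EVIDENCE_DIR.iterdir():
        if not item.is_file():
            continue
        preserved_at = datetime.fromtimestamp(item.stat().st_mtime, tz=timezone.utc)
        if preserved_at < cutoff:
            item.unlink()  # delete permanently; never archive or circulate
```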

Preventing CSAM

Necessary as it may be, removing CSAM from a platform after it appears is a fire-fighting approach, which experts in the field acknowledge can only go so far. The Trust & Safety profession is called upon to do more, and to lay the groundwork for preventing abusive behavior in the first place. This includes identifying and reducing the risk factors that fuel the dissemination of illegal content, and intervening early to prevent and minimize harm.

This is a bigger subject than can be tackled in this article, and also a bigger social responsibility than Internet platforms can handle alone – platforms can never take the place of sex educators, social workers, therapists, or media literacy professionals. But with that said, there are some prevention interventions that do fall squarely within the remit of platforms, largely concerning product design and community management, which will be treated in future articles.

CSAM is unlike any other form of toxic online content, due to the unique regulatory environment that surrounds it and the grave harms that it causes when propagated online. For these reasons, it makes sense to have a specialist on your team to help you respond to this threat. If you work for a platform that would like to establish a CSAM scanning and reporting workflow, or to have your existing policies and procedures reviewed, please feel free to reach out for a consultation.

