Jeremy Malcolm

Cybersecurity for Trust and Safety Professionals Handling CSAM

Following five years working in trust and safety in the United States, this year I moved back to Australia. While here, I received a notification from Cloudflare’s excellent CSAM Scanning Tool (reviewed here) that a forum post uploaded to a website of one of my clients had been identified as suspected CSAM (child sexual abuse material). No problem, right? Simply review the image, determine whether it really is CSAM, and then complete the usual removal, archival, and reporting procedures.

Well, it turns out that that’s easier said than done…

Law Enforcement and Trust & Safety at Odds

It’s surprising how often police and trust and safety professionals are not on the same team. Some years ago a client of mine, an adult website, received a request from the police that we not ban certain accounts that we identified as apparently engaged in grooming or solicitation, because those were actually bait or honeypot accounts being run by the police. For the safety of our users, we refused the request and banned the abusive accounts.

If that wasn’t bad enough, in both the United States and Australia, individual trust and safety professionals have been targeted by police over possession offences. For example, in 2022 a school principal was arrested for retaining possession of child abuse images that students had shared, solely for the purposes of reporting and discipline – there was no allegation that the images were used or intended for any other purpose.

The problem of overzealous enforcement isn’t limited to trust and safety teams and professionals. As described in depth in my recent submission on the review of Australia’s content classification laws, other odd choices of target for prosecution have included a grandmother over innocent footage of her grandchild, and a man who shared a sexual Simpsons meme.

So when I receive a report of CSAM, I’m not only worried for the safety of whoever might be a victim of that material, I’m also worried about the safety of the person who reported it, and about my own safety. Simply put, I no longer trust Australian law enforcement to allow me to do my job without hindrance.

This article will outline the legal risks that trust and safety professionals face (mostly in Australia, though also comparing with U.S. law), the insufficient legal protections that they enjoy, and some of the cybersecurity precautions that they may be advised to take in order to protect themselves.

Insufficient Legal Protection

Nineteenth century legal philosopher John Austin maintained that laws do not necessarily have any moral basis, but rather simply express the will of the sovereign authority, backed by the threat of sanctions. There is no better illustration of this than the case of CSAM. It might be lawful (but is never morally right) for police to widely distribute child abuse images, while conversely it is morally right (but might not always be lawful) for trust and safety professionals to handle CSAM as part of their duties to remove and report it.

If an ambulance driver exceeds the legal speed limit, they could raise a legal defence of necessity if they were ever charged over doing so, because the law recognises that their otherwise illegal actions were justified. So too, there are some circumstances in which trust and safety professionals are able to raise a defence to a charge of CSAM possession.

The problem for the profession is that these circumstances are very narrow. In the United States, handling as few as three items of CSAM, even if the images in question were promptly deleted and/or reported to authorities, can land a trust and safety professional with a possession charge to which they can raise no defence. This threshold is perilously low for professionals who may uncover large amounts of CSAM all at once, all from a single user of their platform.

Under Australian law there is no such two-item amnesty, but there are defences for those engaged in enforcing, monitoring compliance with, or investigating a contravention of Australian or foreign law. Unless such a defence can be established, handling such content at all amounts to a strict liability crime, which depending on the circumstances could see charges brought under the Criminal Code, the Customs Act, and/or State law.

It’s also worth noting that this Australian defence isn’t available at all in response to charges brought under the Customs Act following a search at the border. In other words, trust and safety professionals who travel with work devices are liable to have them searched at the border without a warrant, and have no defence if sensitive content is found on those devices, perhaps even in cache or deleted space.

Another problem is that both the U.S. two-item amnesty and the Australian Criminal Code defences are what are called affirmative defences. This means that you can still be charged with a crime and possibly imprisoned without bail, before having the opportunity to raise the defence. You are then effectively required to prove your own innocence at trial (possibly after waiting years), or to cop a guilty plea.

While you might not think that this would be an issue for you, because you don’t store such content on your work device anyway, does your company have any moderation guides that include samples that might be illegal (hopefully not, and yet…)? Does your web browser cache contain images that passed through your platform’s moderation dashboard? How sure are you of your answer?

Also keep the destination’s law in mind. In Australia, illegal content includes much consensual 18+ pornography, artwork, fiction, and non-fiction that might be entirely permissible under your platform’s terms of service and the platform’s local law – posing an especially high risk for professionals who work for adult, LGBTQ+, or fan platforms and who might routinely deal with such content.

Threat Modelling

This being so, it remains incumbent upon trust and safety professionals to take care of their own safety by exercising sensible cybersecurity practices. This doesn’t mean that they should ever intentionally break the law – but it does mean that they should avoid ever being put in a situation where they risk being arrested by overzealous law enforcement authorities, and having to affirmatively prove their own innocence.

The starting point in threat modelling for cybersecurity involves asking four questions:

  • Who are you? If your work includes receiving, triaging, investigating, or acting on reports of illicit platform content, then the channels through which you receive these reports are of interest to law enforcement. If you’re travelling with electronics, you’re also automatically placed under suspicion.
  • Who is your adversary? While our ultimate adversaries are online abusers, as explained above it is unfortunately necessary to treat state, federal, and border law enforcement agencies as potential adversaries also.
  • What do they want? Law enforcement’s priority is simply making arrests and securing convictions. To support these convictions, what is needed is evidence that their target dealt with illicit material in some way, such as possession, importation, or sharing.
  • How will they try to get it? There are three main ways:
    • Reporting: Often charges begin with a report from a platform, and sometimes those reports can be false. For example, Google once reported a man to police over medical photos of his child that had been stored on Google’s cloud, and it was TikTok who informed on the Australian grandmother.
    • Surveillance: Under both U.S. and Australian law, telecommunications providers and online platforms have obligations to assist law enforcement, usually under warrant or similar order. These obligations are broader in Australia, where the communications regulator even possesses the power to compel providers to provide access to encrypted communications.
    • Border search: Both U.S. and Australian border police can also search personal electronics at the border without a warrant. In the U.S., a suspect can be forced to use biometrics to unlock their device. In Australia, a suspect can be forced to divulge a PIN or password in some circumstances.

In short, law enforcement agencies may wrongly treat the channels through which trust and safety professionals receive reports as potential evidence of criminal activity, and may select them as targets for investigation. This can include engaging in upstream online surveillance, physical attacks, and legal coercion.

This doesn’t necessarily come with a polite request or warning. In 2021, a client of mine was reported to authorities by Google and had its entire cloud account suspended without notice, simply because a single user had misused the platform for CSAM. I’ve had other clients whose sites have been taken down in similar circumstances, before anyone thought to talk with the client’s own trust and safety team.

In some cases, the authorities quite literally come in guns blazing. In 2019 the owner of a website that published taboo sex stories was the subject of an over-the-top paramilitary-style raid on his property that uncovered nothing (though the publisher was eventually sentenced to an astonishing 40 years imprisonment). Neighbours filmed explosions at the scene.

Law enforcement do not care about the difference between fantasy and reality, between art and abuse, between a family photo and a crime scene; to them it is all one and the same. Frankly, most members of the public are of much the same view.

Today, supposedly progressive commentators, arm in arm with sex work abolitionists, baldly put forth the view that AI pornography must be regulated by the same legal standards as CSAM, and that to suggest otherwise is crazy and wrong. In this environment, law enforcement operates with significant impunity for overstepping, and few are willing to even talk about it.

Travelling with a Work Device

A comprehensive personal cybersecurity tutorial is beyond the scope of this article. For that, I recommend DigitalDefense.io. Instead, I will focus here on some measures that trust and safety professionals should consider in the one situation in which they are most vulnerable – when they are travelling. Many of these tips apply equally to those working remotely from a home office.

To protect against physical attacks at the border, the simplest advice is simply never to travel with a device that you have used for accessing material that might be unlawful in any country that you are visiting or transiting through. There is no simple way for you to ensure that traces of that content do not remain on your device, possibly in forms that are invisible to you and that you cannot easily remove. If you can, keep separate devices for travel, and only use them to access known safe content while you are away.

If that isn’t possible and you do need to work while travelling, then there are a few next best options. The one that I would recommend is to work only from a temporary operating system such as Tails or Whonix, which won’t store anything on your device. Both options also automatically use the Tor network to avoid network-based surveillance. The main difference between them is that Tails can run directly from a removable device such as a USB flash drive, DVD, or SD card, while Whonix requires virtualisation software such as VirtualBox running on the host machine.
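
Whichever you choose, verify the integrity of the downloaded image before writing it to your boot medium. Both projects publish their own verification instructions, which take precedence over anything here; purely as a minimal sketch of the principle, with a placeholder filename and checksum:

```python
import hashlib

# Placeholders: substitute the real image path and the checksum
# published on the project's download page.
IMAGE_PATH = "tails.img"
PUBLISHED_SHA256 = "0" * 64

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large images don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if sha256sum(IMAGE_PATH) == PUBLISHED_SHA256:
    print("Checksum matches: safe to write to your USB stick.")
else:
    print("Checksum mismatch: do NOT use this image.")
```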

If your tolerance for risk is a little higher, you could travel and work with a Chromebook. This would require that before crossing a border, you perform a powerwash and then sign in again with a separate, second account that has never been used for work, in case you are selected for a search. This isn’t quite as safe as using Tails or Whonix, as it is possible that advanced forensic techniques may still be able to recover traces of the previous account’s data from the device’s storage or from Google.

A Chromebook does not come with a built-in VPN, but it does support many third-party VPN services and corporate VPNs. For power users, enabling Chrome OS’s support for Debian GNU/Linux makes a range of other security software, including the Tor Browser, available.

Storage

I would not advise ever travelling with a device that has previously been used to access unsafe content that may have been cached or stored to a local filesystem. Even deleting such content that you are aware of is not guaranteed to render it irretrievable. Secure deletion utilities that were once relatively reliable are no longer effective on certain filesystems and storage technologies, including SSDs, whose wear-levelling can leave copies of “deleted” data in areas of the drive that no utility can overwrite.

With that said, a situation could conceivably arise in which a trust and safety professional could be required to store CSAM while travelling, e.g. under the REPORT Act mentioned below. In this case, the best options are to keep that content in an end-to-end encrypted file storage service or in a separate locally encrypted filesystem – never, it should go without saying, on Google Drive, OneDrive, or similar. Note that relying on your device’s full-disk encryption is not enough. Although important in case your device is lost or stolen, full-disk encryption is less useful if the device is seized at the border and you are forced to power up and unlock it.
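
What a locally encrypted filesystem looks like varies by platform (VeraCrypt containers, LUKS volumes, and encrypted disk images are common choices). Purely as an illustrative sketch of the underlying principle – a key derived from a passphrase you carry in your head, rather than tied to the device – here is file encryption using Python’s third-party cryptography library:

```python
import base64
import os

# Third-party dependency: pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a Fernet key from a passphrase using the scrypt KDF."""
    kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def encrypt_file(src: str, dst: str, passphrase: bytes) -> None:
    """Encrypt src to dst; a random salt is stored with the ciphertext."""
    salt = os.urandom(16)
    with open(src, "rb") as f:
        token = Fernet(derive_key(passphrase, salt)).encrypt(f.read())
    with open(dst, "wb") as f:
        f.write(salt + token)

def decrypt_file(src: str, passphrase: bytes) -> bytes:
    """Return the decrypted contents of a file written by encrypt_file."""
    with open(src, "rb") as f:
        salt, token = f.read(16), f.read()
    return Fernet(derive_key(passphrase, salt)).decrypt(token)
```

Because the key is derived from a passphrase rather than from the device, the content stays protected even if you are compelled to power up and unlock the device itself.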

Passwords

It is also important to use strong, unique passwords for each device and service, stored in a password manager such as Bitwarden or 1Password, so that they won’t all be compromised if you are required to unlock one device. Consider also using unique usernames for sensitive online services such as VPNs, and storing these in the password manager too.
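
Any good password manager will generate these for you; for a sense of what “strong and unique” means in practice, here is a minimal sketch using Python’s standard secrets module:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    """Generate a random password from a 94-character alphabet.

    At 24 characters this gives roughly 24 * log2(94), or about 157,
    bits of entropy, far beyond realistic brute-force attacks.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    # One independent password per service: none shared, none reused.
    for service in ("vpn", "email", "password-manager-vault"):
        print(service, generate_password())
```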

If you use a PIN as a quick unlock mechanism for your device or your password manager – generally a bad idea – be sure that it is not reused. Also ensure that your password manager does not auto-unlock on login, that it auto-locks when inactive, and (of course) that you commit the unlock password to memory rather than writing it down. Ideally, temporarily uninstall the password manager from your device while travelling.

Do realise that if you use a phone or phone-based authenticator app as a second-factor authentication device for any online service and that phone is seized, you will lose your second factor – this is aside from the fact that SMS authentication is insecure anyway. A better option would be to discreetly travel with a hardware 2FA device such as a YubiKey.
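
To see why a seized phone takes your second factor with it, it helps to know that a TOTP authenticator app derives its codes entirely from a shared secret enrolled on the device. A minimal sketch of the standard algorithm (RFC 6238, using only Python’s standard library; the secret below is the well-known documentation example, not a real one):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period           # 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The secret, not the phone, is the real factor: anyone who holds it can
# generate valid codes, and without it (or a backup) you cannot.
print(totp("JBSWY3DPEHPK3PXP"))
```

A hardware key keeps its secret in tamper-resistant storage instead, but the same lesson applies either way: record your recovery codes somewhere safe before you travel.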

Holding Law Enforcement Accountable

If you are disturbed that it has become necessary for trust and safety professionals to jump through such hoops simply to protect themselves from overzealous enforcement authorities, then you’re right to be concerned. In the longer term, law enforcement does need to be held accountable for the misuse of its powers. But this will be a long fight, and one that few have shown themselves willing to take on.

I have experienced first hand the manner in which law enforcement has colonised the establishment child protection sector, crowding out and seeking to discredit dissenting voices such as sex worker collectives and human rights advocates, or simply making them feel unwelcome by partnering with sex work abolitionist groups or surveillance tech vendors that threaten the rights and safety of minorities.

It’s fair to say, however, that in the broader trust and safety field the terrain remains contested. While law enforcement-friendly viewpoints and vendors still dominate, their advocates share mindspace with those promoting public health-based approaches, human rights assessments, and LGBTQ+ rights. Sufficient noise is being made on the sidelines that the field as a whole has not yet been ceded to law enforcement, nor ever should it be.

In fact I’ve written before that it is incumbent upon trust and safety professionals who situate their work within a rights framework to push back against government proposals that flout human rights norms, rather than acquiescing to a dystopian future of blanket surveillance that imperils our communities and ourselves. This means opposing measures such as those that would require private communications to be vetted by AI robots, or that would further criminalise speech.

Not only should new authoritarian law enforcement powers be on our radar, but it’s just as important to continue applying scrutiny to the use of existing powers by agencies such as the eSafety Commissioner, which has platformed hate groups, and the Australian Border Force (ABF), which has a record of systematically misusing its coercive powers of search and arrest for corrupt and illegal purposes.

Fighting for Better Protections for the Profession

Finally, as a profession we ought to be actively advocating for new protections to be enacted for trust and safety professionals performing their duties in good faith. While working in the United States, I routinely dealt with lawful content that could nevertheless be judged obscene under Australia’s puritan censorship laws. Even doing that, outside the country, is technically an offence under Australian law.

The existing protections in both U.S. and Australian law are in urgent need of strengthening, so that trust and safety professionals acting reasonably and in good faith have immunity for dealings with content that they are required to access as a bona fide requirement of their profession.

To begin with, in Australia, the existing trust and safety defences under the Criminal Code should be extended to importation offences, and Australians should not be criminalised over web browsing that they lawfully do overseas. In the United States, there should be a higher threshold number of images before a defendant loses the possible defence that they were reporting, or had deleted, the images.

Unfortunately, however, lawmakers have shown little interest in supporting a safe enabling environment for trust and safety professionals. The Revising Existing Procedures On Reporting via Technology Act (or the REPORT Act), which passed into U.S. law in May 2024, does now authorise platforms to retain illicit content for longer after they have reported it to NCMEC – for one year, rather than 90 days. But this change was made for the benefit of law enforcement, not the trust and safety profession.

Conclusion

Trust and safety professionals who receive reports of objectionable material already bear a heavy mental and emotional burden by being exposed to this content. They shouldn’t also have to worry about being arrested simply for doing their jobs. Yet these professionals frequently work at great personal risk, especially when they travel.

The current legal landscape in both Australia and the United States presents significant risks to professionals in this field that cannot be ignored. For as long as this remains the case, it will be crucial for such professionals to adopt stringent cybersecurity practices and to stay informed about the legal implications of their work.

Furthermore, there is a pressing need for legal reforms that recognise and protect the essential work of trust and safety professionals. This means pushing for amendments that offer comprehensive immunity for actions taken in good faith and expanding current defences to cover all aspects of their duties. Only through such measures can we create a safer and more supportive environment for those on the front lines of online safety.


I am a leading ICT policy advisor and advocate, driven by a vision of the potential for information and communication technologies to advance socially just outcomes, coupled with a strong awareness of how this vision intersects with political, legal and economic realities.

Over two decades I have applied my expertise and experience in private legal practice, in high-level management of innovative businesses, as a respected and high-profile civil society leader, and as a bold social entrepreneur. Throughout my career, my quiet but unwavering commitment to achieve equitable solutions in fiercely contested domains has been a constant.

Jeremy Malcolm

Trust & Safety Consultant