In 2020, Yulia Tsvetkov, a 26-year-old Russian theater director and artist, found herself under house arrest, facing up to six years in prison. Her crime? Sharing feminist and LGBTQ+-friendly artwork on social media, which authorities branded as “propaganda of non-traditional sexual relations among minors.” Just a year earlier, Michelle, a 53-year-old transgender woman, was sentenced to three years in a Russian men’s prison—where she faced potential violence and medical neglect—for posting erotic manga illustrations online. Prosecutors twisted her art into evidence of child abuse, ignoring her identity and the context of her work.
These aren’t isolated incidents. They’re stark reminders of how governments can wield vague laws to silence queer voices, turning self-expression into a criminal act. Russia’s crackdown on “non-traditional” content is a chilling blueprint—one that feels uncomfortably relevant as the United States veers toward conservative authoritarianism.
With cultural tides shifting, how confident can we be that LGBTQ+ individuals—and the platforms hosting their voices—won’t face similar criminalization here? History offers little comfort: from Joe Orton’s 1962 arrest in the UK for defacing library books to the censorship battles over Allen Ginsberg’s Howl in 1950s America, queer art has long been a target. Today, as book bans surge and political rhetoric sharpens, the question isn’t if this could happen—it’s when. More urgently, what can human rights activists and trust and safety professionals do to protect these freedoms before they’re lost to us all?
A swing against LGBTQ+ content online
Not long ago, Big Tech platforms waved rainbow flags and touted inclusivity. Now, hate speech targeting transgender people is openly allowed on platforms like X and Facebook, echoing the U.S.’s rollback of transgender rights. Are we nearing a day when hosting LGBTQ+ content becomes a crime in itself?
Though it sounded far-fetched a year ago, the pieces of a ban are falling into place. A slick rhetorical trick equates queer content—think inclusive sex ed, heartfelt memoirs, or a kids’ book about two penguins—with hardcore porn, obscenity, and even child sex abuse material (CSAM), all swept under the vague label of “content harmful to minors.” With this sleight of hand, laws that look reasonable at first glance—like age verification for adult sites or holding platforms accountable for harmful content—threaten to silence queer kids seeking community and trans adults sharing their truth online.
Below, I unpack three tactics that could underpin an LGBTQ+ content ban—two already in motion, one looming on the horizon. I’ll make the case that anyone who values internet freedom and LGBTQ+ rights must act now to counter these attacks before they lock us out of a free web. I’ll share how I’m fighting back, and I invite you to join me—whether by speaking out, coding solutions, or amplifying this fight.
Threat 1: Age verification laws
The first tactic is already rolling out: laws that cloak censorship as child protection, starting with age verification. The far-right Heritage Foundation’s Project 2025 agenda for Trump’s second term lays it bare:
Pornography should be outlawed. The people who produce and distribute it should be imprisoned. Educators and public librarians who purvey it should be classed as registered sex offenders. And telecommunications and technology firms that facilitate its spread should be shuttered.
This isn’t just about porn—it equates LGBTQ+ sex education, dismissed as “gender ideology,” with an existential threat to conservative values, blaming Big Tech for enabling both. Offline, this conflation first hit libraries. Across the U.S., lawmakers are proposing and passing bills to jail librarians for lending books deemed “harmful to minors”—a label slapped on LGBTQ+ works like the award-winning Gender Queer.
Now, this library crackdown is fueling a wave of state age verification laws online. These rules force platforms to verify users’ ages before granting access to anything vaguely “harmful to minors”—a net that could catch everything from memoirs to queer teen forums. Texas’s law is under Supreme Court review right now, and its fate could decide whether dozens of similar measures survive or collapse.
Threat 2: Section 230 repeal
If age verification falters, there’s another play in the works: unraveling the law that keeps platforms safe from lawsuits. Whatever the Supreme Court decides on age laws, Section 230 of the 1996 Communications Decency Act stands as a key shield for sites hosting user-generated LGBTQ+ content labeled “harmful to minors.” It’s the rule that protects platforms from liability for what users post while letting them moderate freely—basically, the internet’s free-speech backbone.
Gutting Section 230 could force platforms to preemptively scrub LGBTQ+ content—no age checks needed. Bipartisan bills like the EARN IT Act (discussed here and here) are pushing this, tying immunity to aggressive crackdowns on CSAM. The result? Over-censorship on steroids. After FOSTA-SESTA passed, platforms didn’t just target sex trafficking—they axed legal content overnight. Tumblr’s 2018 porn ban, meant to dodge liability, ended up nuking queer art and support forums too. This could wipe out trans creators’ posts or queer teen support groups in a heartbeat, all to avoid a lawsuit.
Here’s the twist: while the Trump-aligned Heritage Foundation pushes deregulation elsewhere, bipartisan calls to dismantle Section 230 keep growing. Some Democrats, oddly, lead the charge, ignoring its role as a free-speech lifeline. A second Trump term might not prioritize this—even Elon Musk, who’s tweaked X with 230 in mind, calls full repeal a “disaster” after leaning on it in court. Still, that’s cold comfort when queer voices hang in the balance, one policy shift from being silenced.
Threat 3: Quasi-official censorship
Beyond age laws and Section 230, a third threat looms—one that’s more speculative but already weaponizable. The Internet has a built-in takedown machine: the CSAM hash list run by the National Center for Missing and Exploited Children (NCMEC). This quasi-governmental group maintains a database of real child abuse images—hashed for platforms to scan and remove matches, often triggering criminal probes. It’s meant for a narrow purpose: stopping actual child pornography, as U.S. law defines it. So why isn’t it censoring queer content yet? Because it’s been tightly controlled—until now.
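The mechanics of that takedown machine are simple to sketch. Below is a minimal, hypothetical illustration of hash-list scanning in Python; note that real deployments use perceptual hashes such as PhotoDNA (which match visually similar images, not just identical bytes) and access NCMEC's list under strict agreements. The blocklist entry and function names here are invented for illustration only.

```python
import hashlib

# Hypothetical blocklist: real hash lists are distributed by NCMEC under
# strict access agreements and use perceptual hashing, not plain SHA-256.
# This hex digest is a placeholder (it is just the SHA-256 of b"test").
BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def scan_upload(data: bytes) -> bool:
    """True if the upload matches the blocklist and should be removed
    (and, on a real platform, reported for investigation)."""
    return sha256_of(data) in BLOCKLIST

# The placeholder digest above is the SHA-256 of b"test", so this matches:
assert scan_upload(b"test") is True
assert scan_upload(b"harmless family photo") is False
```

The point of the sketch is that whoever controls the contents of `BLOCKLIST` controls what gets removed, automatically and at scale, with no review of the underlying images by the platforms doing the scanning.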
To date, NCMEC has assiduously limited its hash list to real child abuse images; a 2024 audit confirmed as much, backed by Supreme Court rulings tying the legal definition of CSAM to harm against actual kids. But pressure is mounting from both within and outside NCMEC to stretch that definition. Generative AI art is the thin edge—uncontroversial to some, but opening the door to other artwork being included. In 2025, Texas police raided an art gallery over non-sexualized pieces misidentified as child porn, while lawmakers have slapped that same loaded term on Gender Queer. Redefining “CSAM” to include LGBTQ+ content isn’t a leap—it’s a step.
Look abroad, and the warning signs flash brighter. The Canadian Center for Child Protection has overreached with its NCMEC-like blocklist, demanding takedowns of frames from a kids’ movie and ethnographic photos—once even reporting a teen for her blog art. Europe and the United Kingdom push AI to flag broad swaths of text and graphics, while Australia prosecutes over Simpsons parodies. These aren’t hypotheticals—they’re blueprints for abuse.
As a trust-and-safety professional, I’ve always said we should tackle distasteful art without diluting the horror of real CSAM—abuse of actual kids. But NCMEC is already leaning right. After Trump’s reelection, it scrubbed transgender victims from its site. How much would it take to flip it into a censorship tool, hashing queer content as “illegal”? With lawmakers long smearing LGBTQ+ folks as predators, the groundwork is there. This isn’t just crystal-ball gazing—it’s a threat we can’t ignore.
Fighting back: a new Center for Online Safety and Liberty

I can’t stand by as these threats unfold. Pouring my trust and safety expertise, advocacy, and tech skills into this blog over the past few years has been a joy, but the time has come to step up again and channel those skills into something bigger.
That’s why I’m thrilled to announce I’m taking the role of Chair at the Center for Online Safety and Liberty (COSL), launching today. The attack on online freedom—especially targeting LGBTQ+ content—isn’t hypothetical; it’s real. COSL is here to fight back, building a safer, freer internet for everyone.
This is a collective effort. COSL acts as an incubator for independent projects tied to a core mission: empowering individuals and communities to thrive online by building safer spaces, fostering creativity, combating harm, and championing digital rights and freedom. We’re confronting threats like age verification, Section 230 rollbacks, encryption battles, and content-scanning overreach. We’re crafting free, open-source trust-and-safety tools—starting with my own Dead Dove and Modtools:Image, with more ambitious ones ahead. And we’re fostering safe, inclusive communities, beginning with fan spaces (yes, I’m letting my geek flag fly).
Our first highlighted project, Liberato, is a nonprofit hosting service, where I serve as Head of Trust and Safety, built for marginalized communities who face the highest risks of censorship and surveillance. It scans content against NCMEC’s CSAM hash list and removes matches—no compromises there. But if anyone demands we axe artistic or LGBTQ+ content, we’ll log the demand in a transparency archive and push back hard.
Liberato is only the beginning. Each month, COSL will unveil new efforts—podcasts, fundraisers, petitions, software, social platforms—all driving our cause forward.
But COSL needs you to succeed. These threats strike deep—silencing queer voices isn’t a future risk, it’s happening. If that stirs you, join us; there are plenty of ways to help:
- Coders, contribute to one of our open-source software projects, or pitch your own.
- Organizers, help us launch the bullying-free fan hub we’re building.
- Activists, flag the policies that worry you—we’ll mobilize together.
- None of the above? Consider a monthly donation.
However you contribute, COSL amplifies your impact. A freer, safer Internet starts with us—let’s build it now.