There's A Way To Stop The Algorithmic Spread Of Hate

It's well past time social media companies were forced to face the horrors they've created

It was after a heated little league baseball game in the summer of 1993 that I said something super fucking racist. 

The game was a bitter affair between two shitty teams that desperately needed a win after getting kicked around all season by the league’s best teams, rosters full of elementary school boys with facial hair and biceps and second mortgages. There was a home plate collision that didn’t go our way, a sketchy call at second base on a pick-off play executed perfectly by our pitcher, and a late-game extra base hit down the right field line that looked awfully foul from where I was standing in center field. 

It all added up to a tough loss for my team. We hadn’t won a game all season. We sucked. And another team that sucked had beaten us. I was livid. It was an impossible-to-swallow defeat for your fourth favorite Bad Faith Times blogger. 

On the way home from the field, boiling over with directionless prepubescent anger and frustration, wanting to cry but knowing boys didn’t do that sort of thing, I blurted out that the Asian kids on the other team should go back to where they came from. 

I don’t know why ten-year-old me said such a thing. I had been brought up in a strictly apolitical environment; this was back when politics was for nerds and middle-class white Americans could go months or years without ever thinking about their political leaders or what they were up to. There was no explicitly discriminatory or racist messaging in my nonpolitical household, though I suppose kids can pick up on implicit bias just as easily as the in-your-face kind. 

As soon as I said it, I knew I had fucked up. My mom got quiet. Her face turned to stone. She glared at me through the rearview mirror and said nothing. The silence was terrifying. I got home and tried my best to convince myself that I had done nothing wrong in saying the kids who had beaten my baseball team that day didn’t belong here, but elsewhere. Where? I didn't know. It didn't matter, as long as they were gone. What was so wrong with that, I wondered in my attempt to justify the awful thing I had said, the genocidal utterance borne from the worst part of me. 

I had buddies back then, in the early and mid 90s, who frequently dabbled in racist stereotypes and jokes about whichever ethnicity they felt like mocking that day. I considered it harmless, even expected. These were mostly white kids, but not exclusively. And some of them were adamant that racial stereotypes were true. Wanting to believe my friends were not piece-of-shit racists, but rather brave truth tellers, I sometimes went along with the jokes and the stories meant to instill white supremacy as a law of the universe in our young, unformed minds. 

I wonder today, as I see the brains of kids and teenagers hacked by bad actors in the tech industry, how I would have turned out if I had unlimited access to gutter racist propaganda in the 90s. We had dial-up AOL, like a lot of middle class families back then, but nothing in the universe of today’s internet, replete with the most disgusting fascist garbage you can comprehend, and some you can’t. Google "racist memes" if you dare and feel your spine turn to ice.

What if I had been able to log on after that baseball game in the summer of 1993 and look up racist memes and videos and blog posts about Asian people in the US and across the world? What if I had had the ability to validate my anger-driven racist pondering? What if I had found something or someone to validate my worst impulses, to tell me nothing was wrong with me? What if I could have protected myself from the discomfort of introspection and self-critique?

I might have walked away from that game, and that day, fully believing that I had said nothing wrong, and anyone who disagreed with me was not only misguided, but my enemy. They did not understand the situation, I might have thought.

Maybe I would have grown up to be someone completely different, someone who associated with racist bad actors who engage in bad-faith politics to push their hateful ideology into the political mainstream, where it sits and festers and draws susceptible minds like a bug zapper light on a meltingly hot July night. Maybe I grow up and marry someone with a similarly discriminatory belief system, and we convince each other that we’ve said and done nothing wrong, and we have kids and raise them to believe the menacing lie of white supremacy and tell them, too, that they’ve done nothing wrong, and that they are superior by nature. And then they go online one day and look up some hateful shit and say, hey, mom and dad were right about These People. They’re gross, they’re stupid, they’re a blight on the nation. They've gotta go.

And on it goes, because I had the means to validate my hate by someone on the internet determined to cultivate and spread hate and pain anywhere and everywhere through the power of a meme. 

‘A Particularly Grim Test’

You would be forgiven if you can’t recall the details of one of America’s many racist mass shootings, but the one that happened in Buffalo in 2022 – in which a radicalized young white man slaughtered ten people at a supermarket in a majority-black neighborhood – might be the one that makes social media companies think twice about platforming violent fascist content. 

Survivors of the live-streamed Buffalo mass shooting are suing social media companies, including Meta, Amazon, Discord, Snap, and the ultra-racist 4chan, for playing a vital role in radicalizing the racist killer, Payton Gendron, through their carefully designed recommendation algorithms. “It’s a particularly grim test of a popular legal theory: that social networks are products that can be found legally defective when something goes wrong,” The Verge’s Gaby Del Valle wrote.

The social media empires are expected to do what social media empires always do and simply point to Section 230 of the Communications Decency Act, which has been used over and over to help these companies escape legal culpability for the harmful things spread on their platforms. As Mary Anne Franks argues in her book, Fearless Speech, Section 230 has been expertly wielded by social media companies to defend reckless speech in the guise of protecting free speech. Everything that’s fucked about today’s free speech discourse is encapsulated in Section 230, which has functioned as an accelerator of fascist radicalization over the past decade. 

I’m aware of the hesitation among leftists and liberals to nuke Section 230. The fear, as I understand it, is that making social platforms bear the legal brunt of the shit spread on their sites could be weaponized by those on the right, who would deploy all manner of bad-faith arguments to bully social media companies into censoring left-wing discourse. While I don’t know off the top of my head how to stop that kind of fallout, I know for certain that people – particularly young people – will continue to be corrupted by fascist influencers unless and until social media companies feel the weight of the law. Somehow, some way, they have to be held accountable for the society-killing content that spreads to hundreds of millions of logged-on human beings via their platforms. 

That's the goal of the legal action taken by victims of the 2022 Buffalo massacre.

“In the US, posting white supremacist content is typically protected by the First Amendment,” Del Valle wrote. “But these lawsuits argue that if a platform feeds it nonstop to users in an attempt to keep them hooked, it becomes a sign of a defective product — and, by extension, breaks product liability laws if that leads to harm. That strategy requires arguing that companies are shaping user content in ways that shouldn’t receive protection under Section 230, which prevents interactive computer services from being held liable for what users post, and that their services are products that fit under the liability law.”

Algorithms designed to keep people enraged and engaged and scrolling forever and ever, the argument goes, are “dangerous and unsafe” and therefore defective under New York state’s product liability law. 

“This community was traumatized by a juvenile white supremacist who was fueled with hate — radicalized by social media platforms on the internet,” said John Elmore, an attorney for the plaintiffs in the Buffalo shooting case. “He obtained his hatred for people who he never met, people who never did anything to his family or anything against him, based upon algorithm-driven videos, writings, and groups that he associated with and was introduced to on these platforms that we’re suing.”

Gendron, spurred on by the so-called Great Replacement conspiracy theory pushed by mainstream conservatives like Tucker Carlson, was found with a manifesto filled “with memes, in-jokes, and slang common on extremist websites and message boards,” according to the New York Attorney General’s office. 

Racist online memes, Gendron said, “have done more for the ethno-nationalist movement than any manifesto.” A cursory scan of Elon Musk's fascist playland – the X platform formerly known as Twitter – will offer terrible insight into what Gendron is saying here. The images, the videos, the text posts: They are designed by bad actors for scrollers to mainline racism, to send it straight to the frontal lobe.

Take it from the guy who shed blood and ruined lives – including his own – after looking at memes for years and years: These things have a real and tangible impact. They corrode minds. They turn otherwise ordinary people into monsters, into remorseless murderers. All of this has a source, and that source is social media, where hatred and racism spread like a disease. It’s deep in our marrow now. We have to do something to treat the sickness, and if that means suing social media platforms into oblivion, so be it. Make them afraid to platform hate. Make them think twice about turning away from the terror they are creating. Force them to face the horrors they have created.

Strip away the powerful algorithmically driven reinforcement mechanisms and see what happens. Removing pathways toward radicalization is our most direct way out of this hellish political culture, out of this fascist moment. I’m all too relieved that I did not have those pathways available to me. 

Follow Denny Carter on Bluesky at @dennycarter.bsky.social