Mourners pay their respects at a memorial four days after a mass shooting at a Walmart in El Paso. (Callaghan O'Hare/Reuters)
P.E. Moskowitz is a New Orleans-based journalist and author of two books, including “The Case Against Free Speech: The First Amendment, Fascism, and the Future of Dissent.”

It’s so predictable at this point that it seems to follow a script: There’s a mass killing linked to white supremacist views; there’s outrage; then reporters, law enforcement and Internet sleuths find that the accused killer had been freely and openly using social media to broadcast those views. The enabling companies apologize and take action.

After the latest such attack — the mass shooting Saturday in El Paso that killed 22 people and left dozens wounded — authorities found a manifesto, filled with racist conspiracy theories, that they believe the accused shooter had posted on the imageboard website 8chan. The manifesto cited the accused shooter in Christchurch, New Zealand, where 51 worshipers were killed at two mosques earlier this year. The accused Christchurch shooter is also reported to have posted a manifesto on 8chan; he, in turn, is reported to have credited Dylann Roof, the convicted shooter at Charleston’s Emanuel A.M.E. Church. The accused killer in April’s shooting at a Poway, Calif., synagogue also reportedly wrote on 8chan that he was inspired by the Christchurch attacks. In all these cases, 8chan boards were filled with messages of support for the shooters.

On Sunday, Cloudflare, an infrastructure company whose services help keep sites like 8chan online, dropped 8chan as a customer. When a company called Epik took over support for the site, Voxility, which leases servers to Epik, cut it off, too, effectively incapacitating 8chan in the short term.

But why did it take all these attacks for tech companies to act? And more broadly, why, when Facebook and Twitter are routinely littered with white supremacist propaganda, hate speech and harassment directed toward women, people of color and the LGBT community, do these companies’ executives routinely defend decisions that wind up keeping hate- and disinformation-filled accounts and posts active?

They don’t want to engage in censorship, they say: In a March op-ed in The Washington Post, Facebook chief executive Mark Zuckerberg wrote that hateful content probably should be taken offline but that he didn’t want to be the one to do it; companies like his have “immense responsibilities,” he said, but “we shouldn’t ask companies to make these judgments alone.” His argument prompted FCC Commissioner Brendan Carr to respond that Zuckerberg was “passing the buck” to government “for a lot of the criticism that Facebook has been receiving.”

After the violence in Charlottesville two years ago, including the murder of Heather Heyer, Cloudflare chief executive Matthew Prince announced that his company would stop providing service to the Daily Stormer, a white nationalist site, but wrote in a company email that he felt conflicted, saying, “No one should have” the power to kick someone off the Internet. (The Daily Stormer has struggled to stay online since but is currently up.)

Are these companies really concerned with free speech, though?

If free speech is really the reason, then there’s a hypocrisy in their reluctance to ban hateful content, because these platforms were never bastions of free speech in the first place. From the beginning, tech companies have limited the scope of content on their platforms: Facebook and Instagram (which is owned by Facebook) generally don’t allow nudity. Several people I know, and apparently quite a few others, have been banned from Facebook’s platforms for posting about negative interactions with, and negative feelings toward, men. Facebook has removed posts with the expression “all men are trash.” When I recently posted a joking caption on Instagram stating that the concept of gender is “dumb” (I’m trans), Instagram sent a friendly automated reminder that I shouldn’t engage in hateful conduct on the platform.

Big tech companies like Google, Facebook and Twitter have rules in their terms of service that ban outright hate speech, but the parameters are often narrow. Facebook’s “Community Standards” define hate speech as any “direct attack on people based on what we call protected characteristics” (race, religion, gender, etc.). This type of limitation might auto-delete a post for a racial slur, but still allow more subtle forms of discriminatory content — false or misleading political stories, racist memes, and posts that are racist or sexist but don’t use commonly known epithets to attack people — to remain up. And a quick search on Facebook for discriminatory language will show you how much hateful content remains.
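
The gap is easy to see in miniature. Below is a minimal sketch, in Python, of the kind of epithet-matching this describes; the banned-terms list and the function are hypothetical placeholders, not any platform’s actual code:

```python
# A minimal sketch of keyword-style filtering. The word list is a
# hypothetical placeholder; no real platform's system is this simple,
# but the failure mode is the same: explicit epithets get caught,
# while discriminatory content in ordinary language passes.

BANNED_TERMS = {"<known epithet>"}  # placeholder for a curated slur list

def violates_policy(post: str) -> bool:
    """Flag a post only if it contains a term on the banned list."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return any(term in words for term in BANNED_TERMS)

# A listed slur would be flagged, but a racist conspiracy theory
# phrased in everyday words would not:
print(violates_policy("Those people are naturally prone to crime."))  # False
```

Anything that avoids the listed words sails through, however hateful the message.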

It’s not just content moderation, either. These companies’ algorithms constantly decide what news and whose updates deserve amplification, placing more value on verified and celebrity accounts. That’s not a ban, but it is an admission that some content deserves more of a platform than other content.

And these algorithms have been found to mirror the views and biases of the people (often white men) who created them. In her book, “Algorithms of Oppression: How Search Engines Reinforce Racism,” UCLA researcher Safiya Umoja Noble even implicates these algorithms in the radicalization of Roof: When he searched “black on White crime,” Google’s algorithm readily surfaced a slew of right-wing conspiracy sites.

Tech companies already block or de-emphasize some speech, while highlighting or promoting other speech, making their free speech arguments for defending white supremacists ring hollow.

This approach, however, may be shifting. After anti-fascist activists pointed out that several hate groups were using Slack, the popular messaging platform, to organize, Slack banned the associated accounts. After El Paso, Prince appears to have had a change of heart: “If we see a bad thing in the world and we can help get in front of it, we have some obligation to do that,” he said. And Fredrick Brennan, 8chan’s founder, now says he believes the site should be shut down.

But even if 8chan shuts down permanently — as of Wednesday, it was reported to be at least partially back online — will other tech leaders, especially at larger companies, prioritize the threat posed by hate content living on their platforms, or will they hide behind a blinkered definition of free speech?