Last month, the Supreme Court temporarily prevented a controversial Texas law on social media censorship from going into effect. The statute gives Texans the right to sue social media platforms that have more than 50 million U.S. users if they believe their posts have been censored based on ideology. It also requires platforms to have systems that respond to complaints about content moderation.
Supporters of what became the Texas statute argued that social media platforms “have acted primarily to limit mostly conservative views.” Opponents of the Texas law pointed to the “cesspool of racial slurs, misogyny, and targeted harassment that the platforms would be powerless to control” if it went into effect. But new research shows that pushing defamation, lies, incitement and hate speech off mainstream platforms — “deplatforming” — can backfire, driving extremist speech onto niche platforms where conspirators thrive in isolation from reality-based discourse.
How the research was done
To understand deplatforming’s effects, one of us, Tamar Mitts, collected data on 30,000 individuals with accounts on two social media platforms, Twitter and Gab, in September 2020, and began following their accounts. The data included the information that users publicly shared on both sites, including their posts and public profile information. Since Gab caters to people on the far right, the study examined whether some of these people would be thrown off Twitter — and if so, how deplatforming would affect their comments and other social media behavior.
Indeed, 2,200 of the users tracked in the study were suspended from Twitter by February 2021. Of these, 762 had actively posted on both platforms during these months. The study focused on these 762 suspended users to understand whether being banned from Twitter, a mainstream platform, affected their behavior on Gab, a less-moderated platform.
The study identified users who had not been thrown off Twitter but otherwise had posted very similarly to those who had: using the platform roughly as often, posting similar amounts of hateful content, and reaching roughly the same numbers of other users. It then compared both groups' online behavior on Gab before and after one group was suspended from Twitter. In particular, the study examined posts expressing anger about censorship by big tech, endorsements of white supremacy and hate speech, and interactions with hate groups active on Gab.
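The matched-comparison design described above can be sketched in a few lines of code. This is only an illustrative toy, not the study's actual analysis: the matching features, the tolerance threshold, and all field names below are invented stand-ins for whatever the researchers actually used.

```python
def match_controls(suspended, candidates, tol=0.25):
    """Pair each suspended user with the most similar non-suspended user.

    Similarity is a normalized absolute difference across hypothetical
    pre-suspension features: posting rate, share of hateful posts, and reach.
    """
    features = ("posts_per_day", "hate_share", "followers")
    matched = []
    used = set()
    for s in suspended:
        best, best_dist = None, float("inf")
        for c in candidates:
            if c["id"] in used:
                continue
            dist = sum(
                abs(s[f] - c[f]) / max(abs(s[f]), abs(c[f]), 1e-9)
                for f in features
            )
            if dist < best_dist:
                best, best_dist = c, dist
        # Accept the match only if the average per-feature gap is within tol
        if best is not None and best_dist <= tol * len(features):
            matched.append((s, best))
            used.add(best["id"])
    return matched


def diff_in_diff(pairs, outcome="hate_posts"):
    """Average (after - before) change for suspended users, minus the same
    change for their matched controls: a difference-in-differences estimate."""
    deltas = [
        (s[f"{outcome}_after"] - s[f"{outcome}_before"])
        - (c[f"{outcome}_after"] - c[f"{outcome}_before"])
        for s, c in pairs
    ]
    return sum(deltas) / len(deltas) if deltas else 0.0


# Toy example: one suspended user and one closely matched control
suspended = [{"id": 1, "posts_per_day": 5.0, "hate_share": 0.20,
              "followers": 100, "hate_posts_before": 2, "hate_posts_after": 7}]
candidates = [{"id": 2, "posts_per_day": 5.1, "hate_share": 0.21,
               "followers": 98, "hate_posts_before": 2, "hate_posts_after": 2}]
effect = diff_in_diff(match_controls(suspended, candidates))
```

The point of pairing each banned user with a look-alike who was not banned is that any shared trend (say, Gab getting generally angrier over those months) cancels out, leaving only the change attributable to the suspension itself.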
Users banned by Twitter gravitated farther toward extremism
Being suspended from Twitter did indeed change how users behaved on Gab. Those who were deplatformed were more than twice as likely to express frustration with content moderation, to voice anger toward Twitter, and to blame the platform for "liberal bias." Banned users wrote that being deplatformed felt like a direct attack on their political views.
The data also show that these users engaged more with extremist content on Gab after being suspended from Twitter. Using keywords associated with white supremacy and hate speech, the study found that banned users more than tripled the number of posts containing hate speech, while similar users who weren't suspended didn't change what they posted, as you can see in the figure below. Banned users also increased their interactions with hate groups, as measured by mentions of those groups' Gab handles in their posts.
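The two measurements described here, keyword-based hate-speech counts and hate-group handle mentions, can be sketched as simple text matching. The lexicon and handles below are invented placeholders; the study's actual keyword dictionaries and tracked groups are assumptions here, not reproduced.

```python
import re

# Hypothetical stand-ins: placeholder tokens, not the study's real lexicon
HATE_KEYWORDS = {"slur_a", "slur_b"}
GROUP_HANDLES = {"@examplegroup1", "@examplegroup2"}


def count_hate_posts(posts):
    """Count posts containing at least one lexicon keyword (whole tokens)."""
    total = 0
    for post in posts:
        tokens = set(re.findall(r"[\w@]+", post.lower()))
        if tokens & HATE_KEYWORDS:
            total += 1
    return total


def count_group_mentions(posts):
    """Count mentions of tracked group handles across all posts."""
    return sum(
        1
        for post in posts
        for handle in GROUP_HANDLES
        if handle in post.lower()
    )
```

Comparing these counts for the same users before and after suspension, against the matched unsuspended users, is what lets the study say banned users' hateful posting rose while the comparison group's stayed flat.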
More extreme but reaching a smaller audience
On the other hand, other research finds that extremist groups banned from mainstream sites lost significant numbers of followers when they moved to less-regulated hosts, and their audiences remained far smaller. Deplatforming may radicalize a militant hard core, but it also reduces how often casual social media users are exposed to provocative falsehoods.
The Texas statute, if it goes into effect, would apply only to platforms with at least 50 million U.S. users. That’s consistent with broader trends in social media regulation. For example, one bill recently proposed in Congress would create a Federal Digital Platform Commission that would regulate social media outlets — but would exclude “small digital platform businesses” from its authority. Such an approach could have unintended consequences. Yes, it would reduce the chances that ordinary social media users would encounter hate speech. But users kicked off the large platforms might be further radicalized on unregulated niche platforms. Deplatforming can have unintended consequences, and so can selectively prohibiting it.
Tamar Mitts (@TamarMitts) is an assistant professor of international and public affairs at Columbia University, where she uses data science and machine learning to examine conflict and political violence, especially the causes and consequences of radicalization and violent extremism.
Jack Snyder is the Robert and Renée Belfer professor of international relations at Columbia University and author of the forthcoming “Human Rights for Pragmatists: Social Power in Modern Times” (Princeton University Press, July 2022).