Will Elon Musk turn Twitter into a beacon of free speech or a “hellscape”? Since Musk bought the social network, his self-proclaimed “free speech absolutism” has alarmed some who fear that extremist views and hate speech will dominate the platform. Some academics, journalists and celebrities announced that they would leave Twitter for alternative outlets like Mastodon, which gained 30,000 new users the day Musk completed his takeover of Twitter, or Post.news, a news-focused alternative to Twitter that promises civil conversations.
We wanted to know whether a typical social media user sees offensive speech as a mere nuisance and a price worth paying, or as so objectionable that they would rather see more active oversight. What kind of language do they consider too offensive? What sanctions would they choose for those who “misbehave” on social media? Here’s what our research finds.
How we did our research
About two-thirds of Americans say that people are too easily offended, but what qualifies as offensive behavior varies for each person. To explore where people are most likely to draw lines, we fielded three survey experiments between July and Oct. 22 this year, before Musk took over Twitter. We recruited 5,130 adult participants living in the United States via Prolific’s opt-in panel, exposed them to a social media post and then asked them to answer a series of questions about their content moderation preferences.
We randomly divided respondents into five groups. The control group read a neutral social media post about watching a movie. The second read a post criticizing a particular group; the third read a post that included swearing at that group; the fourth included speech that disparaged that group; and the fifth included violent threats against the group.
In the first study, the social media post targeted a member of the LGBTQ community; in the second study, billionaires; and in the third, the driver of a pickup truck covered in religious bumper stickers.
We selected respondents whose demographics closely resembled those of adult Twitter users in the United States: they were frequent social media users, younger, and more likely to be Democrats than the general U.S. public. After they read the posts, we asked whether the content should be left as it is, given a warning label, made less visible, permanently removed from the platform, or whether the posting user should be suspended.
How and how much do users want to moderate offensiveness on social media?
Support for content moderation increased by nearly 20 percentage points after reading foul language and by over 40 percentage points after reading violent threats.
But many respondents — ranging from 22 percent to 88 percent, depending on the group — told us that they did not want any action taken against offensive posts. There was one exception: Respondents who read attacks on LGBTQ people did want some moderation, ranging from 44 percent when they read swearing to 80 percent when they read the violent threats. But even there, while 75 percent called for content moderation, only about 50 percent wanted the content removed or the aggressor banned.
Responses also varied by which party respondents identified with. Republicans leaned in the same direction as Democrats on moderating content, but in every case they were less likely to support content moderation.
American social media users do prize free speech. Of the four methods of content moderation available, suspending the person’s account or permanently removing the post — those most akin to “censorship” — were least preferred. In all experiments except one, the majority indicated that the post should stay online (and we saw some support for adding warning labels).
The ‘Musk effect’?
Does Musk’s takeover of Twitter change everything? Soon after Musk closed the deal, reports of racist content emerged. Some advertisers have been withdrawing from Twitter, apparently to avoid having their names appear next to offensive material.
We replicated the experiment in which content moderation preferences were highest, the LGBTQ study. From Nov. 18 to 29 this year, we recruited 1,200 adult participants living in the United States via the same online panel (Prolific), with the criterion that they had not participated in any of our previous studies. The demographic profile resembled that of the previous studies. We got the same results — suggesting that, so far at least, social media users haven’t changed their minds about moderating posts.
What does this all mean?
Observers worry that Twitter may become a cesspool of hate speech. That might not drive most users away; they haven’t abandoned Twitter in large numbers yet. And our work suggests that relatively few users support the strongest forms of content moderation, which regulators and rights groups may wish to consider as they discuss policy that’s consistent with democratic values.
Our work suggests that most social media users seem to tolerate highly offensive language toward at least some groups. If that’s true more broadly, with most users willing to tolerate hateful speech toward a wide range of groups, platforms may have little incentive to invest in curbing this type of behavior — though a mass exodus of advertisers and social pressure campaigns could affect this dynamic, too.
Spyros Kosmidis is an associate professor of politics at the University of Oxford and on Twitter @KosmidisSpyros.
Jan Zilinsky is a postdoctoral researcher at the Technical University of Munich and a research affiliate at the NYU Center for Social Media and Politics (CSMaP) who’s on Twitter @janzilinsky and Mastodon @firstname.lastname@example.org.