SAN FRANCISCO — Twitter is amplifying hate speech in its “For You” timeline, an unintended side effect of an algorithm that is supposed to show users more of what they want.
Many of the hateful tweets surfaced in a Washington Post experiment were from users previously suspended by Twitter and reinstated by new owner Elon Musk, who pledged to de-boost hate speech following his takeover of the site.
The tweets appeared on Twitter’s new “For You” page, which the company unveiled in January as part of Musk’s redesigned site. Twitter says the timeline includes “suggested content powered by a variety of signals,” including “how popular it is and how people in your network are interacting with it.”
In one instance, after an account created by The Post followed dozens of others labeled as extremist, Twitter inserted a quote and a portrait of Adolf Hitler — from a user the account did not follow — into its timeline.
In November, Musk, who had recently purchased the site for $44 billion, announced a policy of restoring previously banned accounts, after earlier laying out a new rule for the site.
“New Twitter policy is freedom of speech, but not freedom of reach,” he tweeted. He said that negative and hateful tweets would be de-boosted and would not be monetized.
“You won’t find the tweet unless you specifically seek it out, which is no different than the rest of the internet,” he wrote.
“New Twitter policy is freedom of speech, but not freedom of reach. Negative/hate tweets will be max deboosted & demonetized, so no ads or other revenue to Twitter. You won’t find the tweet unless you specifically seek it out, which is no different from rest of Internet.” — Elon Musk (@elonmusk), November 18, 2022
But according to The Post’s experiment, Twitter is amplifying hateful tweets.
Twitter and Musk did not respond to requests for comment.
Twitter is in the midst of changing its curated feed and plans to make public its algorithm for recommending tweets at the end of this month, according to Musk.
The site was also set to implement a new “freedom of speech but not freedom of reach” policy before the end of this month, according to reporting from the news site Platformer. The policy appeared to be geared toward fulfilling Musk’s November promise to de-boost hateful content. It was not clear what direct changes Musk had made in the intervening months to fulfill his pledge.
But on a webpage explaining the “For You” timeline, Twitter wrote: “We recommend Tweets to you based on who you already follow and Topics you follow, and don’t recommend content that might be abusive or spammy.”
And this week, Musk said that beginning April 15, only “verified” users — those paying $8 a month for a Twitter subscription — will populate the “For You” feed, though he later clarified that accounts followed by the user will also appear in it.
Since Musk took over Twitter, he’s made sweeping changes to the site, including restoring the accounts of thousands of users who had previously been booted for breaking the site’s rules, leaning heavily on a subscription model with features such as long-form tweets, and bringing back the banned account of former president Donald Trump. He also let go more than two-thirds of the company’s employees, cuts that have caused problems for the site, including difficulty addressing frequent outages.
Musk has touted his plan to make the site the default space to learn what’s going on in the world and weigh in, calling it the “de facto public town square.”
Musk, one of the world’s richest people, has described himself as a free speech “absolutist.”
But experts have warned that removing the guardrails Twitter previously had in place — which some critics, including Musk, have decried as censorship — could result in the platform becoming a cesspool of hate and threats of violence, as well as fomenting dangerous echo chambers showing users only content they agree with.
“It reveals how Mr. Musk’s goal for the platform is contradictory and self-defeating,” said Chris Bail, a Duke University professor who is director of its Polarization Lab, which examines political echo chambers in an effort to reduce the partisan divide. “You can’t have this sort of pure freedom of speech, and you can’t prevent freedom of reach, without some capacity to throttle or flag users. But producing such a list would be an inherently political act, and it would be pretty controversial.”
Even before Musk’s purchase of the company, Twitter was taking an ever-larger role in choosing the content shown to its users. Twitter’s feed originally showed tweets from the accounts a user followed, in chronological order. Later, Twitter began showing tweets that a followed account had liked or replied to.
More recently but still before Musk’s takeover, Twitter began showing recommendations of tweets “You might like.” These sorts of insertions are sometimes marked with a label.
Twitter’s new “For You” page, introduced earlier this year, leaned into that model, taking Twitter further away from its roots as a chronological feed of events and reactions. Users complained after the site defaulted to the “For You” timeline, prompting Twitter to quickly roll out a change that remembered their preference.
To conduct its experiment, The Post created four brand-new “sockpuppet” accounts on Twitter, as well as two comparison accounts that followed mainstream users. For each sockpuppet, The Post followed a random selection of 27 to 39 of the 91 members of the Southern Poverty Law Center’s (SPLC) list of “concerning extremist or extremist-associated” accounts.
The Post then browsed the “For You” page feed for each sockpuppet, recording its findings. The accounts following the SPLC list members were each shown tweets from at least one other member of the list whom that account hadn’t followed. One account, for instance, was shown a tweet from a writer for an anti-immigrant group designated a hate group by the SPLC.
The algorithm did not surface tweets only from accounts on the SPLC’s list of extremist and extremist-associated users or from self-proclaimed racists; those tweets constituted a small fraction of the tweets shown overall.
Many of the tweets fed to The Post’s accounts were instead from more-mainstream right-wing politicians, conservative influencers and some religion-related accounts. Examples included Rep. Lauren Boebert (R-Colo.), podcaster Candace Owens and “Dilbert” cartoonist Scott Adams — whose recent racist rant led many newspapers to drop the comic strip.
None of the accounts created by The Post followed Twitter’s second-most-popular user at the time, Musk. But Musk showed up in users’ feeds anyway. The accounts following extremists and some of those following mainstream users were often shown posts from Musk, sometimes multiple times in a browsing session. (Musk officially became Twitter’s most popular user on Thursday.)
Many of the tweets that Twitter chose to show to The Post’s test accounts were from users who had been suspended from Twitter under its previous management.
One such user posted a quote about how immigration supposedly “dooms” Whites that was taken from a racist French novel popular among white supremacists. Twitter’s algorithms surfaced this tweet to two of The Post’s accounts, which were not following the user.
Another previously suspended account posted a complaint about “rootless globalists,” an antisemitic dogwhistle. Replies to the tweet attacked Jews explicitly.
The account that posted the Hitler quote was marked as “withheld” in France and Germany, both of which ban certain Nazi symbols.
There is no sign Twitter is recommending extremism and hate speech to ordinary users. The Post’s test was an exercise to see whether extremist content is eligible to be fed to users via Twitter’s algorithms, but it may not reflect the real-life experience of anyone on the platform. Real extremist users might, for instance, follow non-extremists as well, in a way that would affect what the algorithm chooses to send to them.
Still, the echo chambers created could be harmful, experts said.
“On social media, when people are spoon-fed harmful content without context, history, or a countermessage, that’s just a recipe for further radicalization,” said Rachel Carroll Rivas, the SPLC’s deputy director of research and analysis.
Cat Zakrzewski contributed to this report.
Methodology: For its experiment, The Post created four brand-new “sockpuppet” Twitter accounts. For each sockpuppet, we followed a random selection of 27 to 39 of the 91 members of the Southern Poverty Law Center’s list of “concerning extremist or extremist-associated” accounts. Then we browsed the For You page feed for each sockpuppet, recording what we saw via WebArchive.page, a specially instrumented web browser that archives all the data involved in a browsing session locally. The accounts observed between 350 and 779 tweets each over one or two browsing sessions between March 6 and March 14; between 29 percent and 55 percent of those observed tweets were from non-followed users. Finally, we extracted the tweets shown to our sockpuppet accounts with the snscrape Python library to find tweets from non-followed users. We used data from software developer Travis Brown to determine which now-active accounts had been suspended under Twitter’s previous management.
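The core statistic in the methodology — the share of observed tweets that came from accounts a sockpuppet did not follow — amounts to a simple set-membership count over the archived tweets. The following is a minimal illustrative sketch, not The Post’s actual analysis code; the field names and data layout are assumptions for demonstration.

```python
# Hypothetical sketch of the non-followed-share computation described in
# the methodology. The 'author' field and list-of-dicts layout are
# assumptions, not The Post's real data schema.

def non_followed_share(observed_tweets, followed_handles):
    """Return the fraction of observed tweets whose author the
    sockpuppet account did not follow."""
    followed = set(followed_handles)
    from_non_followed = [
        t for t in observed_tweets if t["author"] not in followed
    ]
    return len(from_non_followed) / len(observed_tweets)

# Toy example: four observed tweets, two from a followed account.
tweets = [
    {"author": "followed_user", "text": "..."},
    {"author": "stranger_a", "text": "..."},
    {"author": "stranger_b", "text": "..."},
    {"author": "followed_user", "text": "..."},
]
share = non_followed_share(tweets, {"followed_user"})
print(share)  # 0.5
```

Applied to a real session archive, the same count per account would yield the reported 29 to 55 percent range.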