Researchers also found that 65 percent of links to phony and conspiratorial news reports went to just 10 prolific websites — a finding that challenges claims that the creation of such content is too widespread and diffuse for technology companies to effectively combat.
“The fake news that matters is not organic, small-scale or spontaneous,” said the conclusion of the 60-page report. “Most fake news on Twitter links to a few established conspiracy and propaganda sites, and coordinated campaigns play a crucial role in spreading fake news.”
Twitter, which was provided an advance copy of the report, said that it works to suspend accounts that it considers fake or that produce spam. The company sometimes also “locks” suspicious accounts, meaning other users cannot access them until the account holders prove to Twitter that they are legitimate.
“As a uniquely open service, Twitter is a vital source of real-time antidote to day-to-day falsehoods,” Del Harvey, global vice president of trust and safety for Twitter, said in a statement. “We are proud of this use case and work diligently to ensure we are showing people context and a diverse range of perspectives as they engage in civic debate on our service.”
The researchers on the Knight report — Matthew Hindman, an associate professor of media and public affairs at George Washington University, and Vlad Barash, science director at the network analysis firm Graphika — examined which Twitter accounts linked to more than 600 sites that “regularly publish unverified stories or flat-out falsehoods.” They declined to name the top accounts but said their overall list comes from an open-source list of fake news and conspiratorial sites kept on opensources.co.
More than 73,000 accounts linked to these sites 10 or more times in the 30 days before the 2016 election.
While the authors found ample evidence of the importance of “bots” in the disinformation networks they studied, they said accounts run by actual humans may have been even more important. About a third of the most heavily followed Twitter accounts that linked to phony news reports appeared to be bots; more appeared to be run by humans.
These users who spread phony news reports on Twitter in some cases did so unwittingly, because they believed the reports, Barash said. He said that complicates efforts by Twitter and other technology platforms to combat the spread of the faulty information.
“Many of their users do consume this information, and just cutting it off wholesale is something that their users may not be very happy about,” Barash said. “This is a very complicated situation.”
The Knight Foundation report does recount an apparent success story in the battle against phony and conspiratorial news. One of the leading sites producing this content in 2016, therealstrategy.com, had its account suspended by Twitter and some other platforms. The website has since apparently gone offline, and the number of links to it has plummeted, from more than 700,000 during the pre-election period to about 1,500 afterward.
The website, whose owners did not respond to efforts to contact them by email and phone, still has an active page on Facebook, which said it was examining allegations against the site after receiving questions about it from The Washington Post.
“Fake news isn’t hundreds of accounts, and [fighting it] isn’t Whack-a-mole,” Hindman said. “It’s a couple of dozen persistent sites that do this all day, every day.”
Because it was prepared for a private client, the report has not gone through the formal peer-review process that would be routine for published academic work, but the authors said it has been informally reviewed by fellow academic experts.