Propaganda and other forms of “junk news” on Twitter flowed more heavily in a dozen battleground states than in the nation overall in the days immediately before and after the 2016 presidential election, suggesting that a coordinated effort targeted the most pivotal voters, researchers from Oxford University reported Thursday.
The volumes of low-quality information on Twitter — much of it delivered by online “bots” and “trolls” working at the behest of unseen political actors — were strikingly heavy everywhere in the United States, said the researchers at Oxford’s Project on Computational Propaganda. They found that false, misleading and highly partisan reports were shared on Twitter at least as often as those from professional news organizations.
But in 12 battleground states, including New Hampshire, Virginia and Florida, the amount of what they called “junk news” exceeded that from professional news organizations, prompting researchers to conclude that those pushing disinformation approached the job with a geographic focus in hopes of having maximum impact on the outcome of the vote.
The researchers defined junk news as “propaganda and ideologically extreme, hyperpartisan, or conspiratorial political news and information.” The researchers also categorized reports from Russia and ones from WikiLeaks — which published embarrassing posts about Democrat Hillary Clinton based on a hack of her campaign chairman’s email — as “polarizing political content” for the purposes of the analysis.
“The distribution of junk, conspiracy and polarizing content across the country was not equal,” said Philip N. Howard, an Oxford professor who co-authored the report. “Some states got more than others.”
Twitter prohibits many kinds of bots and has expanded its efforts to combat the problem after years in which independent researchers have documented the proliferating numbers of phony, automated accounts.
The company, which was provided with an advance copy of the report by the researchers, complained about the limits of research conducted using publicly available sets of tweets, as Oxford’s was, through a function called the Twitter search API.
“Research conducted by third parties through our search API about the impact of bots and misinformation on Twitter is almost always inaccurate and methodologically flawed,” Twitter said. It also noted that the report was not reviewed by academic peers before publication.
Howard acknowledged that the report was not peer-reviewed, saying: “This is a working paper, but it is based on an empirically vast body of work: our own studies of five elections in the last year alone and case studies of 26-plus countries; other people’s research on Twitter use and social networks. I admit they have better data that they don’t share through their API, and would welcome the chance to get better resolution on all this. It’s unlikely that the punchline would change.”
Howard and his co-authors, Bence Kollanyi and Lisa-Maria Neudert of the Oxford Internet Institute, said much of the propaganda shared on Twitter had its origins among bots and trolls.
Bots are computer programs that post information automatically on social media, often to push ideas and narratives programmed by their creators. Trolls, by contrast, are human users who post frequently on social media, sometimes using phony identities, to shape online conversation, often in exchange for payment.
The use of bots and trolls in the 2016 presidential election has become a point of national concern, especially since Facebook revealed that 470 accounts and pages managed by a notorious Russian troll farm, the Internet Research Agency, bought more than 3,000 ads during the election season. Many independent researchers have mapped how information increasingly flows back and forth among such platforms as Twitter, Facebook and YouTube, with each amplifying and sometimes manipulating information appearing on other platforms.
Lawmakers and Capitol Hill investigators have pushed major technology companies to disclose what they know about deployment of propaganda and disinformation on their platforms during the campaign.
Howard said junk news originates from three main sources that the Oxford group has been tracking: Russian operatives, Trump supporters and activists in the alt-right, a group that includes white nationalists, anti-Semites and others who rail against “political correctness.”
“Those three kinds of organizations shared a lot of content and pushed a lot of each other’s content,” Howard said. “They worked in concert. They worked to the same ends, the goal being getting polarizing stuff into the swing states.”
The study drew on a sample of tweets posted between Nov. 1 and Nov. 11, collected through the Twitter API, which provides a sample of up to 1 percent of all tweets. The researchers pared this sample further, studying a subset of tweets that, through keywords or hashtags, appeared to have political content and also included some indicator of the account owner’s location.
The researchers then categorized the resulting Twitter links based on whether they originated from professional news organizations, political parties or some “junk news” source. The overall sample set included 781,087 links.
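The filtering-and-classification pipeline described above can be sketched in code. This is a hypothetical illustration, not the study’s actual method: the domain lists, hashtag list, field names and helper functions below are all assumptions invented for the example, whereas the real study relied on hand-coded catalogs of sources.

```python
# Illustrative sketch of the study's pipeline: filter a tweet sample down to
# political tweets with a location indicator, then bucket each shared link by
# the kind of source it points to. All lists and field names are assumptions.
from urllib.parse import urlparse

# Assumed example domain catalogs (the real study used hand-coded lists).
PROFESSIONAL_NEWS = {"nytimes.com", "washingtonpost.com", "reuters.com"}
JUNK_NEWS = {"fakenewsdaily.example", "hyperpartisan.example"}
POLITICAL_HASHTAGS = {"#election2016", "#maga", "#imwithher"}

def is_political(tweet):
    """Keep tweets that carry a political hashtag and a user location."""
    text = tweet["text"].lower()
    has_tag = any(tag in text for tag in POLITICAL_HASHTAGS)
    return has_tag and bool(tweet.get("user_location"))

def classify_link(url):
    """Bucket a shared link by the domain it points to."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in PROFESSIONAL_NEWS:
        return "professional"
    if domain in JUNK_NEWS:
        return "junk"
    return "other"

def tally(tweets):
    """Count link categories across the filtered political subset."""
    counts = {"professional": 0, "junk": 0, "other": 0}
    for tweet in filter(is_political, tweets):
        for url in tweet.get("links", []):
            counts[classify_link(url)] += 1
    return counts
```

Aggregating the resulting counts per inferred state is what would let an analysis like this compare the junk-to-professional ratio in battleground states against the national baseline.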
The report identified 16 battleground states. Of those, the researchers said, 12 received higher-than-normal flows of propaganda and other low-quality information near the election: Ohio, Georgia, Pennsylvania, Colorado, North Carolina, Nevada, Michigan, Missouri and Arizona, along with Virginia, New Hampshire and Florida. Four battleground states — Iowa, Minnesota, Maine and Wisconsin — got less low-quality information on Twitter than the nation as a whole.