Russian President Vladimir Putin speaks to the media after his annual televised call-in show in Moscow on Thursday. (Alexander Zemlianichenko/AP)

Bots airing pro-Kremlin views have flooded the Russian-language portion of the social media platform Twitter, in what researchers from the Oxford Internet Institute say is an effort to scuttle political discussion and opposition coordination in Russia.

In a new study of “political bots” on the platform, the sheer scale of automation is staggering: Of a sample of 1.3 million accounts tweeting regularly about politics in Russia that researchers reviewed between 2014 and 2015, around 45 percent, or 585,000, were bots.

So if you were to mention, or enter a flame war with, a random account from that sample, there would be nearly a 1-in-2 chance you were not communicating with a real person.

The study, released on Monday by the Computational Propaganda Research Project housed at the Oxford Internet Institute, investigates the manipulation of public opinion through automated processes on social media in nine countries: China, Russia, Poland, Brazil, Canada, Germany, Ukraine, Taiwan and the United States. The research was backed by U.S. and European Union government grants.

It shows, to varying degrees, how regimes, parties and politicians have repurposed social media accounts to direct streams of abuse at domestic rivals or foreign foes, quickly build massive political followings, game social media metrics, or even create bots that create more bots.

Those Internet campaigns are a threat to democracy, the authors claim.

“For democracy to work, voters need to have high-quality information,” said Philip Howard, professor of Internet studies at the Oxford Internet Institute and the project’s principal investigator, in an interview. “Social media could provide that. But at the moment, it looks pretty bleak.”

Some of the surprising findings: Right-wing bots in Poland outnumber those on the left by two to one. In China, automation is more widely used by pro-democracy agitators than by the government. And in the United States, bots amplifying digital propaganda had a “measurable influence” during the 2016 presidential election that saw Donald Trump narrowly edge out Hillary Clinton in several battleground states.

Taken together, the studies point to a significant update in the narrative around social media: Once seen as a tool of democratization and protest, as during the Arab Spring, it has increasingly become a weapon wielded by established political actors and authoritarian regimes, simulating public opinion through memes and hashtag democracy.

Twitter has been criticized for its tendency to play down complaints about online abuse, often spouted from anonymous accounts. A March study from the University of Southern California and Indiana University estimated that as many as 15 percent of Twitter accounts may be automated.

Facebook, Howard said, should also take greater steps to identify and eliminate bots. He said the company should testify and release metadata about fake accounts in the investigation into Russian influence on the 2016 U.S. presidential election.

“We’ve seen they can adjust when authoritarian regimes ask them to comply with data authorization rules,” he said. “Maybe we can see if they would adjust to requests from democracies.”

Additionally, the study’s authors said, while Russian social networks such as VKontakte and Odnoklassniki can be censored because they are based in Russia, bots offer a way to muddy the waters on platforms that remain out of the Kremlin’s reach.

The study does not provide direct evidence that the bots were created by the Russian government, nor does it show how they were employed; it relies heavily on media reports of bot farms in Russia where young employees in cubicles were paid to churn out comments by the hour.

But it does show the degree to which someone, or a bevy of actors, has deployed fake online personalities devoted to sharing pro-government content.

In Russia, the bots were identified by a number of attributes, the authors wrote, including copied-and-pasted content, an abnormal number of retweets, a high frequency of hashtags, and a lack of biographical data.
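Those four signals are simple enough to check mechanically. Below is a minimal, hypothetical sketch of how such a rule-based filter might look in Python; the Account fields, thresholds and scoring are illustrative assumptions, not the study's actual classifier.

```python
# Illustrative only: a rule-based scorer built on the four signals named above
# (duplicated content, retweet volume, hashtag frequency, missing biography).
# All thresholds and field names are hypothetical, not taken from the study.
from dataclasses import dataclass
from typing import List

@dataclass
class Account:
    bio: str
    tweets: List[str]          # recent tweet texts
    retweet_ratio: float       # share of posts that are retweets
    hashtags_per_tweet: float  # mean hashtags per tweet

def bot_score(acct: Account) -> int:
    """Count how many heuristic signals an account trips (0 to 4)."""
    score = 0
    # 1. Copy-and-paste behavior: many tweets are exact duplicates.
    if acct.tweets and len(set(acct.tweets)) / len(acct.tweets) < 0.5:
        score += 1
    # 2. Abnormal retweet volume.
    if acct.retweet_ratio > 0.9:
        score += 1
    # 3. Unusually heavy hashtag use.
    if acct.hashtags_per_tweet > 3.0:
        score += 1
    # 4. No biographical data.
    if not acct.bio.strip():
        score += 1
    return score

suspect = Account(bio="", tweets=["Glory!"] * 8 + ["Vote!"] * 2,
                  retweet_ratio=0.95, hashtags_per_tweet=4.2)
print(bot_score(suspect))  # -> 4: trips every heuristic
```

A real pipeline would weight these signals and validate them against labeled accounts; a raw count like this only shows how cheaply the named attributes can be screened.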

A separate study by the group, focused on Ukraine, said that those bots had a number of functions: creating mass followings for politicians, amplifying favored political content, requesting that accounts be blocked, tracking changes to Wikipedia articles, and automatically registering new bots.
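To give a sense of how mundane one of those functions is, here is a brief, hypothetical sketch of tracking changes to a Wikipedia article by polling the public MediaWiki API; the article title, polling interval and alerting logic are illustrative assumptions, not a reconstruction of the bots the study describes.

```python
# Illustrative sketch: watch a Wikipedia article for new revisions via the
# public MediaWiki API. Not a reconstruction of any bot from the study.
import time
import requests

API = "https://en.wikipedia.org/w/api.php"

def latest_revision_id(title: str) -> int:
    """Return the newest revision ID for the given article."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "ids",
        "rvlimit": 1,
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=10).json()
    page = next(iter(data["query"]["pages"].values()))
    return page["revisions"][0]["revid"]

def watch(title: str, interval_s: int = 300) -> None:
    """Poll every interval_s seconds and report when the article changes."""
    seen = latest_revision_id(title)
    while True:
        time.sleep(interval_s)
        current = latest_revision_id(title)
        if current != seen:
            print(f"{title} changed: revision {seen} -> {current}")
            seen = current

# watch("Malaysia Airlines Flight 17")  # example title; loops until interrupted
```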

In Ukraine, researchers found that fake online profiles were widely used by insiders across the political spectrum to target rival members of parliament, or even to target themselves and then claim they were the victims of a plot by Russian paymasters.

Automated activity increased sharply in July 2014, when a surface-to-air missile shot down a plane with 298 people on board over southeast Ukraine, a region held by separatist forces backed by Russia. That was the moment, Howard said, when he hatched the research proposal while teaching in Budapest.

“It was really noticeable how social media was being used by [Hungarian Prime Minister Viktor] Orban and Putin to speak to many of my Hungarian friends,” he said. “They all started coming to me saying that Putin said the Americans had shot it down, that the Ukrainians had shot it down.”