Less than a decade after his creation in 1934, Superman was called on to do battle with what was at the time an unthinkable scourge: a small army of “mechanical monsters,” or, as we would call them now, robots.

In the Fleischer Studios cartoon, these robots are controlled by a nefarious evildoer who sits behind a large bank of dials and knobs, dispatching them remotely to seize money and jewels. They overpower Tommy-gun-armed police but are no match for Superman.

At the time, the idea of an automaton being used to commit crime was a novelty. The entire cartoon, in fact, is a battle between Superman and modern technology: A mechanical monster (imagine!) that can fly (!!) knocks down Superman and entangles him in electrical wires (!!!). The robot was a gauzy, conceptual threat, filling a role that would soon come to be played in comic books by the murky dangers of nuclear energy.

On Saturday, the New York Daily News reported on another gauzy, conceptual threat: an army of robotic social media users being deployed to seize the political conversation from innocent Americans.

The nefarious evildoer pulling the levers of that battalion of bots, according to the Daily News, is Robert Mercer, a tech billionaire who helped revamp Donald Trump’s presidential campaign last summer and who’s often given broad credit for Trump’s victory. Until 2016, Mercer’s hand in the conservative political sphere was subtle: He bankrolled Breitbart News and a “media watchdog” organization. He’s also a major investor in Cambridge Analytica, a firm that prides itself on its ability to leverage data to influence action.

For several weeks, there has been renewed interest in the question of social media bots, following widespread reports of a sudden influx of followers to the @realdonaldtrump account on Twitter. It was, therefore, probably only a matter of time before Mercer — pro-Trump, private and data-savvy — was given the credit. The Drudge Report gave the story prominent play.

These stories, though, including the Daily News’s, tend to be embraced for the same reason that Superman’s monsters were so chilling: The threat is novel and not well understood. There’s another level here, too: Assuming that vocal Trump supporters on social media are not real people serves a political purpose of its own.

With that in mind, here’s what we know.

Bots exist.

It’s important first of all to define what we mean by “bot.”

A bot is, by definition, not a person. Think of it like the difference between a bank teller and an ATM: Each can do some of the same things, but the ATM can do certain specific tasks a lot faster and a lot more consistently.

On Twitter, a bot is a bit of code that leverages the platform’s programming interface to tweet, retweet or follow another account. For example, the bot @trumphop retweets old Trump tweets at the same time of day and the same day of the year as when Trump tweeted the original.
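To make that concrete, here is a minimal sketch of what an anniversary-retweet bot in the spirit of @trumphop might look like. It assumes the older tweepy 3.x Python library and placeholder credentials; it is an illustration of the kind of code involved, not the actual code behind @trumphop or any other real account.

```python
# A minimal sketch of an anniversary-retweet bot, assuming the older
# tweepy 3.x library and placeholder credentials. Illustrative only,
# not the actual code behind any real account.
from datetime import datetime, timezone

import tweepy

# Hypothetical credentials from Twitter's developer portal.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)


def retweet_anniversaries(screen_name):
    """Retweet the account's reachable tweets that were posted on today's
    month and day in an earlier year."""
    today = datetime.now(timezone.utc)
    # The timeline endpoint only reaches an account's most recent ~3,200
    # tweets; a bot surfacing genuinely old tweets would work from an archive.
    for status in tweepy.Cursor(api.user_timeline, screen_name=screen_name,
                                count=200).items(3000):
        created = status.created_at
        if (created.month, created.day) == (today.month, today.day) \
                and created.year < today.year:
            try:
                api.retweet(status.id)  # the bot's only "write" action
            except tweepy.TweepError:
                pass  # already retweeted, deleted or rate-limited


if __name__ == "__main__":
    retweet_anniversaries("realDonaldTrump")
```

That same small set of API calls — post, retweet, follow — is all a typical bot needs, which is part of why they are cheap to run in bulk.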

In the context of influencing politics, the goals are different. As the New York Times’s Farhad Manjoo reported last month, one way that Twitter bots are used to amplify news stories is by creating an artificial flood of interest in a particular story, with the hope that the story lands on Twitter’s trending topics list. From there, it can attract much broader national or international attention.

In a way, this is similar to efforts to game attention in other contexts. Efforts to boost a website in Google’s search results, for example, can similarly make something that’s not all that popular or important seem much more so. In the short-attention-span ecosystem of Twitter and the news media, that artificial importance might have a more immediate effect.

Another way that Twitter bots can influence the political conversation is by quickly replying to posts from prominent users. During the campaign, the first response to a tweet from Trump was often from a bot called “PatrioticPepe,” whose replies usually included an anti-Hillary Clinton graphic.

Where bots are apparently less successful is in actually persuading people. While there are bots that manage to lure Twitter users into endless, pointless conversations, for the most part a bot isn’t going to convince someone of a political point with an extended rhetorical argument any more than an ATM is going to talk you into opening a new savings account.

There are efforts to game news stories, and there were during the campaign, but not all of them were automated.

There were a lot of examples of efforts to draw attention to news stories over the course of the campaign, but those efforts were often driven by people, not automated accounts.

In December 2015, Adrian Chen, a reporter for the New Yorker who had written about Russian influence efforts earlier that year, noted that he’d seen a lot of the Russian accounts he was tracking switch to pro-Trump messaging. But many of those were better described as “trolls”: accounts managed by real people and meant to mimic American social media users. To continue our bank analogy, it’s as if someone set up a fake storefront and pretended to be a teller.

Former FBI agent Clint Watts testified before the Senate Intelligence Committee in April about the extent to which the Russians were using precisely that strategy to try to boost conflicting or confusing messages on Twitter. Leveraging a mix of bots and trolls, he said, Russian interests tried to promote false or slanted stories from sources, reputable and not, in an effort to muddy the waters of truth. This happened earlier this year when Russian actors apparently planted a fake news story about Qatar that spurred a diplomatic crisis in the Middle East.

We spoke to Kate Starbird, a University of Washington researcher, who noted that Trump and those hoping to muddy the water were able to take advantage of an existing network of conspiracy theorists to great effect over the course of the campaign. Some of the outlets leveraged bots to tweet out their stories, but for the most part, there was a community of people interested in sharing and promoting information that cast Trump in a favorable light and his opponents in a negative one — even when the information was obviously nonsense.

The Times looked at a recent example of a popular user in the conspiracyverse who managed to promote an inaccurate story all the way to a Fox News Channel broadcast. There’s no evidence that any bot was required.

It’s not clear how pervasive those efforts are.

All of that said, the extent to which bots and trolls influence the conversation on Twitter isn’t clear.

A study from the University of Southern California cited by Manjoo estimated that a fifth of the accounts that tweeted the most about the campaign last year were automated. How can you tell whether an account is automated? A few ways: How does it tweet? Does it only retweet? Does it tweet an abnormal amount? Did the user set up a profile picture? Are its tweets geolocated?
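To give a sense of how such signals might be combined, here is a toy scoring sketch. The features, thresholds and weights are hypothetical, chosen only for illustration; the classifiers behind studies like USC’s rely on many more signals and on machine-learned models rather than hand-picked rules.

```python
# A toy illustration of heuristic bot-scoring. The thresholds and weights
# below are hypothetical; they only show how the signals mentioned above
# (retweet-only behavior, abnormal volume, missing profile photo, absent
# geolocation) could be folded into a single score.
from dataclasses import dataclass


@dataclass
class AccountStats:
    tweets_per_day: float    # average daily tweet volume
    retweet_ratio: float     # share of tweets that are retweets, 0.0 to 1.0
    has_profile_photo: bool  # did the user upload an avatar?
    geolocated_share: float  # share of tweets carrying location data


def bot_likelihood(a: AccountStats) -> float:
    """Return a rough 0-to-1 score; higher means the account looks more automated."""
    score = 0.0
    if a.tweets_per_day > 100:     # humans rarely sustain this pace
        score += 0.35
    if a.retweet_ratio > 0.9:      # nearly everything is a retweet
        score += 0.30
    if not a.has_profile_photo:    # default avatar is a weak signal on its own
        score += 0.15
    if a.geolocated_share < 0.01:  # location data never attached
        score += 0.20
    return min(score, 1.0)


# A high-volume, retweet-only account with no photo and no location data
# scores 1.0; a moderate, photo-bearing account scores far lower.
print(bot_likelihood(AccountStats(250, 0.97, False, 0.0)))  # 1.0
print(bot_likelihood(AccountStats(12, 0.40, True, 0.05)))   # 0.0
```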

That statistic, though, is also interesting in its mirror image: The vast majority of the accounts that tweeted the most about the 2016 campaign were simply humans who tweeted a lot about it.

Another study found that pro-Trump bots were much more prominent online than pro-Clinton ones as the campaign wrapped up. What was the net effect? It’s extremely hard to say.

This is where the Mercer connection becomes irresistible.

Cambridge Analytica, that data firm, has been cast as the magical black box that scientifically squeezed out just enough votes for Trump to make the difference in the election. That depiction is perfectly fine with the company, of course, because that’s its sales pitch. The Guardian has run several stories positioning the company as “a shadowy global operation involving big data” that also made the difference in the U.K. Brexit vote. So there’s your answer, one might suggest: The bots may not have been numerous, but they were smart.

But on its main selling point — the ability to persuade people to take a particular action by triggering psychological cues — reviews are mixed. The company itself couldn’t tell the Times when it had been the difference-maker in a political campaign. One data point from 2016: The firm started out in the camp of Sen. Ted Cruz (Tex.), who did not end up as the Republican nominee.

The evidence that Trump’s support is heavily artificial is awfully thin.

So does Mercer control a massive and growing army of bots doing Trump’s bidding? It’s hard to say, but the most commonly cited evidence to that effect — Trump’s Twitter following — doesn’t show anything of the kind.

When some social media users started focusing on Trump’s Twitter following a few weeks ago, claiming, inaccurately, that he’d suddenly seen a surge in new followers, news outlets used a site called Twitter Audit to evaluate his following. Half of Trump’s followers, that tool suggested, were fake.

But that evaluation is both less rigorous than the one used by the researchers in the USC study and a lot more variable. As of this writing, the tool estimates that only 30 percent of Trump’s following is fake, a lower percentage than for, say, @barackobama or The Washington Post.


It’s also not clear why bots supporting Trump would need to follow him on Twitter anyway. Some have suggested that Trump’s interest in bragging about his social media audience might prompt him or his team to buy fake followers. Possible. But that’s a very different thing than a “bot army.”

The idea that Trump is being followed by millions of bots rests on another error: It doesn’t take millions of bots to have the desired effect. A small number of Twitter accounts responding in short order can seem like a flood to an individual user. Many reports of people being “targeted by bots” with hostile comments, for example, come down to a person receiving a few dozen negative responses. That seems like a lot in the context of normal human interactions; in the streamlined world of online whining, though, it’s trivial.

There’s a reason bots are being blown out of proportion right now.

The Post’s Abby Ohlheiser noted that the conspiracy theories surrounding Trump’s account fit into a broader pattern of Trump opponents seizing on easily debunked information in the early months of his presidency.

In part, that’s a function of the unprecedented distrust and dislike of the president. But it’s probably also a function of the polarization of American politics.

In August, the Pew Research Center found that nearly half of supporters of Hillary Clinton knew precisely zero people who were backing Trump. (A bit less than a third of Trump supporters knew no Clinton voters.) If you don’t know anyone who supports Trump and you see a number of people online who are vociferously supporting him, that would naturally lead to some skepticism.

When you then read news reports about Russian influence on the 2016 election and couple that with this vague idea that robots are swooping in to help bolster Trump’s candidacy and presidency? Well, that’s a perfect Petri dish for a conspiracy theory to blossom.

That’s the undercurrent to all of this. A poor understanding of the role of automated systems is coupled with a ready-made villain to present a chilling threat that demands a heroic response.

It would make a fun superhero cartoon, really.

