It’s easy to read that litany and uncritically assume that our politics are guided by an insidious foreign hand. So many things of such significance tainted by Russian actors? What, after all, can we trust?
The problem with that list, though, is that it is alarmist both as presented and, to a lesser extent, as reported. There simply isn’t any evidence that bots linked to the Russian government had or have a significant effect on the U.S. political conversation.
Consider this tweet from Ari Fleischer, who was press secretary under President George W. Bush. He links to a comment made by a vice president at Facebook who notes that the Russian activity on the social network was more about divisiveness than about Trump, with more than half of the ad spending coming after the election.
The two words that undercut Fleischer’s tweet are the last two: “It worked.”
We looked at the indictment Mueller's team released Friday. It described efforts to stage real-world events as well as social-media posts and ads. The ads included taglines such as "Donald wants to defeat terrorism … Hillary wants to sponsor it," and "#NeverHillary #HillaryForPrison #Hillary4Prison #HillaryForPrison2016 #Trump2016 #Trump #Trump4President." Democratic nominee Hillary Clinton at the time was expected to win the election; a central Russian goal was apparently to make that victory as close as possible and to have a cloud of skepticism follow Clinton into the White House.
But reading the ads included in the indictment and looking at other ads released publicly by Facebook, it's hard to come away with the sense that these ads decided many votes. It's often hard to measure the effectiveness of political advertising, but these ads seem particularly mediocre. If your goal is simply to mix things up and frustrate people, though, the bar for success is lower.
What Fleischer’s tweet implies is that Russian actors successfully sowed division in the United States. That’s incorrect. Russia’s efforts reflected and tried to leverage existing divisions.
The Pew Research Center measured political animosity in June 2016 and found that more than 4 in 10 partisans viewed members of the other party as a threat to the United States. That survey came before many of the more provocative actions taken by the Russians and reflects a long-term trend of growing partisan frustration. The United States was already divided; the Russians appear to have tried to make that gap wider.
There’s not really much reason to think those efforts were successful. The scale of Russia’s ad buys — and the other apparent Russian efforts before and after the election — is tiny. Data released by the House Intelligence Committee show that Russian ads were viewed only 340,000 times in the last month of the campaign. There were 231 million monthly active Facebook users in North America in the fourth quarter of 2016, meaning that perhaps one-tenth of 1 percent of users saw Russian ads that month — assuming that all of those views were in North America (not a safe assumption) and that no one saw one of the ads more than once (also not a safe assumption).
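The back-of-envelope math here is worth checking. A minimal sketch, using the figures cited above (340,000 ad views, 231 million North American monthly active users) and keeping the same admittedly unsafe simplifying assumptions:

```python
# Rough upper bound on the share of North American Facebook users who saw
# a Russian ad in the last month of the campaign. Assumes (unsafely, as
# noted above) that every view was in North America and that no one saw
# more than one ad.
ad_views = 340_000
monthly_active_users = 231_000_000  # North America, Q4 2016

share = ad_views / monthly_active_users
print(f"{share:.2%}")  # 0.15% -- roughly one-tenth of 1 percent
```

Relaxing either assumption only shrinks that figure further.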
When we talk about "flooding" social media, the same scale applies. The New York Times wrote that in the hour after news broke of the shooting in Parkland, Fla., last week, "Twitter accounts suspected of having links to Russia released hundreds of posts taking up the gun control debate." The Times wasn't the one that used the term "flooding," but Wired did.
Hundreds of tweets is not a “flood.” It’s unclear how many tweets there are each day in the United States, but it’s safe to say that there are more than 100 million. Assuming those tweets are distributed evenly over the course of a day (which they aren’t), 1,000 tweets is 0.02 percent of an hour’s total.
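That percentage is easy to verify. A quick sketch, assuming (as above) 100 million U.S. tweets per day spread evenly across 24 hours:

```python
# Share of one hour's U.S. tweets represented by 1,000 bot tweets,
# assuming a conservative 100 million tweets per day, spread evenly.
daily_tweets = 100_000_000
tweets_per_hour = daily_tweets / 24  # about 4.2 million per hour
bot_tweets = 1_000

share = bot_tweets / tweets_per_hour
print(f"{share:.2%}")  # 0.02% of an hour's tweets
```

And "hundreds" of tweets, rather than the generous 1,000 used here, would be a still smaller sliver.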
That's if anyone saw those tweets at all. Russian bots (and accounts powered by real Russian trolls) tweet a lot but often don't have many real followers. Getting real followers is a lot harder than getting fake ones, so a bot throwing a tweet out into the void doesn't mean that tweet will ever be seen. Twitter has provided an answer to the old koan: If you tweet and no one is around to see it, it makes no sound.
As evidence, we can look at that report about how bots tried to push for Franken's resignation, a story picked up by Newsweek and the liberal site Raw Story. The evidence that the bots affected what happened to Franken? The bots started tweeting an article that undercut Franken politically but that pointed back to two newly created websites.
“The bot accounts normally tweeted about celebrities, bitcoin and sports, but on that day, they were mobilized against Franken,” Nina Burleigh wrote for Newsweek. “Researchers have found that each bot account had 30 to 60 followers, all Japanese. The first follower for each account was either Japanese or Russian.”
The author of that article weighed in on Twitter. Precisely none of the traffic the article received came from the sites the bots were pushing. Why were the bots pushing it? One possible reason is that the Franken story was big news and the new sites were hoping that people searching Twitter for Franken news would visit — and see the dozens of ads littering them.
Bots do clearly try to game Twitter's built-in tools for attention. In Politico, Molly McKew of New Media Frontier argues that bots pushing the "#ReleaseTheMemo" hashtag helped propel that hashtag into national prominence, shifting the debate over Nunes's memo alleging surveillance abuses by the FBI and Justice Department. McKew notes how prominent conservatives such as Sean Hannity eventually picked up the hashtag (which, we'll note, was originally tweeted by a real person), as did the president's son.
McKew doesn’t demonstrate unequivocally that the bot activity led directly to the hashtag’s usage among high-profile conservatives, that key accounts using it were bots or that the hashtag wouldn’t have been as successful had the bots not weighed in. Does anyone really think, though, that it was the existence of the hashtag that spurred Trump to release the memo? Hannity’s television show and prominent members of the conservative media were also arguing that Nunes’s memo showed bias against Trump. The hashtag may have been an organizing tool, but the nature of the issue itself is clearly what spurred Trump’s enthusiasm.
The pattern is similar to 2016: There were already a lot of people who supported Trump and releasing the memo, and the Russians may (may) have tried to reinforce that position. Whether those efforts were successful is another question entirely.
Data from a project called Hamilton68 is often cited as evidence of the Russian bots' pervasive presence. On Feb. 14, the day of the shooting in Parkland, the bots tracked by Hamilton68 tweeted 18,000 times, or 0.02 percent of the estimated 100 million daily tweets. The most-discussed subject over the past 48 hours, as of this writing, was Syria; the bots tweeted about it 209 times. That's 0.0001 percent of all tweets over that period.
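Both of those Hamilton68 percentages follow directly from the figures cited above, again assuming roughly 100 million tweets per day:

```python
# The Hamilton68 figures above, expressed as shares of all tweets,
# assuming roughly 100 million tweets per day.
daily_tweets = 100_000_000

parkland_day_share = 18_000 / daily_tweets   # bot tweets on Feb. 14
syria_48h_share = 209 / (2 * daily_tweets)   # bot tweets on Syria, past 48 hours

print(f"{parkland_day_share:.2%}")  # 0.02%
print(f"{syria_48h_share:.4%}")     # 0.0001%
```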
People are bad at scale when it involves figures in the millions. We didn’t evolve to need to comprehend how 100,000 compares to 1,000,000 and, to us, “hundreds” seems like “a lot” almost regardless of context. Hundreds of thousands of dollars in your bank account is a lot. Hundreds of thousands of tweets in a month is not.
The reason Coke runs ads is to remind people of a message they've heard hundreds of times before. No single Coke ad is likely the reason you recently bought a Coke. What Russia seems to be trying to do on social media is, figuratively speaking, to throw out the occasional Coke ad. It's not a flood of Coke ads, especially compared with Coke's own advertising, much less with advertising in general.
And if those ads prompted anyone to buy a Coke, that purchase probably wasn't a big part of Coca-Cola's annual sales, either.