Philip Howard, a University of Oxford professor who studies how automation shapes politics, has a fancy name for partisan election bots. He calls them “computational propaganda” — and lately, he sees them a lot.
What does that actually signify, though? And is anyone listening to these bots? Ahead of Wednesday evening’s debate — and the flood of automated tweets it will inevitably entail — Howard agreed to take some of our questions.
What’s a Twitter bot?
A Twitter bot is nothing more than a program that automatically posts messages to Twitter and/or auto-retweets the messages of others. Some bots declare their status openly; some masquerade as humans; others are actually hybrids that rely on some combination of manual input and automation. (@WashingtonPost works this way, for instance!)
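Under the hood, such a bot can be only a few lines of code. Here is a minimal sketch of that loop, with a hypothetical `post()` function standing in for a real Twitter API call (an actual bot would authenticate and post through Twitter's API):

```python
import time

SENT = []  # record of everything the bot has "tweeted"

def post(message):
    """Stand-in for a real Twitter API call (hypothetical)."""
    SENT.append(message)

# Canned partisan messages the bot cycles through.
MESSAGES = [
    "Don't forget to watch the debate tonight! #MAGA",
    "She has the experience we need. #ImWithHer",
]

def run_bot(rounds, delay_seconds=0):
    """Post every canned message once per round, pausing between tweets."""
    for _ in range(rounds):
        for msg in MESSAGES:
            post(msg)
            time.sleep(delay_seconds)
    return len(SENT)

run_bot(rounds=25)  # 25 rounds x 2 messages = 50 tweets, a typical bot's daily pace
```

A hybrid account simply mixes calls like these with tweets typed by a human operator.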
Regardless of their exact attributes, bots generally tweet far more often than regular users do — 50-plus tweets per day on a target hashtag — which is why, when they’re deployed in large networks, they can take up a big share of the conversation.
How do you spot one?
In elections past, this would have been an easy question: Just look for the day-old Twitter eggs with no profiles and lots of very repetitive messages. Unfortunately, Howard explains, some bots have gotten far more sophisticated since then: “The goal of bot design,” he points out, “is to make them indistinguishable from your family and friends.”
To that end, the best modern bots usually have a photo scraped from the Web and a bio composed of jumbled pro-Trump or pro-Clinton phrases. Some of them will be several months, or even several years, old — they’re more expensive, but more effective. To further complicate things, some botmakers will occasionally take over their fake accounts and send a few tweets manually, the better to foil both researchers like Howard and Twitter’s own spam-prevention team.
Still, Howard said, there are some telltale signs: Humans tend to tweet about a range of subjects and during certain times of day. If an account never seems to sleep, or tweets more than 50 times per day, there’s a good chance it’s tweeting automatically. If you suspect a bot, try the BotOrNot tool from Indiana University: It uses a range of factors — including an account’s timing, language use, and larger social network — to predict, with pretty high accuracy, whether the tweeter in question is a human or a bot.
How do pro-Clinton bots compare to pro-Trump ones?
In terms of bot followers, Clinton and Trump are pretty even: An analysis conducted by The Atlantic earlier this year found that both candidates had low rates of bot followers, at roughly 3 percent. In terms of propaganda bots, though, Trump far outstrips Clinton — as well as Howard’s own well-informed predictions.
“We’ve studied bots in Russia and Venezuela, and after Brexit, and found that the bot traffic was generally 10 to 20 percent,” he said. “So I was very surprised to see that bots accounted for almost a third of Trump’s traffic.”
During both the first and second debates, roughly one-third of the tweets on pro-Trump hashtags, like #MAGA and #CrookedHillary, came from accounts with a high degree of automation. (That could mean pure scripts, or it could entail some degree of human/machine collaboration.)
The rates of pro-Clinton bot tweets, by contrast, were 22 percent during the first debate and 25 percent during the second. All in all, pro-Trump bots sent roughly four times as many tweets as Clinton ones did — though the pro-Clinton bots did up their output for the second contest.
What do these bots generally tweet about? What are they trying to achieve?
Bots aren’t attempting to change hearts and minds — that’s an ambitious task for a bit of code. Instead, most bots exist simply to muddy the facts, making it difficult for neutral bystanders to discern the truth and easier for partisans to reject any views that clash with their own.
Bots, particularly pro-Trump bots, tend to circulate links to conspiracy sites, Howard said — often, they’re the primary force keeping these links in circulation. In fact, Trump bots tend to be more sophisticated than Clinton ones, using hashtags and images that make their messages more persuasive.
During the first debate, pro-Trump bots focused on Clinton’s email scandal and Benghazi; during the second, they homed in on the whole “I’ll-send-her-to-jail” thing. Pro-Clinton bots, meanwhile, spread a number of messages about Trump’s taxes during the first debate, and pivoted to his treatment of women in the second.
“In 2008 and even 2012, Twitter bots were used to make someone seem more popular,” Howard said. “Now they’re more about keeping negative messaging, misinformation, suspicion and even hate speech alive.”
Who’s behind them? Is it the campaigns?
There’s absolutely no evidence that the Clinton or Trump campaigns have anything to do with this. In fact, it’s really hard to trace who’s behind any bot — that’s just the nature of the business. Anyone can cobble together a simple bot or a highly automated account; you don’t even have to code. More sophisticated bots are widely available, for a price, from any number of shady hobbyists and Internet marketers.
Howard hopes to uncover, through more long-term research, whether any botmakers were approached by the campaigns or their surrogates. At this point, however, there’s no telling whether a bot is the work of a PAC or a 15-year-old kid.
Do these bots actually influence the election?
This is another unclear one: There have been no studies that quantify the impact of bot messaging on U.S. elections. Still, many experts in the field are convinced that they have one.
Clayton A. Davis, a PhD student at Indiana University who studies Twitter bots, told The Atlantic that bots could contribute to a sort of “majority illusion,” in which many people appear to believe something, which in turn makes that belief seem more credible. Howard, meanwhile, thinks bots degrade trust among voters — a particular issue in an already polarized and sensitive cycle.
It’s possible that we haven’t even seen the full power of bots quite yet. Howard worries about a worst-case scenario in which Twitter’s untold numbers of sophisticated dormant bots, or “sleeper bots,” might become active.
“My big concern is that all these sleeper bots might wake up and spread a huge amount of misinformation at a critical time, such as on Election Day or even during the third debate,” he said. “That could really be a problem.”
What should we expect bot activity to be like Wednesday night?
Doomsday scenarios aside, we probably won’t see much activity during the debate itself. The vast majority of the tweets sent during the debate are from humans and largely neutral. In the hours and days after the event ends, however, we may see even more “computational propaganda” than we have previously: If the patterns from the first two debates hold, the number of bot tweets about Wednesday night’s debate could spike to nearly one in three.