Kate Starbird, a researcher at the University of Washington, first encountered this firsthand when she was studying the online reaction to the Deepwater Horizon disaster in 2010. Analyzing a dataset of 600,000 tweets about the Gulf of Mexico oil spill, she helped put together a map of how information was shared both among those close to the event and more broadly. In addition to tweets about the likely effects of the spill, she noticed an undercurrent of another type of information: misinformation about the most dire possible outcomes — often coming from politically focused accounts.
“There were these weird claims the ocean floor was going to collapse and there was going to be a tsunami of oil coming ashore,” Starbird said when we spoke by phone last week. “It was confusing. I remember people were emotionally affected and scared about this, people that lived in the area.” One woman who lived in Louisiana even sent Starbird a panicked message asking if that risk was real. It wasn’t, of course. But the story was shared within that community as though it might be, including by one Twitter user central to the conversation whose main focus during the spill was using it to be critical of Barack Obama.
After the 2016 election, Starbird revisited that discussion and noticed something resonant.
“I went back and tracked some of these articles using the Wayback Machine and they cited Russian scientists, and they went through right-wing blogs that we might call alt-right now,” Starbird said, referring to the Internet Archive’s tool for cataloging the history of websites. “At the time, I didn’t notice what was going on, but with the benefit of hindsight, you notice that this stuff was happening for a long time.”
While Starbird didn’t document any direct influence from Russian actors in her analysis, it would not have been the only instance of their behaving in that way. In his analysis of a Russian disinformation agency for the New York Times in 2015, journalist Adrian Chen documented an incident from 2014 in which Russians actively spread a news story about a disaster at a chemical plant in Louisiana, going so far as to create fake Web pages and news videos to add realism to the effort. Why? As Chen later explained, the intended effect “was not to brainwash readers but to overwhelm social media with a flood of fake content, seeding doubt and paranoia, and destroying the possibility of using the Internet as a democratic space.”
In early 2016, Starbird and her research team embarked on a different but related analysis. Over the first nine months of the year, they gathered tweets about shooting incidents in the United States, collecting any tweet that used the words “shooting,” “shootings,” “gunman” or “gunmen,” as well as any tweet that used language indicating skepticism about the official story of an attack: “hoax,” “false flag” — meaning an attack secretly launched by authorities to push a political agenda — and “crisis actor,” a term for people theoretically playing roles in a false-flag type of attack. The team then analyzed the data, looking at how “alternative narratives” — a nonjudgmental way of saying “conspiracy theories” — took root and spread.
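The collection step described above amounts to a keyword filter over a tweet stream. As a rough illustration (the terms come from the article, but the function and data below are hypothetical, not Starbird's actual pipeline):

```python
# Hypothetical sketch of the keyword filter described in the article.
# The term lists come from the article; everything else is illustrative.
EVENT_TERMS = {"shooting", "shootings", "gunman", "gunmen"}
SKEPTIC_TERMS = {"hoax", "false flag", "crisis actor"}

def matches(tweet_text: str) -> bool:
    """Return True if the tweet contains any event or skepticism term."""
    text = tweet_text.lower()
    return any(term in text for term in EVENT_TERMS | SKEPTIC_TERMS)

# Toy data standing in for a live tweet stream.
tweets = [
    "Reports of a gunman near the mall",
    "This was a false flag, wake up",
    "Lovely weather today",
]
kept = [t for t in tweets if matches(t)]  # keeps the first two tweets
```

In practice this kind of collection would run against a streaming API over many months, but the filtering logic is no more complicated than this.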
Of course, the first nine months of 2016 made for a particularly interesting time to document the spread of rumors on the Internet. And, sure enough, Starbird documented an ecosystem with which many would soon become intimately familiar, as the campaign of Donald Trump was bolstered by the same sort of doubt and factual relativism that Chen’s Russians sought to sow.
The graph below shows the result of Starbird’s work. Bigger dots show domains that were mentioned more frequently in the tweets her team analyzed. The size of the links between them indicates how frequently the same user tweeted links to those two domains. They are colored by type: purple is mainstream media; aqua, alternative media; red, government-controlled media.
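The graph described above is a co-sharing network: nodes are domains weighted by mention count, and an edge links two domains when the same user tweeted links to both. A minimal sketch of that construction, using toy data and assumed field names (this is not Starbird's actual code):

```python
# Illustrative sketch of building a domain co-sharing network like the
# one described above. The (user, domain) pairs below are toy data.
from collections import Counter
from itertools import combinations

shares = [
    ("u1", "beforeitsnews.com"), ("u1", "nodisinfo.com"),
    ("u2", "beforeitsnews.com"), ("u2", "veteranstoday.com"),
    ("u3", "beforeitsnews.com"), ("u3", "nodisinfo.com"),
]

# Node size: how often each domain was shared overall.
node_size = Counter(domain for _, domain in shares)

# Group the domains each user shared.
domains_by_user: dict[str, set[str]] = {}
for user, domain in shares:
    domains_by_user.setdefault(user, set()).add(domain)

# Edge weight: number of distinct users who shared both domains.
edge_weight: Counter = Counter()
for domains in domains_by_user.values():
    for a, b in combinations(sorted(domains), 2):
        edge_weight[(a, b)] += 1
```

With this toy data, `beforeitsnews.com` is the largest node (shared by all three users), and its strongest edge runs to `nodisinfo.com` (two users shared both), mirroring how size and link thickness are read off the published graph.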
The story that image tells isn’t simple to extract. For example, why, in a network of conspiracy theories, is The Washington Post so big?
One of the major shooting events Starbird documented was the attack at the Pulse nightclub in Orlando. As it turns out, The Post debunked conspiracy theories about that attack. We’d also written about a professor who was fired for claiming that the attack at Sandy Hook was a hoax. Both of those articles were resonant within that universe of people discussing whether shootings in 2016 were real or manufactured, for perhaps obvious reasons. And one less obvious reason: That The Post was debunking the conspiracy theories was seen by some as evidence supporting the theory. After all, the Establishment must have been spooked if The Post were going to that trouble.
As Starbird summarized that argument: “Look, the mainstream media says this is untrue. This is even more evidence that it must be true.”
Several sites were central to sharing and fostering the hoax theories: BeforeItsNews.com, NoDisinfo.com and VeteransToday.com. One trait she noticed among this galaxy of sites was that the same story or theory was often repurposed among multiple sites, giving readers an impression of corroboration when what was actually happening was duplication.
Also prominent is Newsbusters.org, a site run by the conservative group Media Research Center — which itself is heavily funded by the Mercer family, who helped guide Trump’s victory in 2016. Newsbusters’ stated aim is to unearth the bias of the “liberal media” — meaning, in short, that it’s in the business of positioning many mainstream media outlets as fundamentally untrustworthy, making it of use to those wishing to promote alternative narratives.
It’s prominent in Starbird’s graph, she said, mostly thanks to another name that you may be familiar with from the 2016 election: Mike Cernovich, a prominent voice in the far-right social media community. Cernovich tweeted a Newsbusters story about how CNN edited a statement from the victim of a shooting, using it to suggest that mainstream narratives are often wrong.
Two Russian government-backed sites, RT.com and Sputnik, were also included in the alternate-narrative conversations. RT (formerly Russia Today) would duplicate stories from a site called 21stCenturyWire, which was generally shared by users in Starbird’s dataset who also shared stories from NoDisinfo and VeteransToday.
A node on that graph you might expect to see is Infowars, a site predicated on sharing poorly sourced theories of this nature. But Infowars, while prominent, stands apart from the main network. That’s in part because a lot of the accounts tweeting Infowars links were automated, Starbird said.
“It was amplified by an army of bots,” she said. Of the tweets she collected, “probably 80 percent, maybe even 90 percent were accounts that sent only one tweet that I captured, it pointed to Infowars, it was usually a retweet of another account.” This was evidence, she said, of a “pretty unsophisticated bot.”
Bots played a much bigger role in boosting another site — so much so that Starbird removed it from the graph because it was so inflated. TheRealStrategy.com “coordinated hundreds of accounts that tweeted content related to several different alternative narratives from these events and others,” she writes in an article about her research.
These bots were much more sophisticated and looked more like actual social media users.
“We think they were borrowing a set of accounts or leasing them,” Starbird said, “where certain accounts all of a sudden changed their profiles to become part of this botnet for a set period of time, and then they go back and later they’re tweeting about something else for somebody else.” In his analysis, Chen noted that the Russian disinformation agency was also selectively loyal. While many of the users he was following had stopped tweeting by the middle of 2016, “some continued,” he writes, “and toward the end of last year I noticed something interesting: many had begun to promote right-wing news outlets, portraying themselves as conservative voters who were, increasingly, fans of Donald Trump.”
The central challenge Starbird encountered was in determining “which properties are emergent and which properties are orchestrated” — that is, which parts of a network of conspiracy theorists are a function of natural skepticism and information-sharing, and which parts are bolstered by automation and/or coordinated efforts to promote a particular idea. Starbird’s study, limited in scope, couldn’t suss out where that boundary might lie.
Her research, though, did reveal a common theme from 2012 — when she analyzed the Deepwater Horizon spill data — through 2016: a group of sites, focused mainly on opposition to “globalist” and corporate hegemony, that trafficked in alternative explanations for the world around them. These themes (and that strategy) have been echoed by Trump and his team.
It also demonstrated to her the extent to which these sites and this network power an alternative set of arguments in American politics, bot-driven or not.
“I don’t know how far these ideas are echoing,” she said. “It just became clear to me that they weren’t as marginal as I originally thought they were.”