Shortly after last week’s presidential debate, people started to argue on Twitter that Donald Trump, not Hillary Clinton, had won the debate. The #TrumpWon hashtag made its way to the top of Twitter’s worldwide trending topics, demonstrating how many people cared about the results of the debate and wanted to argue that their candidate had won. Trump gleefully announced his status as No. 1 on Twitter trends, seeming to imply that this was evidence that he had won the confrontation.
Some people argued that the hashtag had started in Russia and was being spread by foreign actors who want Trump to win the election and Clinton to lose. I took a deep dive into the actual data on how the meme spread on Twitter. The data tells us that the Trump meme did not start in Russia. Instead, it was spread by a variety of conservative online communities.
The #TrumpWon meme did not start in Russia.
After the #TrumpWon hashtag became popular, some people claimed that it had started in Russia. Some critics of Trump believe that Russian President Vladimir Putin has deliberately tried to help Trump win the election through hacking and other online dirty tricks. At first, there appeared to be some possible evidence to support this claim.
To understand the evidence, you first have to understand how Twitter does and does not make information available about where its users are located. While most Twitter users don’t share their location publicly, Twitter often knows who they are and where they are tweeting from. It can use the IP addresses (Internet addresses) associated with their computer or phone to figure out their location, and it can prepare lists of trending topics that highlight top words, phrases and hashtags from cities and countries around the world. An independent service, TrendsMap, uses this data to track and aggregate trending topics across the world. One source (@DustinGiebel) claimed that he grabbed a map from TrendsMap that suggested that the #TrumpWon trending topic had started in St. Petersburg. The image was shared over 14,000 times but was later debunked by a number of sources, including Philip Bump at The Washington Post.
Using new techniques, it’s possible to figure out what did actually happen.
Even though this claim turned out to be wrong, it points to a real question: how actors use social media to gain attention. Last year, John Borthwick and I published a lengthy analysis of “media hacking” — how players are finding smarter and more elaborate ways to gain attention in online media. We described the importance of people’s positions in broader networks of actors and talked about how to identify the underlying agendas that might be driving particular communities. Over the past year, we’ve been building a new network-based data product — Scale Model — which has recently been launched out of Betaworks. I decided to use it to figure out how the #TrumpWon hashtag spread. To find out more about how I did my analysis, go here.
What I found was interesting. First, the hashtag never shows up as a trending topic in any Russian city. The diagram below shows how the hashtag started trending in different parts of the United States and the world, breaking up the period during which it spread into 10-minute intervals (each plus sign represents a point in time when the hashtag was trending in a particular city). Looking at this data, it is clear the trend began in Baltimore and Detroit, but very quickly jumped to “Worldwide” status and pretty much stayed there for a few hours as it began to spread across Australia and the United Kingdom.
The hashtag reaches the worldwide trending topics list — which looks at users’ IP addresses as well as the novelty of the hashtag and the extent to which its spread is accelerating — by 7 a.m. Eastern time. The timeline below shows how many people wrote #TrumpWon posts on Twitter over more or less the same period.
This is an unusual pattern of spread. Typically, a trend has to jump from one city to another before reaching countrywide or worldwide status. Here, instead, it appears that a group of highly organized users all posted the exact same message at around the same time, from (seemingly) different geographic locations. That message was published by thousands of accounts, probably all over the world, a few hours after the debate had ended (between 3 and 5 a.m. UTC).
This is the piece of content that was shared:
By the time people woke up on the East Coast of the United States, Twitter timelines were filled with these prompts to tweet #TrumpWon — probably generating enough acceleration for the hashtag to reach worldwide trend status.
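The coordinated-posting pattern described above can be sketched in a few lines of Python: flag any message that many distinct accounts posted verbatim within a short window. This is a minimal illustration, not the actual Scale Model pipeline; the sample tweets, the field layout and the 10-minute window are all assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical tweet records: (user, text, timestamp). Invented for illustration.
tweets = [
    ("userA", "RT if you think #TrumpWon", datetime(2016, 9, 27, 3, 10)),
    ("userB", "RT if you think #TrumpWon", datetime(2016, 9, 27, 3, 12)),
    ("userC", "RT if you think #TrumpWon", datetime(2016, 9, 27, 3, 15)),
    ("userD", "I watched the whole debate", datetime(2016, 9, 27, 3, 14)),
]

def coordinated_bursts(tweets, window=timedelta(minutes=10), min_accounts=3):
    """Flag identical messages posted by many distinct accounts within `window`."""
    by_text = defaultdict(list)
    for user, text, ts in tweets:
        by_text[text].append((ts, user))
    bursts = []
    for text, posts in by_text.items():
        posts.sort()
        times = [ts for ts, _ in posts]
        users = {u for _, u in posts}
        if len(users) >= min_accounts and times[-1] - times[0] <= window:
            bursts.append(text)
    return bursts

print(coordinated_bursts(tweets))  # → ['RT if you think #TrumpWon']
```

Organic spread would show the same text trickling in over hours from loosely connected accounts; the tight time window and high distinct-account count together are what make a burst look coordinated.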
So who were the people who started the trend?
The first important question to ask is: Were the people who started the trend real? There are a lot of “bots” on Twitter — automated accounts with fake names that are used to robotically pump out standardized tweets to generate the illusion of support for a particular cause or argument. One way to spot bots is to look at the platforms that are used to write tweets. Because bots are automated, a specific bot network will often use just one platform (Android, iPad or whatever), to pump out tweets. If we look at the people who wrote #TrumpWon tweets, we see that they use a reasonably wide variety of platforms. This suggests, although it does not prove, that they are real people.
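The platform heuristic can be made concrete with a small sketch: measure how evenly a set of tweets is spread across posting clients. A single-client network looks bot-like; a broad mix looks more human. The client strings below are illustrative assumptions, and real analysis would read each tweet's client metadata rather than a hand-built list.

```python
from collections import Counter
from math import log2

def client_diversity(sources):
    """Normalized Shannon entropy of tweet-client usage: 0.0 for a single
    client (bot-like), approaching 1.0 for an even mix (more human-like)."""
    counts = Counter(sources)
    if len(counts) <= 1:
        return 0.0
    total = sum(counts.values())
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return h / log2(len(counts))  # divide by max entropy to normalize to [0, 1]

# Invented examples: a one-client bot network vs. a mixed human crowd.
bot_network = ["TweetBotX"] * 50
mixed_crowd = (["Twitter for iPhone"] * 20 + ["Twitter for Android"] * 15
               + ["Twitter Web Client"] * 15)

print(client_diversity(bot_network))  # → 0.0
print(client_diversity(mixed_crowd))  # close to 1.0
```

As the article notes, high diversity suggests but does not prove real users: a sophisticated bot operator could randomize client strings, so this is one signal among several.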
Given that the evidence suggests that they are real, who are they? I used Scale Model to map out the community of users who were first to post to the hashtag. The diagram below shows the network between these users, and their connections (who follows whom). The bigger the name of a user, the more important they are within this community.
Each color represents a different “cohort” or densely interconnected subgrouping within the bigger community. I then tried to figure out more about each cohort by looking at how their previous tweets used specific words or phrases. For example, different groups of users might use different catchphrases, such as #MAGA (Make America Great Again), Trump, #Trump2016 and #TGDN (a tag used by conservatives to identify each other as a mechanism to boost follower numbers). Many of these accounts have words such as God, America, family, proud, wife, mother, father and veteran in their bios.
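To give a rough sense of how cohorts fall out of a follow graph, the sketch below groups accounts into connected components and sizes each user by in-network follower count. Scale Model's actual clustering method is not public, so this stands in only for the general idea; all account names and edges are invented.

```python
from collections import defaultdict

# Hypothetical follow edges (follower, followed); names are invented.
follows = [
    ("alice", "bob"), ("bob", "alice"), ("carol", "bob"),
    ("dave", "erin"), ("erin", "dave"), ("frank", "erin"),
]

def cohorts(edges):
    """Crude cohort finder: connected components of the undirected follow graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:  # depth-first walk to collect one component
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.add(n)
            stack.extend(adj[n] - seen)
        groups.append(group)
    return groups

def importance(edges):
    """Size a user by in-degree: how many in-community accounts follow them."""
    indeg = defaultdict(int)
    for _, followed in edges:
        indeg[followed] += 1
    return dict(indeg)

print(cohorts(follows))     # two groups: {alice, bob, carol} and {dave, erin, frank}
print(importance(follows))  # bob and erin get the biggest names in the diagram
```

Production tools would use modularity-based community detection rather than plain components, but the principle is the same: dense pockets of mutual following define a cohort, and heavily followed accounts within a pocket get drawn large.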
Thus, we can tell that the people who pushed #TrumpWon seem to be real people, belonging to a variety of different cohorts.
What do these people say and believe?
Identifying the community allows us to look at the ways that community members behave on Twitter. What kind of content do they share? Are they active over time, or did they appear out of nowhere?
We can use this data to effectively get into this group’s mind-set — see the world through their lens. If we immerse ourselves in this world of Trump supporters, we learn that they are likely to talk about the following claims:
- Trump apparently has a 4-point lead over Clinton.
- Hillary apparently cheated in the debate by sending hand signals to debate moderator Lester Holt of NBC (notice how active the comments are on this page).
- And Alicia Machado, the former Miss Universe who came up during the debate, was “apparently” accused of threatening to kill a judge and being an accomplice to a murder in Venezuela (dailymail.co.uk).
These users are densely connected to each other. They follow each other at significantly higher rates than the average Twitter user and know who are the “hubs” — the central actors in the network who are able to spread information quickly and widely.
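The claim that these users follow each other at unusually high rates can be quantified as the density of the in-community follow graph, and the “hubs” as the accounts with the most in-network followers. A minimal sketch with invented accounts and edges:

```python
from collections import Counter

# Invented follow edges (follower, followed) among four in-community accounts.
users = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "a"), ("c", "a"), ("d", "a"), ("c", "b")]

def follow_density(edges, users):
    """Fraction of all possible directed follow links that actually exist."""
    members = set(users)
    actual = sum(1 for a, b in edges if a in members and b in members)
    return actual / (len(members) * (len(members) - 1))

def hubs(edges, k=2):
    """Approximate hubs: the accounts with the most in-network followers."""
    indeg = Counter(followed for _, followed in edges)
    return [u for u, _ in indeg.most_common(k)]

print(follow_density(edges, users))  # 5 of 12 possible links exist, far above typical
print(hubs(edges))                   # 'a' is the dominant hub here
```

Comparing a community's density against a random sample of Twitter users is what makes “significantly higher rates than average” a measurable statement rather than an impression.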
There’s also continuous reinforcement of the dominant hubs within the network:
And active efforts to push content worth spreading:
What about Trump skeptics?
Of course, we can do just the same thing for people who are willing to believe the worst of Trump. If we use Scale Model to model the followers of @DustinGiebel, the account that tried to spread the erroneous information about Russia’s involvement, we are presented with a parallel world of partisan beliefs — this time anti-Trump.
In this case, user cohorts are labeled #NeverTrump and #ImWithHer. Their top shared images and links include the following:
- Trump apparently didn’t pay for $100,000 worth of pianos. (Washington Post)
- A company controlled by him apparently conducted secret business in Cuba during Fidel Castro’s presidency. (MSNBC)
- A post by a self-proclaimed “Bernie Bro”-turned Hillary supporter. (aplus.com)
- And Trump denied that he said he didn’t pay taxes, right next to him making that statement in the debate. (CNN)
Obviously, there is a very different vibe. It is unclear who exactly photoshopped that image to make it seem as though there was a Trump-Russia connection. What we do see, just as with the #TrumpWon community, is a highly organized group of interconnected accounts, dedicated to making their agenda as visible as possible.
What does this mean?
This kind of data analysis obviously doesn’t tell you who is right or wrong, or who you should support in the presidential election. What it does do is illuminate the ways in which different groups of supporters behave online as they vie with each other for public attention. Trending topics are a valuable political tool because many people directly engaged with political questions use Twitter. When a topic trends, it can cut across information silos, gaining significant amounts of attention from influential people who would otherwise never see a piece of content or be exposed to a particular person or group’s point of view.
From a partisan perspective, people who win attention on Twitter can reap real rewards in setting the agenda and making it look like many people support a particular candidate, cause or interpretation of events. Yet this also means that false information can spread like wildfire, especially when there are enough people invested in making it seem true.
Gilad Lotan is chief data scientist at Betaworks and a co-founder of Scale Model. He can be found on Twitter at @gilgul.