After it discovered that Russians had used its platform in an effort to sow discord in U.S. politics, Twitter released a massive trove of data about the accounts the Russians used and tweets they sent or retweeted. A researcher for the security and software firm Symantec dug into the data, building an understanding of how the effort worked and the extent of its reach.
From the standpoint of the 2016 election, however, the findings are significant in what they don’t show. The research shows that the accounts associated with the effort sent more than 770,000 tweets or retweets from January 2016 through the election, about 400,000 of which were original tweets. In 2018, Twitter users sent 320 million tweets a day, meaning that the original tweets sent by Russians would have constituted about a tenth of 1 percent of a day’s tweets.
The most retweeted account, TEN_GOP, was retweeted 6 million times over its existence, including by prominent conservatives such as Donald Trump Jr. Those retweets would have made up less than 2 percent of one day’s Twitter activity. As with the Russian effort on Facebook, there is not much evidence that the push made a significant dent in boosting Donald Trump’s candidacy — especially given the more limited ability to target voters over Twitter.
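The percentages above follow from simple division against Twitter's 2018 daily volume. As a quick sanity check using only the figures quoted in this piece (no outside numbers assumed):

```python
# Figures as quoted above: Symantec's tweet counts and Twitter's
# 2018 daily activity.
daily_tweets = 320_000_000      # tweets sent per day on Twitter in 2018
original_russian = 400_000      # original tweets by the accounts, Jan. 2016 through the election
ten_gop_retweets = 6_000_000    # lifetime retweets of the TEN_GOP account

# Original Russian tweets as a share of a single day's tweets
share_original = original_russian / daily_tweets

# TEN_GOP's lifetime retweets measured against one day's activity
share_ten_gop = ten_gop_retweets / daily_tweets

print(f"Original tweets vs. one day: {share_original:.3%}")
print(f"TEN_GOP retweets vs. one day: {share_ten_gop:.3%}")
```

The first ratio works out to 0.125 percent, i.e., about a tenth of 1 percent; the second to 1.875 percent, i.e., under 2 percent, matching the figures in the text.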
That wasn’t the most interesting part of what Symantec found, though. The most interesting finding was that some of the accounts used a link-shortener that included display ads when clicked — allowing the Russian trolls to make ad money while they tried to sow discord in American politics.
“In total, we found 13 different accounts using monetized URL shorteners,” the Symantec report reads. “One account stands out as making a substantial income from its tweets. The user handle was blanked-out by Twitter since it only had 4,123 followers, but it masqueraded as a pro-Trump political account.” Despite the limited number of followers, the account “may have generated an income of almost $1 million if each of its followers clicked on a link just once ($949,890). The account was also retweeted 8,362 times which may have resulted in even more clicked links.”
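One way to see why that estimate invites scrutiny is to divide the report's own stated total by the account's follower count, which yields the revenue per follower-click the figure implies (a quick check using only the report's numbers):

```python
# Figures quoted from the Symantec report
estimated_income = 949_890   # dollars, if each follower clicked a link once
followers = 4_123            # the account's follower count

# Revenue per follower-click implied by that assumption
per_click = estimated_income / followers
print(f"Implied payout: ${per_click:.2f} per click")
```

That comes to roughly $230 per click, far beyond what ad-supported link shorteners typically pay.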
That’s remarkable (though the calculation of how much might have been earned is suspect). It is also not surprising.
During the 2016 campaign, I profiled a guy in Albany, N.Y., who ran a fake-news website called Prntly. Why “Prntly”? Because the site started as the website for his print-and-design company, until he realized that he could make a ton of money by hyping or making up news stories about Trump and plastering those stories with ads.
He wasn’t alone. Shortly before the election, BuzzFeed News looked at a group of teenagers in Macedonia who were running pro-Trump websites littered with made-up stories. Some had “experimented with left-leaning or pro-Bernie Sanders content,” the BuzzFeed article noted, “but nothing performed as well on Facebook as Trump content.”
Facebook was central. It connected these Macedonian kids with the audience for their plagiarized or made-up articles. That’s the social network’s value proposition, of course: it can connect buyers and sellers. Its power was so massive that even before the 2016 election, the New York Times detailed the extensive, mostly right-wing “political-media machine” on the network. The Prntly guy had success on Twitter, too: Trump himself tweeted Prntly articles on multiple occasions.
It’s not new that political content — particularly conservative content — is fertile ground for scammers. In 2012, political action committees made money by tricking visitors to websites into thinking they were contributing directly to candidates. Other PACs were more direct in their attempts to separate contributors from their money, claiming that they would spend heavily on political activity but instead spending most of the money on their own salaries. This never died out: Trump’s campaign took the unusual step last month of indirectly criticizing a longtime ally of the president for similar behavior.
What’s new (relatively speaking) is how social media networks have become integrated into the effort to share and monetize conservative and even more right-wing content — and, specifically, how they’ve been pushed to defend outlier behavior as a result.
This week, a video journalist named Carlos Maza described on Twitter how he’d been targeted by conservative commentator Steven Crowder. Crowder repeatedly disparaged Maza’s sexuality and race, referring to him as, among other things, “Mr. lispy queer from Vox” and “an angry little queer.”
Maza appealed to YouTube to intervene on his behalf, noting that Crowder’s attacks appeared to violate YouTube’s prohibitions against abuse. YouTube, however, declined to take any action against Crowder.
Crowder has 3.8 million subscribers on the platform. His 20 most-viewed videos have been seen a combined 232 million times. He is, it’s safe to say, a YouTube success story — leveraging the network for both his and its benefit.
“As an open platform, it’s crucial for us to allow everyone — from creators to journalists to late-night TV hosts — to express their opinions w/in the scope of our policies,” the site explained in a tweet. “Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site.”
YouTube has taken action against other prominent users in recent months, ousting Infowars’s Alex Jones, for example. As with Crowder, such efforts quickly run into political murkiness.
Crackdowns by social media networks against fake-news purveyors and abusive accounts have led to a backlash on the right, including from Trump himself. Conservatives argue that they’re being systematically targeted for their political beliefs, a claim for which there’s no real evidence. Instead, we’ve seen multiple examples of social media companies trying to uproot falsehood or abuse and stumbling into conservative media and politics.
Facebook scrapped the human curators behind its trending-news section after accusations that they had suppressed conservative news sites spurred a backlash. Twitter came under fire for “shadow banning” conservatives when, in reality, it was implementing new tools that punished accounts believed to be unusually abusive.
How blurry is the line? A Twitter employee told Vice News in April that efforts to uproot white supremacy on the site ran into problems because engineers were concerned that conservative politicians would trigger the algorithm. Less than two months later, Laura Ingraham lamented on her Fox News show that Twitter had banned Paul Nehlen, a onetime Republican candidate for Congress who has since openly used white supremacist and anti-Semitic rhetoric.
These social media companies are incredibly powerful at merging political conversations with economic incentives, including their own. They're built to make money by connecting buyers and sellers — including of noxious products.
That creates the struggles we see now: how to address obvious homophobia from a guy defended by a large conservative community, or foreign actors who try to influence American politics while making a few rubles on the side.
But, again: Why could those Russians make money? Because their pro-Trump rhetoric (which is what the most successful account was peddling) sold.
Trump calls social media companies biased because he derives power from positioning himself and his supporters as oppressed. Other conservatives make the same claim because they want to make the companies defensive about policing questionable behavior. Those efforts have been effective, as the YouTube/Crowder and Twitter/white nationalist incidents suggest.
The happy side effect for the companies, of course, is that they can keep making money from those users.