The key to effective political communication is to say a particular thing to a particular audience. It’s easy but expensive to say a general thing to a lot of people: That’s an ad on CBS during a prime-time show. It’s more expensive to say a specific thing to a narrower group: That’s an ad targeted to a particular cable channel. And it can be very expensive to make a precise argument to a particular voter.

But it’s not necessarily hard. It used to be that you could do this with a piece of mail, though people wouldn’t necessarily read it. You can do it with volunteers who knock on voters’ doors, but that doesn’t scale well. And then the Internet arrived.

In late 2014, we looked at how Facebook aimed to dominate political advertising by allowing campaigns to have that level of precision at a lower cost. You could target groups by geography and demographic, or you could upload a voter file and find specific voters. The more refined, the more expensive — but still cheaper than sending someone to people’s homes.

The 2016 campaign marked Facebook’s arrival as a political force, though not necessarily in the way the company expected. The Trump campaign invested heavily in Facebook, using the tool to target voters with very specific messages and, it hoped, to spur people to the polls.


Facebook chief executive Mark Zuckerberg. (2010 photo by Robert Galbraith/Reuters)

Or, if needed, away from them. A Bloomberg report last October outlined how the campaign was also using Facebook to suppress votes, for example by showing black voters an ad focused on Hillary Clinton’s past comments about black crime. Black voter turnout was down in 2016, probably in large part because of the Democrat on the ticket. But in an election that gave Donald Trump the White House thanks to 78,000 votes in three states, it’s possible that the targeting of voters on Facebook played a bigger role than expected.

That adds new potency to a Washington Post report on Wednesday afternoon about another group that invested in Facebook ads during the campaign: Russian trolls. From that report:

Facebook officials reported that they traced the ad sales, totaling $100,000, to a Russian “troll farm” with a history of pushing pro-Kremlin propaganda, these people said.

A small portion of the ads, which began in the summer of 2015, directly named Republican nominee Donald Trump and Democrat Hillary Clinton, the people said. Most of the ads focused on pumping politically divisive issues such as gun rights and immigration fears, as well as gay rights and racial discrimination.

(The “troll farm” at issue was apparently this one, profiled by the New York Times in 2015.)

A Facebook official outlined who had been targeted with the ads — “people on Facebook who had expressed interest in subjects explored on those pages, such as LGBT community, black social issues, the Second Amendment, and immigration.” Some 3,300 ads were linked to Russians.

In July, McClatchy reported that investigators looking at Russian interference in the 2016 election and any connection to the Trump campaign were looking at whether someone directed the Russian activity toward particular voters. That is, did the Trump campaign, for example, tip off someone to populations that might be susceptible to targeted advertising?

The comment from Facebook doesn’t suggest that’s the case, for three reasons. First, the ad buy was fairly limited in scope; $100,000 over 18 months isn’t very much. Second, the ads began in 2015, suggesting that this wasn’t necessarily tightly tied to Trump. Third, the Facebook official said that the targeting was done by generic interests — which doesn’t exclude the possibility of more specific advertisements being purchased but suggests a lower level of sophistication than targeting a voter file, for example. In other words, the ads appear to have targeted people who clicked the “like” button on certain pages, not, it seems, people who someone had identified as likely voters in the 2016 election.

What’s worth pulling out here is that we’re learning about this investment only in September 2017, 10 months after the election. After the election, facing criticism for its role in spreading rampant misinformation, Facebook set out to uproot problematic content and review what happened in the campaign. That review led to this discovery.

Murky political advertising is far from eradicated on the platform. On Tuesday, my feed showed me this post, about an incumbent city council member in the neighborhood in which I live.


The post had no link. It was just this image and this text. There was no way to learn anything more about the ad until I noticed, tucked in the corner, a small disclaimer: “Paid for by Fox 2017,” naming one of the politician’s opponents. (A representative of that campaign confirmed that it had paid for the ad.) On my phone, that tagline was all but invisible; it was only when I transferred the screenshot to my laptop to send to Facebook for this article that I spotted it.

It’s not clear whether this ad is legal in the city of New York. (The Board of Elections didn’t return several calls.) But the rules for campaign materials mandate a review by the city for “all broadcast, cable or satellite schedules and scripts, internet, print and other types of advertisements” that are “published to 500 or more members of a general public audience.” On Facebook, it’s trivial to slip in under that bar, particularly if you’re targeting registered voters. Facebook can’t possibly police the legality of political ads on its site, of course, since the laws vary widely by jurisdiction. (A spokesman for the company told me in an email: “We require ads to follow our ad policies, which includes following local laws. If a recognized authority body reported that the ad is out of compliance, we would remove it.”)

Two questions arise in the context of the Russia investigation. The first: How many other people saw the ads, and who were they? This is the key question for the investigation into Russian interference. If the ads were shown only to African Americans in Detroit who were likely to vote, for example, the implications are very different than if they were shown to anyone in the United States who had “liked” the NAACP.

The other question is how many people would bother to find out who was behind a negative ad like the one I was shown. It’s very easy for ads like this to slip through the cracks. If I hadn’t captured a screenshot of the ad, it would be hard to demonstrate that I’d even seen it, making it hard for outside observers (and campaign finance authorities) to track such advertising. It’s this dark corner of the world’s most popular social network that campaigns are happy to exploit.

Campaigns — and Russian trolls, it seems. Facebook won’t reveal details of what those 2016 ads showed, citing its “data policy and federal law,” and we don’t know who was targeted with them. It’s clear, though, that those looking to influence American politics have embraced Facebook’s tools — and not always in the way that one might hope.