As Facebook sought to recover from its disastrous 2016 election season, company officials debated ways to curb distortions and disinformation on the platform. One of the most potentially powerful — limiting advertisers’ ability to target narrow slices of voters with political messages — struggled to find support and was abandoned, say people familiar with those discussions.
But today, as disinformation begins to spread ahead of the 2020 presidential vote, Facebook again is discussing “microtargeting” and weighing whether to restrict a set of advertising tools so powerful that, critics say, it may threaten democracy itself.
Though political advertising is a relatively small source of revenue for Facebook, which took in nearly $56 billion overall last year, the stakes are high because many politicians, including President Trump, are avid users of its microtargeting tools. These include the ability to tailor messages to lists of individual voters or to small groups based on characteristics such as age, education, Zip code, income, relationship status, interests, political leanings or combinations of the above. Politicians and operatives from both major parties have reacted angrily to proposals to curb these powers.
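Conceptually, the targeting described above amounts to intersecting filters over user attributes. A minimal, hypothetical sketch (all field names and sample data are invented for illustration, not drawn from any real ad platform):

```python
# Hypothetical sketch of how targeting criteria combine as filters.
# Field names and sample data are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Profile:
    age: int
    zip_code: str
    income: int
    interests: set = field(default_factory=set)

def matches(p, *, min_age, max_age, zip_codes, min_income, interest):
    """Return True if a profile falls inside a narrowly targeted segment."""
    return (min_age <= p.age <= max_age
            and p.zip_code in zip_codes
            and p.income >= min_income
            and interest in p.interests)

audience = [
    Profile(34, "78701", 52000, {"veterans", "immigration"}),
    Profile(61, "19104", 48000, {"healthcare"}),
]
segment = [p for p in audience
           if matches(p, min_age=25, max_age=45, zip_codes={"78701"},
                      min_income=40000, interest="immigration")]
# Only the first profile satisfies every criterion at once.
```

Each added criterion shrinks the segment, which is what makes the resulting messages so narrow and so hard for outsiders to observe.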
Facebook has been reluctant to meddle with this profit-making machinery, and even now its officials are downplaying talk of major changes as it discusses what new restrictions ― if any ― to announce. Nick Clegg, Facebook’s head of global affairs and communications, told reporters last week in Brussels that the company would not alter the “fundamental architecture of our approach” to political advertising.
But pressure to change the rules on microtargeting is being applied to Facebook from several directions.
Disinformation experts and researchers say limiting or banning political microtargeting would help slow the spread of falsehoods online by exposing problematic ads — by politicians or political interest groups — to a wider audience, where inaccuracies or outright lies would be more likely to be noticed and exposed.
Actions by rival tech companies also are increasing the pressure. Last month, Twitter announced it was banning ads by political candidates, and Google sharply limited microtargeting for political campaigns, adding political ads to its long list of restricted advertising topics, which already includes gambling, alcohol consumption and medical treatments.
Some U.S. officials also have spoken out against microtargeting of political ads. Federal Election Commission Chairwoman Ellen L. Weintraub has repeatedly warned about microtargeting, including in a Washington Post opinion piece last month in which she called the practice “a potent weapon for spreading disinformation and sowing discord.”
Sen. Ron Wyden (D-Ore.), a longtime advocate of Internet freedoms, called for restrictions on microtargeting in October in an addendum to a Senate Intelligence Committee report detailing Russian interference in the 2016 campaign. Wyden issued a statement last week reiterating his call. “Now that Google and Twitter have taken responsible steps to guard against shadowy political influence campaigns, Facebook should do the same, rather than continuing to chase political advertising dollars,” he said.
Facebook spokesman Kevin McAlister acknowledged that Facebook is discussing how it handles political ads. “We are looking at different ways we might refine our approach to political ads. We’re continuing to collect feedback and are considering the options,” he said.
Response to 2016 Russian meddling
Facebook began its efforts to address rampant falsehoods on its platform soon after the 2016 election and intensified them the following year, when it reported that the Internet Research Agency, a Russia-based troll farm that mounted a major online influence operation during the U.S. presidential campaign, had bought ads using Facebook’s targeting tools.
That prompted congressional hearings and intense debate within Facebook over how to address the increasingly obvious problems on the platform. The company gradually adopted several remedial actions, including expanding teams to combat disinformation campaigns, creating an online archive so outsiders could monitor political ads and establishing systems for verifying the identities of ad purchasers.
Limits on microtargeting did not make the cut, and discussions about the idea quieted. But the backlash over Facebook’s political ads policy, which in September officially exempted the claims of politicians from the company’s network of outside fact-checkers, prompted renewed interest in other ways to curb disinformation and jump-started the conversation on microtargeting.
Amid mounting external pressure, an October employee letter to chief executive Mark Zuckerberg, which said the political ads policy “allows politicians to weaponize our platform,” also called for limits on microtargeting, according to a New York Times report that underscored the internal unrest on the issue.
Still, the changes Facebook ultimately undertakes are likely to be far less sweeping than the ones imposed by Twitter or Google, say people familiar with internal discussions who spoke on the condition of anonymity.
One leading idea under discussion is setting a larger minimum size for microtargeted groups. The standard minimum audience of 100 people, for example, could be raised to 1,000 or 10,000 for political ads, increasing their transparency somewhat, say those familiar with the discussions.
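The proposal amounts to a simple admission check at ad-creation time. A hypothetical sketch (the threshold figures come from the article; the function and constant names are invented):

```python
# Sketch of the proposed rule: reject political ads whose target audience
# falls below a minimum size. Thresholds are those discussed in the
# article; the code itself is purely illustrative.
STANDARD_MIN_AUDIENCE = 100       # the existing floor for all ads
POLITICAL_MIN_AUDIENCE = 10_000   # one of the proposed political floors

def audience_allowed(audience_size: int, is_political: bool) -> bool:
    """Return True if an ad's target audience meets the minimum size."""
    floor = POLITICAL_MIN_AUDIENCE if is_political else STANDARD_MIN_AUDIENCE
    return audience_size >= floor
```

Under such a rule, an audience of 150 people would still clear the bar for a commercial ad but be rejected for a political one, forcing campaigns to address broader, more observable groups.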
Even that modest change would upset political campaign operatives who have become dependent on microtargeting to communicate messages not only to potential supporters but also to people who are likely to donate to a political campaign.
Microtargeting also is a lucrative staple of corporate advertising, and Facebook has built its business around its remarkable ability to deliver messages tailored to individual Facebook users.
But disinformation experts say that, in the political context, microtargeting opens the door to distortions because no outside observer has a full view of the torrent of tailored messages being delivered. This essential fact of microtargeting undermines the routine scrutiny undergone by television ads or billboards, where any demonstrable falsehood is on display for all to see ― and potentially challenge.
Very narrow targeting, say disinformation experts, fractures political understanding far more severely than the traditional red-blue partisan divide.
“Instead of two Americas, you’re potentially looking at hundreds of thousands of Americas,” said Alex Stamos, the former Facebook security chief, who is now head of the Stanford Internet Observatory, which studies disinformation campaigns. “I don’t think our democracy can survive that kind of Balkanization of the electorate.”
The impact of microtargeting is multiplied by the frenetic pace of modern politics in which a campaign might create thousands of ads a day, each destined for a separate tiny audience.
Microtargeting in many ways
Cambridge Analytica, a former Trump campaign consultancy that was forced out of business by controversy last year, once bragged about its ability to use Facebook data it had obtained to tailor messages with unmatched precision, based on personality types of voters determined via preferences expressed through “likes” and other social media signals. While many political professionals considered such claims overblown, the practice of harnessing social media posts and other personal data to direct advertising remains popular, despite the demise of Cambridge Analytica itself, experts say.
Advertisers can even pursue similar tactics using Facebook’s own microtargeting tools, based on the company’s vast troves of data on its nearly 2.5 billion users, including a large majority of U.S. voters. The sources include what users voluntarily post, what they do elsewhere on the Internet and what commercial data brokers collect about personal characteristics and financial histories.
Facebook also allows microtargeting by letting advertisers upload lists of people — by name, phone number, email address or other identifying characteristics — to create a matched “Custom Audience” of Facebook users. Advertisers that create “Custom Audiences” also can create a secondary list called “Lookalike Audiences” of people with similar characteristics.
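List-based matching of this kind is typically done on normalized, hashed identifiers rather than raw emails. A simplified sketch of the general technique (illustrative only, not Facebook's actual implementation):

```python
# Simplified sketch of list-based audience matching. Platforms generally
# normalize and hash identifiers such as email addresses before matching;
# the details below are illustrative, not any company's real code.
import hashlib

def hash_identifier(email: str) -> str:
    """Normalize an email address and hash it for matching."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def match_custom_audience(uploaded_emails, platform_users):
    """Return hashed identifiers present in both the advertiser's
    uploaded list and the platform's user base."""
    uploaded = {hash_identifier(e) for e in uploaded_emails}
    known = {hash_identifier(e) for e in platform_users}
    return uploaded & known

matched = match_custom_audience(
    ["Voter@example.com", "nobody@example.org"],
    ["voter@example.com", "other@example.net"],
)
# One identifier matches once case and whitespace are normalized.
```

A "Lookalike Audience" would then start from a matched set like this and expand it to other users with statistically similar attributes.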
“No other advertising platform in history comes close to having Facebook’s capabilities,” said Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University. “If there was no Custom Audiences, then we probably wouldn’t be talking about microtargeting right now.”
Those who favor curbing microtargeting say the simplest and most effective approach would be to ban “Custom Audiences” and “Lookalike Audiences” for political ads, but there is not wide support for such dramatic action within Facebook, say people familiar with discussions there. The company has seriously discussed limiting the number of ads a single candidate can run simultaneously, imposing a blackout on political ads in the 72 hours before a vote and raising the minimum number of people that a campaign could target with a particular ad.
Facebook does have an existing list of restrictions that prohibit targeting for housing, employment or credit offers based on race. It expanded these in March to also include restrictions on advertising based on gender, age and Zip code ― which often serve as proxies for race ― to settle civil rights litigation accusing the company of enabling racial discrimination.
The Russians at the Internet Research Agency used Custom Audiences in 2016 for some of their Facebook ad purchases, targeting people who had visited websites and Facebook pages the Russians had created on such hot-button issues as illegal immigration, African American political activism and the rising prominence of Muslims in the United States. But Custom Audiences and microtargeting in general go far beyond such nefarious actors. Their use is routine in political online advertising by campaigns on the right, left and middle.
Trump’s campaign in 2016, for example, produced more than 50,000 ads a day ― many with just slight variations in graphics, wording or colors ― according to Brad Parscale, the campaign’s digital adviser at the time and now the campaign manager of Trump’s reelection effort, who described the operation in a “60 Minutes” interview in 2017.
“Facebook now lets you get to places and places possibly that you would never go with TV ads,” Parscale said in the interview. “Now, I can find, you know, 15 people in the Florida Panhandle that I would never buy a TV commercial for. And, we took opportunities that I think the other side didn’t.”
The result was that an Army veteran in Texas concerned about immigration probably saw ads different from those seen by a Pennsylvania nurse concerned about health care. And, more important, they each likely saw ads that were different from those shown to their neighbors, their friends, even their siblings or spouses, because Facebook has such enormous data stores it can distinguish among people with generally similar views.
The reaction to Google’s ban on microtargeting and reports that Facebook might impose its own restrictions highlighted the political stakes involved.
“Google has made an extraordinarily poor decision which will lead to less-informed voters, lower voter engagement, and voter suppression,” said a joint statement last month from Trump’s reelection campaign and three national Republican groups. “Google should immediately reverse its decision in order to ensure they do not suppress voter turnout during both the Democrat primaries and the 2020 general election.”
The Democratic National Committee sent a letter to Facebook Chief Operating Officer Sheryl Sandberg last month, the day after Google announced its restrictions on microtargeting, urging Facebook not to follow suit and calling instead for more transparency, policy enforcement and fact checking. “Banning political ads or severely inhibiting targeting capabilities on Facebook would not be in our party’s best interest nor in the best interest of promoting voter participation,” said the letter, a copy of which was obtained by The Post.
But the record of Facebook’s third-party fact-checking system, created in 2016 as one remedy to disinformation, is viewed as a mixed success within the technology industry because it has proved impractical for reviewers to keep up with the flood of falsehoods that appear online every day.
Limiting microtargeting wouldn’t directly address false claims, but it could make their circulation easier to spot and challenge.
“We’re used to seeing the ads where the candidate turns to the camera and says, ‘I approved this ad.’ This is so far from that context that it’s hard to comprehend,” said Ashley Boyd, vice president of advocacy for the Mozilla Foundation, a nonprofit group that supports an open and inclusive Internet and has called for curbs on microtargeting.
Tony Romm contributed to this report.