This arrangement benefits Facebook, a social-networking company that profits from user engagement. It also benefits politicians when their aims include fundraising and motivating their base. But it may undermine public debate, the research suggests, and make it more difficult for denizens of the digital world to be well-informed citizens of a flesh-and-blood democracy.
When the researchers sought to force Facebook to show posts to users not already aligned with the ideology of the advertising, the cost of the advertising rose. The effect is that a campaign pays more when it tries to speak to the other side, say the researchers, who are computer scientists at Northeastern University and the University of Southern California, as well as the managing director of the technology nonprofit organization Upturn.
The researchers spent more than $13,000 on a set of advertising campaigns designed to test how Facebook promotes political messaging. The partisan skew appeared in their experiment, the authors stress, even though all ads were run from the same account and at the same time. The ads all focused on the same audiences and used the same “goal, bidding strategy, and budget.”
By slotting liberal ads to liberal users and conservative ads to conservative users, the study warns, Facebook is “wielding significant power over political discourse through its ad delivery algorithms without public accountability or scrutiny.”
Facebook disputed the novelty of the conclusions.
“Findings showing that ads about a presidential candidate are being delivered to people in their political party should not come as a surprise,” said Joe Osborne, a spokesman for the company. “Ads should be relevant to the people who see them. It’s always the case that campaigns can reach the audiences they want with the right targeting, objective and spend.”
Facebook’s delivery decisions rely on artificial intelligence, vaunted as a revolution in commercial advertising. In the context of politics, however, an algorithm’s determination of relevance “has the potential to be quite destructive,” said one of the study’s authors, Aleksandra Korolova, an assistant professor of computer science at USC.
Eli Pariser, the Internet activist who coined the term “filter bubble” to describe the online cocoon created by algorithms that feed users information aligned with their existing worldview, echoed that concern. The research reveals how digital advertising — not just on Facebook but on peer platforms as well — “fragments political discourse,” he said, and shortchanges users, who are not clued in on how tech giants are categorizing their political views.
“Advertising that performs the best is most targeted to the core base, which, if you’re a soap or pillow company, doesn’t matter, but if you’re a political candidate, it has some distortive effects,” Pariser said. “Even if you’re trying to reach people who disagree with you, or reach across the aisle, that is increasingly difficult and expensive.”
Daniel Kreiss, a professor of political communication at the University of North Carolina at Chapel Hill who was not involved in the research, said the findings are “on point.” The paper breaks new ground, he added, in documenting the gulf “between who campaigns target and who is actually shown an ad.”
The findings arrive at a moment of rapid transformation in political advertising, which has spurred debates about bias, polarization and privacy — even as the digital infrastructure of promotion technologies remains poorly understood.
Research has shown that optimizing ad delivery for user engagement can skew results based on race and gender. But the increasing prominence of digital advertising in politics makes the new results especially noteworthy. As recently as 2010, just 0.2 percent of spending on political marketing went to online advertising, according to the paper. More than a quarter is expected to be funneled online in 2020, the authors note.
When campaigns relied on television and newspaper advertising, their messaging was bound to reach a mixed audience of different races, income levels and ideologies. Now, digital platforms have handed politicians precise methods of identifying and targeting would-be supporters — zeroing in on particular demographics or uploading voter data to hunt for potential matches. They are also able to choose different objectives, such as displaying the ad to the largest number of users, which is the option used in the study.
The use of these technologies in 2016, including their manipulation by Russian actors as part of the Kremlin’s sweeping disinformation campaign, caused public outcry. It also touched off a raft of legislation — which has languished in the GOP-controlled Senate — and a flurry of research by scholars, who have raised alarm about the lack of transparency and access to the data needed to scrutinize platforms like Facebook. The company, meanwhile, has strained to balance the sort of data sharing that enables academic research with its promise to protect the privacy of its users.
Part of what the new research shows is that any move to curtail targeting opportunities offered to advertisers may have only limited impact, given the powerful targeting baked into Facebook’s delivery mechanism — in a way that’s not fully visible to advertisers.
“Our work shows that just getting rid of microtargeting isn’t going to solve the problem,” said Alan Mislove, one of the paper’s authors and a computer scientist at Northeastern. “Facebook will deliver the ad to the subset of users that Facebook thinks is interested.”
As the paper observes, campaigns purchasing political ads to target a geographic area might expect to reach users whose political views are roughly representative of opinions in the region. “To discover otherwise would require careful research,” the paper warns, because the precise political segmentation performed by Facebook is opaque not just to the public at large but to the campaigns that are investing resources on the platform.
The reason is that Facebook provides only limited information to advertisers about how their messaging is performing: the number of times an ad was shown and the number of users who saw it, for example, broken down by gender, age and location. The company does not reveal how delivery was shaped by its own predictions of users’ interest in political content.
But the researchers gained insight into how political leaning affected ad performance by creating a set of audiences — one set divided by party using public records in North Carolina and another by attributes provided by Facebook, including “Likely engagement with US political content (Conservative)” and “Likely engagement with US political content (Liberal).”
They used ads for President Trump and for a potential Democratic rival, Sen. Bernie Sanders (I-Vt.), in most cases repurposing messaging created by the two campaigns. They also had a neutral ad that simply encouraged users to vote. How the ads played in the distinct groups sheds light on how Facebook seeks to match messaging to a friendly audience.
The ads for Trump were shown disproportionately to Republicans, while the ads for Sanders were shown disproportionately to Democrats. The neutral ads were shown more evenly. Meanwhile, the paper notes, “the only difference between them is the content and destination link of the ad.”
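The skew the researchers describe can be thought of as a simple split of impressions across two equal-sized partisan audiences. The sketch below illustrates that idea only; the counts and the `alignment_fraction` helper are invented for illustration and are not figures or code from the study.

```python
# Hypothetical illustration (counts are invented, not from the study):
# how unevenly an ad's impressions split across two equal-sized
# partisan audiences.

def alignment_fraction(liberal_impressions, conservative_impressions):
    """Fraction of all impressions delivered to the liberal audience."""
    total = liberal_impressions + conservative_impressions
    return liberal_impressions / total

# Invented impression counts for three ads shown to equal-sized audiences.
ads = {
    "sanders": (7_200, 2_800),   # skews toward the liberal audience
    "trump":   (3_100, 6_900),   # skews toward the conservative audience
    "neutral": (5_050, 4_950),   # delivered roughly evenly
}

for name, (lib, con) in ads.items():
    print(f"{name}: {alignment_fraction(lib, con):.3f} to liberal audience")
```

A fraction near 0.5 means even delivery; values well above or below it correspond to the partisan skew the paper documents for the candidate ads.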
That final point is crucial. Facebook’s predictive technologies are so thorough that they effectively slotted the ads according to partisan alignment even when the post was neutral. The content of the linked website alone — either Trump’s campaign or Sanders’s — was sufficient to send ads to different audiences. To the authors, this effect illustrated that Facebook’s delivery decisions rest not on the requests of the campaigns buying them or on the live reactions of Facebook users, but at least in part on the platform’s priorities.
The company hints at this approach in documentation for advertisers, saying Facebook opts to “subsidize relevant ads in auctions, so more relevant ads often cost less and see more results.”
But how Facebook defines relevance, especially in the context of politics, is less clear, and that opacity alarms experts. Ad placement on other platforms, such as Google, relies on similar relevance signals.
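The “subsidy” the documentation describes is characteristic of quality-weighted auctions, in which an ad’s monetary bid is multiplied by a relevance score before ranking. The sketch below shows that general mechanism only; the scoring formula and numbers are assumptions for illustration, not Facebook’s actual auction.

```python
# A generic sketch of a relevance-weighted ad auction. The formula
# (bid * relevance) is an illustrative assumption, not Facebook's
# actual mechanism.

def auction_winner(bids):
    """bids maps an advertiser name to (monetary_bid, relevance_score).
    Rank by bid * relevance, so a more relevant ad can win with a
    lower monetary bid -- the 'subsidy' effect."""
    return max(bids, key=lambda ad: bids[ad][0] * bids[ad][1])

# A highly relevant ad beats a higher monetary bid.
bids = {
    "aligned_ad":    (1.00, 0.9),  # effective score 0.90
    "misaligned_ad": (1.50, 0.4),  # effective score 0.60
}
print(auction_winner(bids))
```

Under this kind of scoring, an ad the platform judges less relevant to a user must bid more to win the same impression, which is consistent with the researchers’ finding that reaching the other side costs more.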
“If a voter is being asked to vote on a ballot measure or candidate, ads from both sides are equally relevant to them — but there is probably one side they ‘like’ more,” said Laura Edelson, a researcher at New York University’s Tandon School of Engineering who was not involved in the research but praised its results. “This is a classic example of how a system designed for commercial advertising just doesn’t work for political advertising.”
A possible limitation of the findings, Edelson said, is whether they would apply to lesser-known candidates, whose messaging may be more difficult to categorize. Other researchers noted that higher budgets would also shift the results. The paper’s authors were surprised that an additional experiment using political donors to Trump and Sanders didn’t result in a similar skew, hypothesizing that Facebook lacked sufficient information about these specific users to know which ads they would prefer.
The authors stress that the delivery of Facebook ads may have negative implications for users and candidates alike, calling into question the effectiveness of a medium that is quickly becoming a staple of modern campaigning — and a sign of digital savvy — even though its functioning remains something of a black box. Light shed on the system’s internal workings by the new paper suggests that Facebook’s algorithm limits exposure to diverse viewpoints, while creating barriers for candidates who aim to reach across the aisle.
“Put simply, Facebook is making decisions about which ads to show to which users based on its own priorities,” the authors conclude, meaning “user engagement with or value for the platform.”