By the end of the 2014 election, campaigns and political committees had directly spent about $8 million on Facebook advertising, less than half the amount they’d spent on Google. Through September of last year, that figure neared $46 million, 50 percent more than what Google took in.
Facebook's push into politics paid off.
That’s only direct spending, excluding spending by political consultants on behalf of candidates or campaigns. In the 2016 campaign alone, Donald Trump’s team spent somewhere around $70 million on Facebook through a digital firm run by Brad Parscale, who is now Trump’s campaign manager. Parscale has been celebrated as something of a wunderkind, having appeared out of nowhere to guide Trump to an unexpected win, but it doesn’t take much looking under the hood to see that this account misses a critical detail. As he explained to “60 Minutes” in 2017, it was actually Facebook that provided the tools — and an on-site consultant — that enabled him to maximize the campaign’s digital efforts. Parscale won the Indianapolis 500, but the car did the work.
Once Trump won, though, his political base turned on Facebook. Conservative activists at the heart of MAGA world, including Donald Trump Jr., decided social media companies broadly were targeting conservatives to stifle their points of view. Despite the lack of any evidence of systemic bias, President Trump embraced the idea, because the argument reinforces his perpetual narrative about facing headwinds from any and all institutions that might offer any friction. And, probably, because Trump actually believes that the companies might be out to get him.
Facebook had bigger problems. The revelation that Russian actors had used the platform to try to influence the outcome of the 2016 race drew new scrutiny to how Facebook enabled engagement in elections. The company had bragged about how its tools could influence election outcomes, conducting an experiment in 2010 that showed how it could increase voter turnout. Here, it seemed, was evidence of a malign actor upending the most significant election the nation holds.
That narrative, which is still common, is incorrect. There’s no evidence that the Russian effort on Facebook altered the outcome of the 2016 race; in fact, there’s little evidence it had much of an effect at all. Facebook did help elect Trump — but by giving Parscale the tools he used, not by Russian actors using those same tools more effectively.
The tools were also used more effectively by scammers, people like the guy in Albany, N.Y., who discovered that the website he’d created to promote his printing business could be more lucratively deployed in making up news stories and polls and promoting them on social media. Trump himself twice shared those untrue stories on Twitter, driving traffic to pages loaded with ads. Kids in Macedonia made money hand over fist by pushing fake information on Facebook, information that broadly served to boost Trump. The politics were incidental; it just happened that backing Trump was more lucrative than backing Hillary Clinton.
What the revelations about Russia and fake news did was draw attention to Facebook’s tools as potential vectors for inappropriate activity and misinformation. Facebook suddenly faced new questions about how powerful its targeting tools were and about its ability to police the content that was being shared. It, like Google and Twitter, engaged in an internal effort to find a balance between ameliorating legitimate concerns and maintaining the network’s core value proposition. That recalibration was buffeted by the ongoing evolution of criticism, like the emergence of tools that allow for the faking of videos. How would Facebook police those videos? To what extent, more broadly, would it allow users to share fake information? What if one of those users happened to be the president of the United States?
On Thursday, Facebook announced it would not significantly revise its rules regarding political advertising. Instead, users will be given the choice of seeing fewer political ads or opting out of targeted marketing — advertisements from campaigns and companies that use the personal information collected by Facebook to identify who should see them.
It’s hard not to assume that part of that decision derived from the close business relationship between the president’s campaign and Facebook. The campaign, which raised half a billion dollars last year across multiple political committees, spends heavily on Facebook ads even now. (One post-2016 change Facebook made was to make political ads publicly viewable in its ad library.) Facebook was pulled in two directions. It was pulled one way by Trump critics who understand the importance of the network to the president and recognize his often casual interest in sharing accurate information. It was pulled the other way by Trump’s team and by supporters of the president who have been convinced that an unfounded bias against conservatives exists.
The Trump campaign’s Tim Murtaugh praised Facebook’s decision not to revise its policies on political advertising.
“Our ads are always accurate, so it’s good that Facebook won’t limit political messages because it encourages more Americans to be involved in the process,” Murtaugh said. “This is much better than the approaches from Twitter and Google, which will lead to voter suppression.”
It's not true that the Trump campaign's ads are always accurate. An ad that ran last year included an unfounded claim about a Ukrainian energy company that's at the heart of the impeachment inquiry into Trump. (“The truth hurts,” Murtaugh replied when former vice president Joe Biden's campaign took issue with the ad.)
It’s tempting to see this struggle within Facebook purely in political terms, particularly if you, like me, spend a lot of time thinking about politics. But it’s bigger than that, as was made clear in a long essay published on Facebook by a company vice president late last year (and offered publicly this week).
In the essay, Andrew Bosworth praises the Trump 2016 effort as “the single best digital ad campaign I’ve ever seen from any advertiser.” He made that point in less effusive language to The Washington Post in 2018, when we looked at the way Trump’s efforts on Facebook better leveraged Facebook’s mechanisms than Clinton’s campaign had — in part, no doubt, a function of that embedded staffer (the on-site consultant has since gone on to work publicly for Trump’s ouster).
Bosworth, who describes himself as a liberal and a Trump opponent, advocates for a position of neutrality by Facebook of the sort that’s reflected in the decision announced Thursday. He likens it to corporate efforts to encourage healthy eating. When Kraft cut sugar, he writes, the company lost market share and its CEO. A new CEO came in, introduced giant Oreos and the company regained its footing.
It’s a nifty analogy, but a bad one. Bosworth dismisses the idea that Facebook should have a stronger hand in policing what it offers by analogizing it to Kraft. Notice, though, who suffered in the anecdote: Kraft, not the customers. And Kraft was in a marketplace where people could still buy sugar elsewhere. What is Facebook’s competition at this point? Bosworth compares Facebook’s power to the ring from “Lord of the Rings,” which, as critics were quick to point out, had to be melted into nonexistence to protect the universe in which it existed.
What’s important to remember is that, for all of the focus on political ads and on Trump’s activity on Facebook, political ads are a small subset of what Facebook does. They are to Facebook what Tang is to Kraft, a thing it does that won’t drive all of its business decisions. If Trump’s campaign spent $70 million on Facebook in 2016, that was out of $26.9 billion the company took in from its ad sales that year. If Trump hadn’t spent a dime, Facebook’s revenue would have plunged to … $26.8 billion. Every single part of the conversation above about the president and about the integrity of elections is, to Facebook, some bad public relations stemming from a small part of what its business does.
Facebook’s value propositions are clear. To users, it’s simplified access to sharing information, generally framed in terms such as “pictures of babies” instead of “here’s an article I saw about how Hillary Clinton is a cannibal.” To advertisers, it’s a unique ability to offer products to people fitting increasingly refined demographic characteristics. This, too, is presented in corporate materials with an emphasis on “connect Kraft with people on diets,” instead of “allow people in Albany to find suckers who will consume nonsense about politics.”
The platform’s particular challenge, though, isn’t the hyperbolic examples above. It’s the less extreme scenarios, such as nuanced efforts to leverage the massive scale of the platform to steadily influence people by using misleading, incomplete or inaccurate information. Or even accurate information. This is something that should prompt concern not only when deployed by political leaders but also, potentially, by corporate entities. Facebook’s core struggle is in defending the extent to which it should be as powerful as it is.
No company has been more effective at leveraging Facebook for its own dominance than Facebook. Its constituency is larger than Trump’s, and its global reach is, in some senses, broader. That’s part of its value proposition, too. It’s powerful enough to perhaps deserve credit for getting a president elected.