In the meantime, however, corporate America, while far from perfect, has actually become more responsive to public opinion. It has come to appreciate (as we saw in the gun debate after the shooting in Parkland, Fla.) that consumers do not want to patronize companies they perceive as acting in antisocial or irresponsible ways. They might not, for example, want to shop at stores that sell guns or finance gun sales. More recently, on the racial justice front, the New York Times reports that “companies like Nike, Twitter and Citigroup have aligned themselves with the Black Lives Matter movement.” Netflix is committing $100 million to support African American communities. (Others are making less significant gestures, raising questions about the depth of their commitment.)
Now major corporations are undertaking an effort to change how tech companies respond to hate speech. A widespread advertising boycott of Facebook has shamed the platform’s chief executive, Mark Zuckerberg, in a way that lawmakers have failed to do. Per the Times: “Marketing giants like Unilever, Coca-Cola and Pfizer announced that they were pausing their Facebook advertising. That outcry has grown, hitting the company’s wallet.” By refusing to be associated with content that is racist or poses a threat to our democracy, such corporations have forced Facebook to take some initial steps, such as agreeing to an audit by the Media Rating Council:
The push from advertisers has led Facebook’s business to a precarious point. While the social network has struggled with issues such as election interference and privacy in recent years, its juggernaut digital ads business has always powered forward. The Silicon Valley company has never faced a public backlash of this magnitude from its advertisers, whose spending accounts for more than 98 percent of its annual $70.7 billion in revenue.
Advertisers might not limit their demands to hate speech. They might, for example, refuse to spend their ad dollars on Facebook so long as it remains in the political ad business, an area fraught with disinformation and out-and-out lies. Advertisers might also refuse to associate their brands with platforms that do not remove election misinformation or attempts at voter suppression or that peddle disinformation about a public health crisis.
This is a moment for critics of socially irresponsible media companies to take on platforms that enable foreign interference, provide a megaphone for extremists and hate groups and make herding people into ever more extreme groups part of their business model. It is far from clear what such efforts would look like and what mechanism could be developed to reduce Facebook’s toxic effect on our public discourse and politics. But there is plainly widespread interest in influencing the way Facebook conducts itself.
Relying on government to do the heavy lifting on reducing the power of tech giants might be a mistake, both because political actors themselves are unresponsive to the public and because government regulation carries the threat of serious First Amendment violations. That leaves civil society — businesses, nonprofits, nongovernmental organizations, faith-based groups and community groups — with a unique opportunity (and responsibility) to apply pressure on social media companies.
Considerable thought has already gone into this concept. The Commission on the Practice of Democratic Citizenship, put together by the American Academy of Arts & Sciences, recently included several suggestions in its report for bolstering American democracy: “Form a high-level working group to articulate and measure social media’s civic obligations and incorporate those defined metrics in the Democratic Engagement Project”; tax digital advertising to “support experimental approaches to public social media platforms as well as local and regional investigative journalism” (think of it as PBS or C-SPAN for the Internet); and start a new project to “conduct a focused, large-scale, systematic, and longitudinal study of individual and organizational democratic engagement” in the context of digital media. Others want to use antitrust laws to limit the reach of tech behemoths.
Collectively, we need to engage in a debate to determine what problems with Facebook and other platforms we’re trying to fix (disinformation? polarization? hate speech?). From there, we can develop some agreed-upon rules of the road (including transparency as to how our data is used and how Facebook’s algorithms manipulate what we see).
Tech companies might finally see the handwriting on the wall. Reddit, one of the least-moderated platforms, has already had a change of heart. After banning a group with nearly 800,000 subscribers that had become a haven for “racism, violent threats and targeted harassment,” Reddit’s chief executive, Steve Huffman, has committed to eliminating hate speech. Other tech companies should follow his lead. Before the government or advertisers impose rules that would undermine their business model or dictate their content, it would behoove them to end their hostility toward outside criticism. Unless they adopt a collaborative approach that results in more self-regulation, they risk losing their biggest advertisers, becoming social pariahs and seeing the government begin to regulate micro-targeting and the use of personal data. In other words, if they don’t clean up their act, they might see their business model collapse.