Facebook never wanted to police political speech
The ruling — and indeed the very existence of the Oversight Board — stems from a fundamental dilemma confronting Facebook and other social media platforms such as Twitter and YouTube. The platforms don’t want to have to regulate their users’ political speech, but increasingly, they can’t avoid doing it.
Their reluctance stems in part from an attachment to free-speech libertarianism. Facebook’s CEO and board chairman, Mark Zuckerberg, originally saw the platform as helping ordinary people speak truth to power, while as late as 2016, a senior Facebook official suggested in an internal memo that while it can be bad if “someone dies in a terrorist attack coordinated on our tools,” the “ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.” Their unwillingness is also a product of social media platforms’ business model. Moderating the speech of hundreds of millions or billions of users is hard. Some tasks, such as screening content for nudity, can be automated cheaply, but monitoring political speech requires human interpretation, which is relatively expensive and likely to be controversial.
However, in response to public controversy, Facebook and other social media platforms have gradually been dragged into moderating their users’ political speech. Facebook posts were used to incite the genocidal killing of Muslims in Myanmar. YouTube videos have been blamed for radicalizing viewers. While social media companies (and some researchers) argue that the dire consequences of social media are exaggerated, public outrage has forced them to take stronger action, deleting certain kinds of political content and banning users who directly call for violence.
This is why Facebook eventually set up the Oversight Board. As the scholar Tarleton Gillespie has explained, social media companies have had a very hard time figuring out how to moderate content in a way that seems legitimate to their users. By creating the Oversight Board, and giving the board authority to tell Facebook what to do, Facebook hoped to lower the heat, transferring responsibility and blame to a somewhat independent body (Facebook appointed the board’s members and pledged to accept its decisions, but it does not control the board).
The Oversight Board is demanding Facebook do more
The board was asked to review the decision to ban Trump. Now, it has both agreed with that decision and condemned the process through which it was reached.
The board found the original ban was justified because Trump had created an environment “where a serious risk of violence was possible” and “there was a clear, immediate risk of harm and his words of support for those involved in the riots legitimized their violent actions.”
However, it found fault with the decision to suspend Trump from Facebook indefinitely. First, it said Facebook hadn’t followed any clear procedure, because “ ‘Indefinite’ suspensions are not described in the company’s content policies.” As Evelyn Douek has noted, the board has previously complained about the vagueness of Facebook’s rules. Second, it said Facebook was deliberately ducking its responsibilities, by trying to make the board take the difficult decisions for it. In the board’s words:
“It is Facebook’s role to create necessary and proportionate penalties that respond to severe violations of its content policies. The Board’s role is to ensure that Facebook’s rules and processes are consistent with its content policies, its values and its human rights commitments. In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.”
Although the board upheld Facebook’s initial suspension, it said Facebook must re-examine the “arbitrary” penalty of indefinite suspension within six months and decide on a penalty that is consistent with Facebook’s own rules. It also made several recommendations for how Facebook should deal with problematic content from “highly influential users,” including rapid decisions by specialized expert staff who are insulated from political interference; clearer information on how Facebook’s process works (especially when Facebook is trying to balance the newsworthiness of influential people’s speech against other considerations); and a comprehensive, open review of how Facebook’s design and policy choices may have contributed to the narrative of election fraud.
Facebook is now in an awkward position
The board’s decision puts Facebook back where it doesn’t want to be — making hard decisions about political speech. If it allows Trump to return to Facebook, it risks outrage from liberals. If it bans Trump permanently (rather than indefinitely suspending him), it risks backlash from conservatives, some of whom already want to remove Facebook’s legal protections. Finally, Facebook has to think about the consequences of its decisions outside the United States. Last month, after India’s government ordered it to ban a more specific set of posts, some from opposition politicians, Facebook temporarily hid posts calling on India’s prime minister, Narendra Modi, to resign over his mishandling of the coronavirus pandemic. The Oversight Board (which faces criticisms of its own) has publicly declined to make the hard choices on Facebook’s behalf, thereby obliging Facebook to come up with clearer rules and practices.