On Wednesday, the Facebook Oversight Board issued its much-anticipated decision concerning the suspension of Donald Trump’s account. Basing its decision on international human rights standards, the board decided to uphold the suspension — but required Facebook to revisit its decision in six months to see if the emergency conditions that led to the account removal still exist. Of course, the decision is important for the future of speech online, but it also sets a precedent for this unique experiment in institutional design.
The decision is important. The process is more so.
Most observers will stop at the headline decision: Board upholds Trump suspension. But how the board reached its decision and reconceived its powers and jurisdiction deserves greater attention. Originally, the board was conceived to issue up-or-down decisions, approving or denying Facebook takedowns. But now the board has set a precedent by giving itself a new kind of remand power, effectively telling Facebook, “Okay for now, but come back to us in six months to justify yourself again.”
It’s anyone’s guess whether and how the board intends this rule to apply to account takedowns more generally. Facebook takes down over 4 billion accounts per year, almost all for violating its spam rules but many others for violating community guidelines regarding hate speech, obscenity and other harmful content.
The board also provided information on its dialogue with Facebook over the Trump case, explaining what it asked Facebook and what Facebook refused to provide. Facebook declined to answer seven of 46 questions, justifying its refusals in terms of privacy, relevance, attorney-client privilege and other considerations. Those unanswered questions related to its news-feed algorithm, its impact on the Jan. 6 insurrection, its contacts with government officials and the impact of the suspension on advertising.
This back and forth with the principal party to a case is quite different from how a court of law operates. It’s important, because it sets the ground rules for the board’s relationship with Facebook going forward. The reasons that Facebook offers for declining to cooperate will affect its behavior and that of the board. What’s more, now Facebook knows the board will report its nonanswers. We do not know Facebook’s actual answers to the questions posed to it, although we now have the public comments the board received, totaling over 9,000.
The board relies on international human rights law
The most important aspect of the board’s adjudication of the Trump takedown was its reliance on international human rights law to guide its decision. The board’s earlier decisions also referred to applicable United Nations conventions and treaties, as does the charter that established the board in the first place. In many respects, the Oversight Board is a first step toward realizing human rights defenders’ long-standing dream of a world court with transnational jurisdiction.
However, it’s hard for Facebook to implement human rights principles that were designed to bind governments, rather than to guide a private company trying to moderate content. Facebook is not a government; its news feed, which uses algorithms to deliver personalized content to billions of people, is not the public square. Facebook is in the business of what constitutional lawyers call “prior restraints” — that is, filtering speech before it reaches its audience. Its rules on hate speech, obscenity, self-harm and disinformation, to name a few, would all be unconstitutional under the First Amendment if passed by the U.S. government.
For example, an earlier Oversight Board decision ordered Facebook to reinstate a breast cancer educational video temporarily taken down under the firm’s nudity ban. Legislation that banned such videos from being published would violate both First Amendment and international human rights principles. The community standards of a social media platform present a different set of concerns, however. If Facebook filters nude pictures, it may sweep away some valuable speech, but the Internet offers no shortage of places to view the naked body. Furthermore, platform speech policies need to be applied at scale across the world by a company that has to rely on algorithmic filters and an inherently short-staffed team of content moderators in real time. These are not the conditions under which national constitutions and international human rights principles were designed to operate.
However, human rights law has a natural place when it comes to Facebook’s confrontations with governments around the world, where it can act as a sword as well as a shield. The Indian government is ordering takedowns of content and accounts criticizing the government’s coronavirus response. There, as elsewhere, Facebook could object to these enforcement actions on human rights grounds. (Notably, legally ordered takedowns are outside the Oversight Board’s jurisdiction, according to its charter.)
The Oversight Board does refer to an emerging body of U.N.-sponsored guiding principles on business and human rights. It takes this guidance to suggest that just as a company should respect the rights of workers and consumers and consider the firm’s impact on the environment, so should a social media platform consider the harms it is causing or permitting. These principles are unobjectionable, as far as they go, but they do not answer the questions posed in the Trump case or any of the previous ones. It is at best unclear what they would say about whether Facebook should allow Holocaust denial, self-harm videos, or campaign-sponsored disinformation on its platform.
In the end, the board itself will decide the meaning of human rights law in these concrete circumstances. While it is making up the rules as it goes along, and has tried to kick the difficult decision over the Trump ban back to Facebook itself, it cannot escape that responsibility. Until and unless governments take on the responsibility that Facebook has delegated to the Oversight Board, this unique and novel institution will play a significant role in shaping online free speech.
Nathaniel Persily (@Persily) is the James B. McClatchy Professor at Stanford Law School and co-director of the Stanford Cyber Policy Center.