Facebook’s Oversight Board, the latest creation from the social media site that has repeatedly promised to regulate itself with little in the way of positive results, will announce on Wednesday whether it will let the disgraced former president back on its platform. Remember: This is a man who instigated an insurrection and spread dangerous disinformation about covid-19. Nothing has changed since he was banned; indeed, he has repeatedly reaffirmed the Big Lie that the election was stolen and denied his own role in setting off the violent riot that left five people dead and scores more injured.

Why, then, would Facebook, which claims to patrol for hate speech and dangerous disinformation, let him back on? The Post recounts what the company’s chief executive, Mark Zuckerberg, said when the former president was banned for praising the violent insurrectionists as “patriots” and “special”:

First Facebook and then Twitter suspended Trump’s account that week on the grounds that those comments were encouraging or inciting further violence and lawbreaking to delegitimize the election — or worse, to conduct an attack on the inauguration itself.
“The shocking events of the last 24 hours clearly demonstrate that President Donald Trump intends to use his remaining time in office to undermine the peaceful and lawful transition of power to his elected successor, Joe Biden,” Zuckerberg posted [while insisting that] … “we believe that the public has a right to the broadest possible access to political speech, even controversial speech.”
“But the current context is now fundamentally different, involving use of our platform to incite violent insurrection against a democratically elected government,” he wrote.

Is Facebook waiting for him to incite the next riot before really, absolutely banning him for good?

If Facebook were serious about enforcing its own standards, this would not be a hard decision. The former president no longer gets the “head of state” exception to terms of service. Had the average person spread the same hateful speech that he shared while in office, they would have been bounced from the platform long ago.

But to be candid, Facebook’s dedication to upholding its own terms of service has been altogether lacking, even for people who are not celebrity politicians. The Anti-Defamation League has led a group of major advertisers in an attempt to force Facebook to live up to its own self-professed standards. In December 2020, the ADL reported that Facebook had minimized the extent of hate speech on the site, which, by the company’s own disclosure at the time, appeared in roughly 11 of every 10,000 views of content. “If one piece of hate content goes viral, thousands or millions of people could have viewed it — an important indicator of the reach and potential impact of hate speech on Facebook,” the ADL reported. Moreover, as the ADL explains, Facebook’s business model rewards toxicity and radicalization:

Its algorithms play a great role; they feed the platform’s 2.3 billion users tailored content based on individual identity and behavioral data (e.g., user browsing activity) and can increase exposure to hate speech for users who already have seen hateful content or are searching for it. This is a particular concern with users who may be susceptible to hateful disinformation and conspiracy theories. Radicalization often is the result.
Algorithmic amplification is at the heart of the criticism of Facebook’s controversial News Feed. It has been accused of increasing polarization in the United States by filtering the news that different users see and amplifying news sources that spread misinformation and incendiary content. The damage it can impart is profound.…
Whether intentional or not, that denial of the enormous scope and impact of the problem of hate speech on its platform has to change. Moreover, the company must enforce its policies far more vigorously. As was the case with Holocaust denial, Facebook—which in these high-profile and highly contentious incidents is virtually identical to one man, Mr. Zuckerberg—can choose whether to acknowledge that a certain kind of hate speech is indeed a problem in the first place.

Internal memos uncovered in 2018 and again in 2020 give further ammunition to Facebook’s critics. The memos, written by Andrew Bosworth, head of Facebook’s virtual and augmented reality division, show that the company understands perfectly well the deleterious effects of its algorithms but proceeds anyway in search of ever-greater profits.

Facebook may decide to restore one of the most effective promulgators of disinformation to its platform, just as it chooses to underplay the extent of its hate speech problem and refuses to disclose how its algorithms encourage polarization and extremism. That, in part, is because social media has been given special dispensation: an exemption under Section 230 of the Communications Decency Act that allows it to avoid civil liability for anything its users post, with no accompanying obligation to disclose data about enforcement of its terms of service or to explain how its algorithms perpetuate hate speech. That lopsided arrangement is now at risk. Politicians on the left and right are enraged, threatening a variety of actions from antitrust reform to eliminating Section 230. Even Zuckerberg has suggested it may be time to modify the law (though nothing now prevents Facebook from adopting transparency standards, setting rules for removing hate speech or providing more candor about its algorithms).

Facebook’s problem goes beyond the hypocrisy of potentially returning one of the worst abusers of its terms of service to its platform. To be sure, if reinstated, he will no doubt magnify the very hate speech and disinformation problem Facebook claims to be addressing. The real issue, however, remains whether Facebook will pay a price for its stubborn refusal to live up to its own standards and to subject itself to greater scrutiny.
