Over the past few years, pressure has been building on online platforms to do something — anything — about increasingly hostile, misleading and distasteful Internet content. The 2016 election, Brexit and other polarizing events have brought out some of the worst in human nature, much of it amplified and rapidly disseminated on free digital services.
Growing conflict over who gets to say what to whom and where, of course, is not limited to the Internet. Even here in Berkeley, Calif., the free-speech capital of the world, we have been engulfed in fights — some violent — over which viewpoints should be heard on the UC campus. Berkeley Free Speech Movement founder Mario Savio would not be proud.
But this year, largely unregulated Internet companies have fallen into a black hole of disgruntled users, hyperventilating activists and an angry Congress. Facebook, Twitter, Instagram, YouTube and other social media companies are innovating wildly, implementing increasingly Rube Goldberg-like fixes to adjust their content policies and the technologies that enforce them.
“Users are calling on online platforms to provide a moral code,” says Daphne Keller, director of the intermediary liability project at Stanford’s Center for Internet and Society. “But we’ll never agree on what should come down. Whatever the rules, they’ll fail.” Humans and technical filters alike, according to Keller, will continue to make “grievous errors.”
Do not look to the Constitution to solve the problem. Contrary to popular belief, the First Amendment plays no role in determining when content hosts have gone too far, or not far enough. That is because, as I regularly explain to incredulous students, free-speech protections limit only censorship by governments, and then only in the United States.
Some restrictions on foreign nationals — e.g., electioneering — are permitted. With very limited exceptions, private actors can press mute on whomever and whatever they want. Indeed, the Constitution protects the sites from government efforts to impose speech codes — moral or otherwise.
But while the First Amendment does not apply to the practices of Internet companies, the inevitable failure of platform providers to find the “Goldilocks zone” of just-right content moderation underscores the wisdom of the Founding Fathers. Picking and choosing among good and bad speech is a no-win proposition, no matter how good your intentions.
So here is my advice to tech CEOs: Don't try. Don't moderate, don't filter, don't judge. Allow opinions, informed and ignorant alike, to circulate freely in what Supreme Court Justice William O. Douglas famously called "the marketplace of ideas." Trust that, sooner or later, truth will prevail over lies and good over evil. Deny yourself the power to interfere, especially at those excruciating moments when the temptation is most irresistible — when the most detestable content is flowering malodorously.
Today, that solution may seem even more unpalatable than it was when the Bill of Rights was being debated more than two centuries ago. But every day brings new evidence that the alternative — unaccountable private gatekeepers taking it upon themselves to decide what violates virtual moral codes, especially amid messy and often ugly political and social disruption — is worse. Much worse.
A sobering report last week on Motherboard, for example, details the “impossible” effort of a beleaguered Facebook to reinvent its “community standards” — a daunting task given the billions of posts a week originating in over a hundred countries. Acceptable content rules are developed unilaterally by a policy team “made up of lawyers, public relations professionals, ex-public policy wonks and crisis management experts.”
Enforcement, according to the report, is now the job of about 7,500 low-wage “moderators,” deciding case by case whether to remove posts flagged by artificial intelligence software or by complaining users — with the latter assigned a “trustworthiness score.” Flowcharts guide the review, asking, for example, whether the challenged post encourages violence, curses or uses slurs against a protected class or is guilty of “comparing them to animals, disease or filth.”
National laws and local customs also have to be taken into consideration. The process and the rules are constantly and opaquely updated, often in response to the latest public relations crisis. No surprise, then, that few moderators last a year on the job, according to the report.
As one indication of just how fraught the complex system has become, moderators removed a July Fourth post quoting the Declaration of Independence. Why? A reference to “merciless Indian savages” was deemed hate speech.
Yet Facebook’s face-plants seem almost trivial compared with the free-speech barbarism of other Internet giants. Consider the social news site Reddit, which three years ago announced a confusing set of changes to its “Content Policy” in an improvised effort to curb sexist posts. Forums dominated by such content were simply erased.
The deleted groups, said then-chief executive Ellen Pao, “break our Reddit rules based on their harassment of individuals,” a determination made solely by the company. (Due process is also a government-only requirement.)
After users and volunteer editors revolted over both the policy change and its ham-handed implementation, Reddit’s board of directors dismissed Pao and revised yet again the amendments to its policy. But Reddit founder and returning chief executive Steve Huffman still defended the changes. Neither he nor co-founder Alexis Ohanian, Huffman said, had “created Reddit to be a bastion of free speech, but rather as a place where open and honest discussion can happen.”
Except that Ohanian, in an earlier interview, said precisely the opposite, down to the same archaic phrasing. When asked what he thought the Founding Fathers would have made of the site’s unregulated free-for-all of opinions, Ohanian boasted, “A bastion of free speech on the World Wide Web? I bet they would like it.”
Even worse, consider the approach of website security provider Cloudflare, whose CEO, Matthew Prince, personally terminated the account of the neo-Nazi Daily Stormer after previously defending his company’s decision to manage the site’s traffic. Prince’s reasoned explanation for the change of heart? “I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet,” he wrote in an internal memo to employees.
In a supreme gesture of having his cake and censoring it too, Prince then condemned his own action, fretting “no one should have that power.” But he does. (Activists for “net neutrality,” which would prohibit blocking access to any website, notably want restrictions solely for ISPs.)
Refusing to moderate at all would certainly be easier. But could Internet users stomach it? The First Amendment, after all, is nearly absolute. The U.S. Supreme Court has carved out a few narrow exceptions, most of them irrelevant to the current debate over online speech. Discussions of current events and politics, for example, are considered the most protected category of all.
Even the most repulsive opinions are protected from government suppression. As First Amendment scholar Eugene Volokh reminds politicians, “There is no hate speech exception to the First Amendment.”
Correction: An earlier version of this column misstated what Cloudflare does. This version has been corrected.