
Why outlawing harmful social media content would face an uphill legal battle

Even in cases with tragic facts, courts have resisted penalizing speech linked to generalized societal harms.

Former Facebook employee and whistleblower Frances Haugen testifies before a Senate committee. (Jabin Botsford/The Washington Post)
Correction: A previous version of this article misspelled the last name of Roddy Lindsay as Lindsey. The article has been corrected.

Testimony from Facebook whistleblower Frances Haugen has given new life to a pair of ideas that, given the alarming revelations about “misinformation, toxicity, and violent content” in our social media feeds, may sound good in principle. The first is that Congress can and should prevent the spread of lies on social media platforms like Facebook. The second is that the law should restrict platforms’ “amplification” of such content, on features like Facebook’s algorithmically ranked news feed.

Whatever the merit of these ideas as a matter of morals or policy, they both run into serious problems with the Constitution. Unless Congress is realistic about those limits, it could squander political momentum on proposals likely to founder on the constitutional rocks.

Proposals to address these issues generally call for an amendment to or repeal of Section 230, the 1996 law that protects platforms from many kinds of liability arising from user content. Commentators and politicians have floated various proposals, including to remove Section 230’s protections for some harmful content or for content amplified by a platform’s content-ranking algorithms.


Even without Section 230, though, it’s unlikely that lawmakers could require platforms to stop sharing misinformation. Some small portion of false content might count as defamation — material so damaging to a person’s reputation that they can sue over it. But many of the concerns about misinformation involve more generalized societal harms, such as influencing people to take unreasonable health risks. Courts have long resisted penalizing that kind of speech, holding that the First Amendment or common law often precludes lawsuits based on false statements — even in cases with tragic facts.

For instance, on New Year’s Day 1988, Wilhelm Winter and Cynthia Zheng went hunting for wild mushrooms in Marin County, Calif. They claimed that they relied on “The Encyclopedia of Mushrooms” to identify and discard suspicious fungi — and as a result, they collected, cooked, and ate an Amanita phalloides mushroom, also known as a “death cap.” They both became so ill that they required liver transplants and incurred about $400,000 in medical costs.

Winter and Zheng sued the book’s U.S. publisher for negligence, false representations and other claims. A federal judge in California ruled in favor of the publisher, which had not edited the book, but had purchased copies from its original British publisher and distributed them in the United States. The U.S. Court of Appeals for the 9th Circuit affirmed, concluding that publishers do not have a legal duty to investigate whether their books are accurate. “Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs,” the court wrote.

Like many other courts, the 9th Circuit also refused to hold the publisher liable under products liability law, the sort of claim you might file against the manufacturer of a defective car. “We place a high priority on the unfettered exchange of ideas,” the court wrote. “We accept the risk that words and ideas have wings we cannot clip and which carry them we know not where. The threat of liability without fault (financial responsibility for our words and ideas in the absence of fault or a special undertaking or responsibility) could seriously inhibit those who wish to share thoughts and theories.”

This line of thinking pervades dozens of court opinions in which injured people have sought redress for misleading information. The cases run the gamut of harms: investors who say they lost money due to a ticker service’s inaccurate stock price, people who became sick after following a book’s risky diet plan, and a man who had a heart attack after reading a newspaper’s inaccurate report that his father had died.

These cases generally have not made it to the U.S. Supreme Court. But in 2012, the high court struck down a federal law that imposed criminal penalties on those who falsely claimed to have received certain military honors, regardless of their intentions or the harm caused by the lie. Although the First Amendment allows liability for some lies — such as defamation, fraud and false advertising — lawmakers cannot simply prohibit all misleading speech. “Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth,” Justice Anthony M. Kennedy wrote for the plurality, in a nod to George Orwell’s “1984.”


Some might argue that the stakes have changed in 2021, and that the individualized harms in the earlier cases pale in comparison to the threats to public health and democracy posed by modern misinformation. Courts may not have struck the right balance in the past, and it’s not impossible for the law to change. But courts would need to substantially rethink First Amendment and common law doctrine before such a rationale could justify restricting much of the misinformation shared on social media. Until that happens, there isn’t much Congress can do to restrict this kind of speech, whether directly or by tinkering with laws like Section 230.

What if Congress didn’t directly restrict false but legally protected speech, and instead created new liability for the platforms that amplify it? That apparent workaround may appeal to lawmakers. It’s unlikely to convince courts, though. The Supreme Court has been clear that laws restricting the distribution of unpopular speech raise the same First Amendment problems as laws prohibiting that speech outright. As it explained in 2000, when it rejected a law limiting pornography on cable TV, “laws burdening and laws banning speech” are equally suspect.

To be clear, the question here is not whether platforms like Facebook can or should change their amplification practices or their handling of potential misinformation in light of societal harms. They already have, and could voluntarily do more — without the kind of government mandate that raises First Amendment concerns. Nor is it solely a question about whether Congress can regulate platforms’ speech.

The question is whether Congress can regulate everyone’s speech by telling platforms which of our currently legal posts must be banished to the bottom of the news feed or promoted to the top. A law like that would use government power to restrict Internet users’ speech. Courts would rightly scrutinize it under the First Amendment.

Even proposals that tackle only amplification of genuinely illegal content — for example, by eliminating platforms’ Section 230 immunities for defamation — would raise real concerns. We already know what happens when platforms are put in charge of deciding which user speech violates the law: They err on the side of taking down lawful speech to protect themselves. Facebook notoriously took down images of law enforcement abusing protesters in Ecuador based on a fake claim of copyright infringement, for example. The tools platforms adopt to police user content can also cause new harms, disproportionately penalizing users from minority or marginalized groups or invading users’ privacy. A law that gave platforms this kind of policing duty for ranked news feeds, recommendations or search results would concentrate those harms into the very places where speakers most want their speech to appear — the places where other people will see it.


It’s possible to design better laws to limit distribution of illegal content while avoiding the worst of these foreseeable harms. But it’s complicated. Difficult policy trade-offs are impossible to avoid, and even the best laws involve complicated mechanisms like appeal rights for users, transparency reporting or regulatory oversight. None of that would happen under a law that simply swept away statutory immunities under Section 230.

Both Haugen and another former Facebook employee, Roddy Lindsay, have suggested that eliminating immunity for amplified content could have a different consequence: making platforms stop their current “engagement-based” ranking practices altogether. For critics who believe that platforms’ current systems inevitably prioritize emotionally engaging but societally damaging content, this may sound like a great outcome.

But this, too, is complicated. First, is it right to predict that ranked feeds will go away? Or will they just become a feature that only the richest and best-lawyered platforms can afford to offer? Might ranked feeds persist in the most anodyne and PG-rated possible version, eliminating controversial or legally risky speech entirely? That could reduce misinformation on platforms’ most-used features, but also prevent the next #MeToo movement. What would unranked news feeds look like if platforms could no longer use algorithms to reduce the presence of spam, coordinated “brigading,” or simply redundant and uninteresting content? And perhaps most importantly: Are changes to Section 230 even the right mechanism to address the problems Haugen has documented, or does the better path lie in long overdue changes to privacy law or reforms grounded in competition — both of which might avoid major constitutional problems?

We don’t pretend to know the answers to those questions. But Congress doesn’t know either. That’s precisely why transparency mandates and a government commission to assess the facts are so important. In the meantime, we will have to rely on whistleblowers like Haugen to provide the information we need to regulate wisely, address real problems and avoid hasty fixes that run headlong into the First Amendment.

