
The stubborn, misguided myth that Internet platforms must be ‘neutral’


Lately, politicians and news sources have been repeating a persistent myth about, of all things, technology law. The myth concerns a provision of the 1996 Communications Decency Act, generally known as Section 230 or CDA 230.

The law protects intermediaries, ranging from Internet access providers to popular platforms like YouTube or Reddit, from being sued for things their users say. It is widely (if hyperbolically) credited with creating the Internet.

The “neutrality” idea, which has been raised by critics on the left and the right, seems to have gained particular currency among conservatives like Sen. Ted Cruz (R-Tex.), who has insisted that the law protects only sites that act as “neutral public forums.” Similarly, critics like Sen. Josh Hawley (R-Mo.) have claimed that this immunity is available only to platforms “providing a forum free of political censorship.” Platforms that are not “neutral,” Hawley says, face the same legal responsibilities as a publisher like The Washington Post.

That’s not what the law says. If it did, no one would like the results.

CDA 230 isn’t about neutrality. In fact, it explicitly encourages platforms to moderate and remove “offensive” user content. That leaves platform operators and users free to choose between the free-for-all on sites like 8chan and the tamer fare on sites like Pinterest.

If platforms couldn’t enforce content policies while retaining immunity, communications today would look a lot like they did in 1965. We could passively consume the carefully vetted content created by big companies like NBC, and we could exchange our own views using common carriers like phone companies, but we wouldn’t have many options in between. That historical division between publishers and carriers is probably why many assume that “be a publisher or be neutral” must be the law on the Internet.

The call for neutrality shouldn’t be a surprise, though. It stems from legitimate anxieties about whether major platforms, which the Supreme Court has called the “modern public square,” are unfairly silencing certain speakers due to political bias. Republicans are far from alone in worrying about that. Internet users including Black Lives Matter and Muslim rights activists, as well as human rights groups all over the world, raise nearly identical concerns.

I worked for Google for many years, and I don’t think its takedown policies are biased — but I also don’t expect anyone to take my word for that. People have every right to ask whether platforms are taking down too much speech.


Requiring platforms to address these concerns by carrying everything the law permits won’t solve our problems, though. After all, platform users and policymakers of all political stripes often call for platforms to take down more content — including speech that is legal under the First Amendment. That category can include Holocaust denial, bullying, anti-vaccine material and encouragement of teen suicide.

U.S. law permits people to post the horrific video of the March 15 massacre in Christchurch, New Zealand, and the doctored video of Nancy Pelosi. There may be ethical or policy reasons to urge platforms to ban such content, but there aren’t legal reasons. If we want platforms to enforce values-based speech prohibitions in cases like these, they’re going to have to choose and apply some values. By definition, those values won’t be neutral.

If platforms with insufficiently neutral policies were “legally responsible for all the content they publish,” as some critics have proposed, no one would like that either. A platform held to the legal standards of publishers like The Washington Post would have to vet everything users post before the public could see it. Users would have to wait while lawyers decide whether their political opinions or cat videos break the law. If the lawyers thought any speech exposed the platform to liability, or even the expense of litigating groundless claims, they wouldn’t let the content be shared.

The drafters of CDA 230 recognized this problem. They created a law that let the wide array of Internet intermediaries shape their own policies, without facing the binary choice between becoming traditional publishers or remaining entirely passive.

By immunizing platforms from most suits, Congress enabled them to stand up for users’ speech rights and not give in to the mistaken or even fraudulent allegations that often lead platforms to remove legal speech under other laws. That immunity, which holds even when users post defamatory or otherwise unlawful material, also makes content moderation feasible: Without it, companies that tried to moderate would risk being treated as editors or publishers and held liable for user speech.

That happened to a platform called Prodigy in 1995. A court let a $100 million defamation claim go forward, saying Prodigy’s rules against offensive content gave it editorial responsibility for a user’s post. (The post alleged financial wrongdoing by Stratton Oakmont — the now-notorious firm depicted in “The Wolf of Wall Street.”) Congress specifically passed CDA 230 to overrule that decision and encourage moderation. It also spelled out in CDA 230 that platforms can take down “objectionable” material without facing liability.

In theory, Congress could have done this differently in 1996, and still could today. Platforms could be allowed to moderate user speech as long as they do so neutrally. But it’s not clear what a rule like that would even mean. The idea of “neutral moderation” rules usually comes from people who are new to the topic or else so certain of their own moral precepts that they can’t imagine anyone disagreeing about what neutral speech rules would be.

“Neutral moderation” requirements, which would presumably be enforced by a greatly expanded regulatory agency like the Federal Communications Commission, would almost certainly violate the First Amendment. To enforce a law like that, the government would have to set new rules for ordinary Internet users’ lawful speech, picking winners and losers and deciding who is heard and who is silenced. It’s hard to imagine anyone being happy with that arrangement — but the threat of it may be useful in scaring companies into changing their content policies.

None of this means people concerned about platform power are wrong. It doesn’t mean platforms can’t be regulated. But replacing CDA 230 with new rules about neutrality is not a way forward. We should stop having that unproductive conversation and have a different one.

If the problem is that we don’t know what content platforms are taking down, we should start by demanding real transparency and protections for people whose speech disappears, such as the ability to appeal.

If our concern is that giant Internet companies are chokepoints on the flow of information, we should be talking about competition law: No single platform’s speech policies would be so important if we had a lot more platforms, or if new competitors could build on data that major companies hold.

If the problem is illegal speech, there are a dozen legal knobs and dials lawmakers can adjust. If we want to protect users from harmful or offensive legal speech, giving them more individual control over what content they see would be a good start. In fact, that’s exactly what the drafters of CDA 230 wanted to happen; they said so in the text of the law.

There’s a lot of good stuff in the statute. Looking at what it actually says would help us move past today’s unproductive posturing and on to the serious policy discussions we deserve.
