Internet companies’ prized liability shield law has been repeatedly thrust into the limelight, both politically and legally, as lawmakers and leaders on both sides of the aisle consider whether it gives social media companies too much leeway.
Most recently, it’s been a part of the conversation over state laws that seek to regulate how social media companies moderate content on their services.
This month, Florida’s attorney general asked the U.S. Supreme Court to rule on whether states are allowed to regulate companies such as Facebook, Twitter and YouTube over their content moderation policies. The Florida law, as well as a similar regulation in Texas, would prohibit social media sites from blocking or limiting certain types of political speech.
Although debate around the new laws centers on First Amendment issues, Section 230 has become part of the discussion because of the way it helped to form content moderation online. Tech companies have argued that Section 230 preempts state content moderation laws.
Though many Democrats and Republicans — as well as some tech companies — agree that Section 230 needs revising in an increasingly digital age, revoking it entirely could have damaging effects on free speech online.
Critics say Section 230 gives tech companies too much power over what is and is not allowed on their sites. Supporters — including a wide range of internet companies, free-speech advocates and open-Internet proponents — say that without the law, online communication would be stifled and social media as we know it would cease to exist.
So what is this law, anyway?
What does Section 230 actually say?
Section 230, a provision of the 1996 Communications Decency Act, says that companies that operate online forums — everything from the billions of posts made on Facebook to restaurant reviews on Yelp to comment sections on Twitter or recipe blogs — cannot be considered the publisher of all the posts others put on their sites. Therefore the forum operators can’t be held liable for what others choose to share on their platforms, even if those posts could break a law. In other words, it means that Facebook can’t be held legally responsible for a user whose post, say, defames their sixth-grade math teacher.
The key portion of Section 230 is only 26 words long and reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Why does Section 230 matter for social media?
Section 230 “gave companies the go-ahead to launch every single technical intermediary that you depend on for internet communication,” said Daphne Keller, the platform regulation director at the Stanford Cyber Policy Center. With few exceptions, it gives companies the right to police content on their websites as they see fit. That means companies don’t have to sift through millions of posts to make sure they are not violating any laws before allowing them to appear online. It also means people can post pretty much whatever they want and companies can duck responsibility for the effects.
But it was not designed to keep online forums neutral, Keller said. In fact, she said, it was meant to encourage companies to keep an eye on the conversations on their sites.
Section 230 “was very specifically crafted to get platforms to moderate content,” she said.
But it also means that companies are allowed to moderate that content however they see fit, with little regulation to confine them.
Why should we care about Section 230 now?
Section 230 allows tech companies to leave up pretty much any posts that others make. It also gives those companies broad discretion over what they decide to remove from their sites, as long as the companies follow a few rules.
It’s this part of the provision that has been thrust into the spotlight in recent years as former president Donald Trump and others, including prominent Republican politicians, accused social media sites of censoring conservative voices. The companies have denied those charges.
Who wants Section 230 to be changed?
Members of both parties have expressed concern over the limits of 230.
Some Democrats have pushed back on how tech companies moderate hate speech or other objectionable comments on their platforms, saying the companies don’t go far enough in curbing hurtful language. But that content is generally protected by the First Amendment.
“Congress can’t actually require companies to take down lawful speech,” said Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology. “That’s one of the really big challenges happening in the U.S. and around the world when it comes to online content regulation.”
Some Republicans have pushed back on the immunity tech companies have to take down most types of content, asserting that companies are acting with bias toward conservatives.
Sen. Ted Cruz (R-Tex.), questioning Facebook CEO Mark Zuckerberg at a hearing in 2018, suggested the law requires companies to provide “neutral” forums. But the law does not require that companies be neutral. In fact, it was originally conceived to encourage them to step in and moderate.
How did Section 230 come to be?
Section 230 bloomed out of two lawsuits against early internet companies, in the days long before social media. One court found that Prodigy Services could be held liable for speech made on its site because it tried to set standards and moderate content. Another court found that CompuServe, which took a hands-off approach, was merely a distributor and not a publisher and therefore not liable.
That seemed to suggest that companies could protect themselves by taking a hands-off approach, said Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy and the author of a book on Section 230, “The Twenty-Six Words That Created the Internet.”
Wanting to circumvent that precedent and encourage companies to moderate their sites, lawmakers created Section 230.
“The idea behind 230 was that the platforms were much better suited to come up with the rules of the road than the government,” Kosseff said.
The idea was that people would use whichever sites suited them and had rules they agreed with. Of course, that was years before the rise of dominant social media sites with billions of users.
Some have criticized tech companies for hiding behind the law and taking a hands-off approach to moderating content on their sites, rather than using the law to be more engaged in moderating, as its creators had envisioned.
Does Section 230 protect users?
The law protects users in much the same way it protects platforms — its key 26-word passage says that no user “shall be treated as the publisher or speaker of any information provided by another information content provider.”
In other words, users cannot be held liable for what other users say online. For example, users would likely not be legally responsible for the contents of an email they forwarded that was written by someone else.
But users are not “protected from their own content,” Kosseff said, so they are still responsible for what they post online.