When the World Wide Web went public in 1991, its enthusiasts proclaimed a new era of unfiltered free expression. Thirty years later, the debate is over how, not whether, to filter what’s said online. In the U.S., home to the biggest social media companies, the challenge came into greater focus during the presidency of Donald Trump, who used his accounts to attack opponents while blaming social media for -- as he put it in a tweet -- “suppressing voices of conservatives and hiding information and news that is good.” In a defining moment for internet moderation, Trump himself was kicked off major platforms for things he said.

1. How free is speech on the internet?

Quite free in the U.S. compared with China and Russia, which actively censor the internet, and compared with countries that apply more rigorous rules to social media, such as Germany’s ban on hate speech and new regulations in India designed to crack down on “mischievous information.” But when people talk about free speech on the internet, they generally mean something different: the degree to which social media platforms -- private companies -- moderate (or “censor,” critics say) what their users post. That’s a commercial issue governed by each company’s terms of service. There’s no First Amendment right to speak on social media, since that amendment protects against government censorship, not decisions by private companies. In fact, courts have ruled that platforms have a First Amendment right to ban people they wish to ban.

2. Why filter online speech at all?

The internet in general, and social media platforms in particular, have proved effective channels for spreading misinformation about important matters such as Covid-19 and vaccines, disinformation (intentional falsehoods) about politics and elections, and all manner of character assassination and conspiracy theories.

3. How do the companies moderate speech?

Facebook, Twitter, Instagram and YouTube routinely remove posts deemed to violate standards on violence, sexual content, privacy, harassment, impersonation, self-harm and other concerns. Most of those actions happen automatically, through decisions by artificial intelligence. (Especially during the pandemic, companies became more reliant on machine learning to police their platforms, often resulting in over-enforcement -- content coming down that may not have violated rules.) More rarely, the platforms ban users (such as radio provocateur Alex Jones, removed from Facebook, YouTube and Apple for engaging in hateful speech) or entire topics, such as QAnon, the fringe conspiracy theory that imagines a global cabal of pedophiles infiltrating the U.S. Democratic Party. Also, Facebook Inc. and Alphabet Inc.’s Google partner with third-party fact-checkers to vet posts and news items that may be suspect. Twitter Inc. labels posts that contain misleading or disputed claims.

4. Why is Trump so central to this debate?

The presidential election of 2016, in which Twitter served as Trump’s megaphone and Facebook proved fertile territory for a pro-Trump Russian disinformation campaign, led to a torrent of criticism of social media companies and what many saw as their anything-goes policies for politicians. That criticism grew as Trump, once president, used Twitter to issue threats, mock opponents and stretch the truth. Cornell University researchers found that Trump “was likely the largest driver” of misinformation about the pandemic, for instance.

5. How have the platforms moderated Trump’s speech?

For most of his presidency, Facebook and Twitter left Trump’s accounts alone. That changed in 2020. First Twitter labeled as “potentially misleading” two Trump tweets about voting by mail. Then Facebook removed from Trump’s page an interview in which he said, incorrectly, that children are “virtually immune” to Covid-19. A big test came in October, three weeks before the presidential election, when the New York Post ran a story about allegedly damaging photos and emails on a laptop belonging to the son of Trump’s opponent, Joe Biden. Twitter banned links to the story and locked accounts of users -- including journalists -- who tweeted it, before backtracking a day later. Facebook reduced how widely visible the article was. Conservatives howled that the social media companies were suppressing a story that could help Trump win re-election. Finally, Twitter and Facebook froze Trump’s accounts following the Jan. 6 riot by his supporters at the U.S. Capitol.

6. What have those complaining of censorship done?

Many conservatives migrated to Parler, which calls itself “the world’s premier free speech platform.” Following the insurrection at the Capitol, however, Parler’s app was removed from the iPhone App Store and Google Play, making it almost impossible to download the service to a mobile device. Florida, where Republicans hold the governor’s office and a majority in the legislature, enacted a law imposing fines on social media companies that bar a candidate for state office for more than 14 days. Trump and his conservative supporters have pressed to repeal Section 230 of the Communications Decency Act of 1996, which gives tech companies broad protection from the kinds of liability publishers traditionally face for defamatory content, along with broad leeway to moderate discussions and remove posts or leave them alone. Section 230 has its share of liberal critics as well. They say it allows tech companies to ignore the damage caused by users’ bad behavior, including Trump’s provocative tweeting while in office.

7. What would repealing Section 230 mean for free speech on the internet?

The foundational model of social media -- providing an unedited platform for vast amounts of user-generated content -- could be imperiled by the specter of legal action. Victims of online revenge porn, sexual harassment and privacy breaches could seek restitution. So could restaurant owners claiming rivals posted harmful fake reviews. Facing the prospect of large judgment awards, social media platforms could clamp down harder on what users post. There are proposals in Congress that would stop short of repealing Section 230. One idea is to empower the Federal Trade Commission to take over enforcement of the rules set down by the platforms.


©2021 Bloomberg L.P.