When the World Wide Web opened for public use in 1991, its enthusiasts proclaimed a new era of unfiltered free expression. Thirty years later, the debate is over how, not whether, to filter what’s said online. In the U.S., home to the biggest social media companies, Facebook Inc. is under particular scrutiny over which content it silences, and which it amplifies, as moderator of a discussion involving 1.9 billion people on a typical day.

1. Isn’t there a right to free speech on the internet?

The First Amendment to the U.S. Constitution prohibits censorship by government, not censorship by private companies like those that run social media platforms. In fact, those companies enjoy First Amendment privileges of their own. Like newspapers, book publishers and television stations, they have constitutional protections to decide on their own what to moderate and filter. And Section 230 of the Communications Decency Act of 1996 gives them broad protection from the kinds of liability publishers traditionally face for defamatory content, along with broad leeway to moderate discussions and remove posts or leave them alone.

2. So why filter online speech at all?

The internet in general and social media platforms in particular have proven to be effective places to spread misinformation about important matters such as Covid-19 and vaccines, disinformation (intentional falsehoods) about politics and elections, plus all manner of conspiracy theories and hate speech. “Our commitment to expression is paramount, but we recognize the internet creates new and increased opportunities for abuse,” Facebook says in explaining the community standards it enforces.

3. How do the companies moderate speech?

Facebook, Twitter, Instagram and YouTube routinely remove posts deemed to violate standards on violence, sexual content, privacy, harassment, impersonation, self-harm and other concerns. Most of those actions happen automatically, through decisions made by artificial intelligence. (That’s led to complaints of over-enforcement, or the removal of content that may not have violated rules.) Facebook and Alphabet Inc.’s Google partner with third-party fact-checkers to vet posts and news items that may be suspect, while Twitter labels some posts that contain misleading or disputed claims in certain categories, like Covid-19 or elections. Google recently pledged to ban advertisements that contradict the established science on climate change. More rarely, the platforms ban users, such as radio provocateur Alex Jones, removed from Facebook, Twitter, YouTube and Apple for engaging in hateful speech. Then-President Donald Trump’s Facebook and Twitter Inc. accounts were frozen following the Jan. 6 riot by his supporters at the U.S. Capitol. Twitter has barred him permanently; Facebook says it could reinstate him in 2023 if “the risk to public safety” has subsided.

4. Who’s unhappy about this?

Lots of people, across the political spectrum. The presidential election of 2016, when Trump used Twitter as a megaphone, led to a torrent of criticism of social media companies about what many saw as anything-goes policies for politicians. That criticism grew as Trump, as president, used Twitter to issue threats, mock opponents and stretch the truth. (Cornell University researchers found that Trump “was likely the largest driver” of misinformation about the pandemic.) Trump himself condemned the social media platforms for “suppressing voices of conservatives and hiding information and news that is good.” After Trump was silenced by Twitter, Facebook and YouTube, many of his supporters migrated to Parler, which calls itself “the world’s premier free speech platform.” Trump says he is starting a social media company of his own, to be called Truth Social. Recent accusations by a whistle-blowing insider -- Frances Haugen, who worked as a Facebook product manager for almost two years, mostly on a team dedicated to stopping election misinformation -- provided fresh ammunition for critics of all political persuasions.

5. What did the whistle-blower say? 

In disclosures to the Wall Street Journal and testimony in Congress, Haugen said a tweak Facebook made in 2018 to its proprietary algorithm boosted the visibility of toxic, disputed and objectionable content that stirred outrage and anger among readers, leading to more interaction with the service. She said Facebook needs to take steps to make its main social network and its Instagram photo-sharing platform safer, healthier, and less polarizing.

6. How do other countries handle this issue?

China and Russia actively censor the internet. Even some democracies -- those without a First Amendment -- apply more rigorous rules to social media than the U.S. does. Germany in 2017 enacted a law requiring social networks to promptly remove hate speech or face steep fines. India this year put Twitter, Facebook and the like under direct government oversight, enacting regulations requiring internet platforms to help law enforcement identify those who post “mischievous information.”


©2021 Bloomberg L.P.