
How Online Speech Is Moderated in the US

Fiber-optic cables and copper Ethernet cables feed into switches inside a communications room at an office in London in May 2018. (Photographer: Jason Alden/Bloomberg)

When the World Wide Web opened for public use in 1991, its enthusiasts proclaimed a new era of unfiltered free expression. That was before the internet in general, and social media platforms in particular, proved so effective at spreading misinformation about important matters such as Covid-19 and vaccines, disinformation (intentional falsehoods) about politics and elections, and all manner of conspiracy theories and hate speech, including harassment and bullying. Social media platforms have faced enormous scrutiny over which content they silence and which they amplify. Cases before the US Supreme Court could transform the legal landscape for social media companies, with potential implications for political discourse and the 2024 elections.

1. Doesn’t the First Amendment give everyone a right to free speech on the internet?

No. The First Amendment to the US Constitution bans censorship by the government, not by private companies such as the providers of social media platforms. In addition, Section 230 of the Communications Decency Act of 1996 gives online gathering places such as Twitter Inc. and Meta Platforms Inc.’s Facebook broad protection from liability for defamation or harassment, along with leeway to moderate discussions and remove or leave up posts.

2. Why was Section 230 adopted?

It was marketed by its bipartisan sponsors in the 1990s as a “Good Samaritan” law for the internet, which was then in its infancy. Its key provisions shield internet companies from liability for most of the material their users post and give the companies legal immunity for “any action voluntarily taken in good faith” to remove materials from their platforms. There are some exceptions: Section 230 doesn’t block criminal prosecution over child pornography shared on social media, for instance.

3. What is the Supreme Court doing?

It took up two cases on whether social media companies can be sued for hosting and recommending terrorism-related content. One case involves a suit against Alphabet Inc.’s Google by the family of Nohemi Gonzalez, a 23-year-old US citizen killed in coordinated attacks by the Islamic State group in Paris in 2015. Gonzalez’s family says that Google’s YouTube service, through its algorithms, violated a federal antiterrorism law by recommending Islamic State videos to other users, and that Section 230 doesn’t prevent it from being sued. (Google, in its brief to the court, says “recommend” is the wrong word for what it does, which is “to make constant choices about what information to display and how, so that users are not overwhelmed with irrelevant or unwanted information.”) In the second case, Twitter and other social media companies are seeking to narrow the scope of the same antiterrorism law in a case stemming from a 2017 shooting in an Istanbul nightclub. In addition, Florida and Texas each want the Supreme Court to bless laws they enacted in 2021 to sharply restrict the editorial discretion of the largest social media platforms.

4. How do social media companies moderate speech?

Twitter, Facebook, Instagram and YouTube routinely remove posts deemed to violate standards on violence, sexual content, privacy, harassment, impersonation and self-harm. Much of this happens automatically, via artificial intelligence, though social media companies also have thousands of employees and contractors who help them sift through potential violations. Google and Facebook partner with third-party fact-checkers to vet posts and news items that may be suspect. Twitter labels some posts that contain misleading or disputed claims in certain categories, like Covid-19 or elections. More rarely, the platforms ban users, such as radio provocateur Alex Jones, who was removed from Facebook, Twitter, YouTube and Apple’s platforms for engaging in hateful speech. Then-President Donald Trump’s Facebook and Twitter accounts were frozen following the Jan. 6, 2021, riot by his supporters at the US Capitol. Twitter barred him permanently, but after buying the company, Elon Musk, the Tesla Inc. chief executive officer and self-described “free speech absolutist,” reinstated him. Facebook, too, said it would reinstate Trump’s access to his much-followed account.

5. Who’s unhappy about moderation of social media?

Lots of people, on both sides of the issue. The presidential election of 2016, when Trump used Twitter as a megaphone, led to a torrent of criticism of social media platforms over what many saw as anything-goes policies for politicians. That criticism grew as Trump, as president, used Twitter to issue threats, mock opponents and stretch the truth. (Cornell University researchers found that he “was likely the largest driver” of misinformation about the pandemic.) Frances Haugen, who worked as a Facebook product manager for almost two years, provided fresh ammunition for critics when she stepped forward as a whistleblower in 2021. She alleged that Facebook had tweaked its proprietary algorithm in 2018 in a way that boosted the visibility of toxic, disputed and objectionable content that stirs outrage and anger among readers, leading to more interaction with the service. Trump and other US conservatives have their own set of complaints.

6. What are conservatives unhappy about?

Trump condemned social media platforms for “suppressing voices of conservatives and hiding information and news that is good.” Post-presidency, he started his own platform, Truth Social. A lingering bone of contention among conservatives is how Twitter and Facebook handled an unflattering 2020 New York Post article on Hunter Biden, the son of US President Joe Biden. Citing concerns over the private nature of the material and whether it had been hacked, the two social media giants restricted the ability of users to share the Post story. Subsequent reporting by other news organizations backed the authenticity of the material cited by the Post, fueling criticism that social media platforms and mainstream media had suppressed legitimate news. Musk bought Twitter in part because he disagreed with content restrictions that were in place. 

7. How do other countries handle this issue?

In China, Russia and other countries subject to authoritarian rule, governments actively censor the internet, including by blocking or greatly restricting access to American-owned social media sites. Some democracies are moving more quickly than the US to apply more vigorous rules to social media. India put Twitter, Facebook and the like under direct government oversight, enacting regulations that require internet platforms to help law enforcement identify those who post “mischievous information.” The European Union’s Digital Services Act gives member states new power to take down illegal content such as hate speech and terrorist propaganda and to make platforms do more to tackle harmful material. Companies like Twitter must submit annual reports to the EU detailing how they’re handling systemic risks posed by content such as racist slurs or posts glorifying eating disorders.

--With assistance from Sarah Frier and Maxwell Adler.


©2023 Bloomberg L.P.
