The anti-Semitic online screeds tied to the man police say killed 11 people at a Pittsburgh synagogue are rekindling a debate in Congress over the role that social media companies should play in policing their platforms — and the penalties they should face if they fail.
For lawmakers already concerned about incendiary, extreme content online, the posts offered the latest reason to consider new regulation of the tech industry writ large. Some questioned whether Silicon Valley’s prized legal shield — a decades-old law that protects social media giants from lawsuits — might be in need of an overhaul.
“I have serious concerns that the proliferation of extremist content — which has radicalized violent extremists ranging from Islamists to neo-Nazis — occurs in no small part because the largest social media platforms enjoy complete immunity for the content that their sites feature and that their algorithms promote,” said Virginia Sen. Mark R. Warner, the top Democrat on the Senate Intelligence Committee.
But regulating sites like Gab is no easy task. The First Amendment grants people the right to say repugnant things, whether online or in the real world, making it hard for Congress to define and outlaw hate speech on social media.
“The government can’t pick the viewpoints it likes and discriminate against the viewpoints it doesn’t like,” said David Greene, a senior staff attorney at the Electronic Frontier Foundation. “I know that’s an unsatisfying answer especially in the immediate aftermath of tragedies.”
Major Internet companies, meanwhile, have fiercely resisted changes to the law granting them immunity from lawsuits — Section 230 of the Communications Decency Act. Adopted in 1996, it generally spares Facebook, Google-owned YouTube and Twitter from being held accountable for what their users post on their sites. And it helps them maintain nuanced policies prohibiting certain kinds of content, including violence and hate speech, without the threat of liability if they remove a user’s post — or kick someone off entirely.
“On balance, I think these platforms are doing good, and it’s easy to point fingers and say, ‘Let’s change 230,’ ” said Michael Beckerman, president of the Internet Association, which represents Facebook, Google, Twitter and other Web companies.
Facing immense public pressure, those three tech giants have invested heavily over the past year in adopting new rules, creating powerful artificial-intelligence tools and hiring thousands of employees who keep watch over what happens on their sites. Twitter, for example, has more clearly prohibited hateful tweets, imagery and profile features. But the company also has struggled with anti-Semitism. And it failed to take down an account belonging to Cesar Sayoc, whom users had reported for making violent threats months before authorities say he mailed pipe bombs to prominent Democrats and critics of President Trump.
“Violent threats, harassment, and abuse do not enable free expression, they stifle it. We continue to invest in our technology, policies, and product to improve people’s experience of our service,” Twitter said in a statement.
Gab represents an even greater extreme: a site that claims to put “free speech first,” with seemingly no limits on what one can say — even though its terms of service prohibit posts that call for “acts of violence against others.” Companies that hosted the website for Gab and processed its payments have severed their ties since this weekend, forcing it offline — though it has pledged to return.
Gab did not respond to requests for comment. Earlier, though, news reports that Pennsylvania’s attorney general plans to investigate the company prompted its leaders to reply on Twitter: “have you ever heard of CDA 230?”
Digital threats — and their real-world consequences — have unnerved lawmakers like New Jersey Rep. Frank Pallone Jr., the top Democrat on the tech-focused House Energy and Commerce Committee. He said companies need to “do more to ensure that they are not being manipulated into being the megaphones of hatemongers. If companies fail to act, the public will demand lawmakers step in.”
A fellow committee member, Rep. Anna G. Eshoo, more explicitly called for regulation — including a broad rethinking of Section 230. “When it was written, we were not facing these issues,” said Eshoo, a Democrat whose California district includes Silicon Valley. “You have to draw it up carefully. You have to work with stakeholders. But I don’t think we’re in a time or era where we can simply overlook this.”
In the meantime, victims have been disappointed to discover that federal law leaves them little recourse when social media sites facilitate real-world violence. Lawsuits that victims of previous shootings — including the terrorist attack in San Bernardino, Calif., in 2015 and the Pulse nightclub shooting in 2016 — brought against companies such as Facebook, Google and Twitter have stalled in court or been dismissed.
Lawmakers last chipped away at the tech industry’s legal shield earlier this year, when Democrats and Republicans united to pass a law to curb sex trafficking online. In a sign of the difficulties that the government faces in regulating the Web, however, the new rules drew a sharp rebuke from victim advocates — who said they would make life more unsafe for sex workers and push the most nefarious acts to websites overseas and out of the reach of U.S. law enforcement.
In Europe, meanwhile, the tech industry’s regulators in Brussels have proposed a more aggressive tack: They pitched legislation that would levy major fines on tech giants if they are notified about terrorist content and fail to take it down within one hour. European Union officials said at the time that the industry’s voluntary efforts had proved insufficient.
“There are real-world consequences,” Eshoo said in an interview. “We saw last week domestic terrorism that was unleashed against some of the most prominent public servants in our country and the slaughter at the synagogue. If those aren’t consequences, I don’t know what are.”