Leaders from around the globe, including British Prime Minister Theresa May, Canadian Prime Minister Justin Trudeau and Jordan’s King Abdullah II, signed the “Christchurch Call,” which was unveiled at a gathering in Paris that had been organized by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern. Amazon, Facebook, Google, Microsoft and Twitter also signed on to the document, pledging to work more closely with one another and governments to make certain their sites do not become conduits for terrorism. Twitter CEO Jack Dorsey was among the attendees at the conference.
The document was nonbinding, but reflected the heightened global frustration with the inability of Facebook, Google and Twitter to restrain hateful posts, photos and videos that have spawned real-world violence.
The governments pledged to counter online extremism, including through new regulation, and to “encourage media outlets to apply ethical standards when depicting terrorist events online.”
The companies agreed to accelerate research and information sharing with governments in the wake of recent terrorist attacks. “It is right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence,” Amazon, Facebook, Google, Microsoft and Twitter said in a joint statement. (Amazon CEO Jeff Bezos owns The Washington Post.)
But the White House opted against endorsing the effort, and President Trump did not join the other leaders in Paris. The White House felt the document could present constitutional concerns, officials there said, potentially conflicting with the First Amendment, even though Trump has previously threatened to regulate social media out of concern that it is biased against conservatives.
Hours after declining to sign the document, the White House escalated its war against social media by announcing an unprecedented campaign asking Internet users to share stories of when they thought they were censored by Facebook, Google’s YouTube and Twitter, companies the president frequently takes aim at for alleged political censorship.
Still, in a statement about the Christchurch Call, the White House said it stands “with the international community in condemning terrorist and violent extremist content online,” and supports the call’s goals. But the United States is “not currently in a position to join the endorsement.”
The White House’s decision against supporting the Christchurch Call drew criticism from some experts who have called for stronger regulation across the Web. Alistair Knott, a computer-science professor at the University of Otago in New Zealand, said the absence of a U.S. endorsement could undercut the global argument for controlling how hate and violence spread online.
“It seems insufficient to say that free speech prevents the U.S. from doing something about violent extremist attacks,” said Carl Tobias, a professor at the University of Richmond law school. “Congress should consider carefully crafted legislation that both protects core First Amendment interests and public safety.”
But others worried the Christchurch document could potentially blur the lines between government power and free expression.
“It’s hard to take seriously this administration’s criticism of extremist content, but it’s probably for the best that the United States didn’t sign,” said James Grimmelmann, a Cornell Tech law professor. “The government should not be in the business of ‘encouraging’ platforms to do more than they legally are required to — or than they could be required to under the First Amendment.”
“The government ought to do its ‘encouraging’ through laws that give platforms and users clear notice of what they’re allowed to do, not through vague exhortations that can easily turn into veiled threats,” Grimmelmann said.
For its part, the White House stressed it would continue to be “proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press.”
The call is named for the New Zealand city where a shooter killed 51 people in a March attack that was broadcast on Facebook and posted afterward on other social media sites. Facebook, Google and Twitter struggled to take down copies of the violent video as fast as it spread on the Web, prompting an international backlash from regulators who felt malicious actors had evaded Silicon Valley’s defenses too easily. Before the attack, the shooter also posted a hate-filled manifesto online that included references to previous mass killings.
New Zealand’s Ardern said in a statement that the document was intended to help head off a repeat of the Christchurch attacks. “We’ve taken practical steps to try and stop what we experienced in Christchurch from happening again,” Ardern said.
Fewer than 200 people watched the live stream during the attack, which Facebook said it removed 29 minutes after it began. But within 24 hours, users had attempted to re-upload the video to Facebook more than 1.5 million times. About 300,000 of those copies slipped through and were published before being taken down by the company’s content-moderation teams and by automated systems designed to remove blacklisted content.
Tech companies on Wednesday said they’d pursue a nine-point plan of technical remedies designed to find and combat objectionable content, including more robust user-reporting systems, more refined automatic detection, improved vetting of live-streamed videos and greater collaboration on research and technologies the industry could build and share.
The companies also promised to implement “appropriate checks on live-streaming,” with the aim of ensuring that videos of violent attacks aren’t broadcast widely, in real time, online. To that end, Facebook this week announced a new “one-strike” policy, in which users who violate its rules — such as sharing content from known terrorist groups — could be prohibited from using its live-streaming tools. The company has said such a restriction might have prevented the Christchurch shooter from broadcasting the attack using his account.
“Terrorism and violent extremism are complex societal problems that require an all-of-society response,” Amazon, Microsoft, Facebook, Google and Twitter said in their joint statement. “For our part, the commitments we are making today will further strengthen the partnership that Governments, society and the technology industry must have to address this threat.”
The Christchurch Call reflects heightened global frustrations with Silicon Valley, which has struggled around the world to stop malicious actors from weaponizing social media platforms to deadly ends. Facebook has been linked to ethnic violence in Sri Lanka, for example, and the company has admitted it failed to prevent the platform from becoming a tool to foster genocide in Myanmar.
In response, regulators have introduced or adopted tough new rules over the past year that require social media sites to take down offensive content faster or face steep fines. French regulators, meanwhile, embedded a top government official at Facebook for six months to study the company’s efforts to combat hate speech.
U.S. officials also have struggled with the rise of online extremism and its ability to incite real-world violence. Self-proclaimed neo-Nazis used Facebook as an organizing tool ahead of the deadly 2017 rally in Charlottesville, for example, and the shooter who opened fire on a synagogue in Pittsburgh last year had long posted anti-Semitic screeds on fringe websites.
But even federal policymakers who have grown furious with Silicon Valley have struggled to rein in the industry without violating the First Amendment, which protects even repugnant speech. The issue loomed large over U.S. officials as they decided whether to endorse the Christchurch Call, White House officials told The Post.
The disagreement over the Christchurch Call highlighted a long-simmering tension between officials in Europe, which has traditionally shown a greater willingness to rein in and regulate Internet firms, and the United States, where companies are given broad leeway to police themselves.
Adrian Shahbaz, a research director at Freedom House, a think tank partially funded by the U.S. government, said he was “alarmed by the vague call for governments to ban more speech” in a way that could have “negative consequences for human rights.”
Greater regulation on tech companies is needed, but “we shouldn’t be calling on tech companies to remove content without also demanding that they act with far more transparency and accountability,” he said. “Otherwise, companies will censor first and ask questions later, leaving users with little recourse to appeal poor decisions and uphold their right to free expression.”
Signers included Australia, Canada, the European Commission, France, Germany, India, Indonesia, Ireland, Italy, Japan, Jordan, the Netherlands, New Zealand, Norway, Senegal, Spain, Sweden and Britain.