Two months after deadly attacks inspired by online hate left dozens dead in Christchurch, New Zealand, the country’s prime minister will urge other governments and top tech giants to commit to combating the spread of extremism on social media.
The “Christchurch call” — a voluntary pledge presented on the sidelines of a G-7 gathering in Paris — reflects heightened global fears that Facebook, Google and Twitter have become conduits for terrorism, unable to keep pace with malicious actors who’ve proven deft at evading Silicon Valley’s efforts to prevent harmful content from going viral on the Web.
New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron organized Wednesday’s meeting in Paris as a response to Ardern’s plea in March for greater social media accountability after a shooter killed 51 people at two mosques and streamed the attack live for millions to see online.
Officials from countries including Canada, Britain and the United States are expected to attend, along with representatives of Facebook, Amazon, Google, Twitter and other tech giants. Twitter CEO Jack Dorsey is making the trip to Paris, the company said this week, while the White House is dispatching President Trump’s top tech adviser. (Amazon CEO Jeff Bezos owns The Washington Post.)
Ahead of the summit, Facebook on Tuesday announced two efforts to address regulators’ concerns and stop the spread of harmful content on its services. Users who violate Facebook’s “most serious policies” — such as sharing a link to statements from a known terrorist group — will now be banned from broadcasting live videos on the platform for set periods of time. Facebook said that, had the policy been in place earlier, it might have stopped the Christchurch shooter from live-streaming the attacks on the mosques.
The tech giant also pledged to commit $7.5 million to work with researchers at three universities to improve its ability to detect photos and videos that have been manipulated. The new investment follows an admission from Facebook that malicious users evaded its censors by uploading slightly altered versions of the Christchurch attack video.
Facebook, Google and Microsoft intend to sign the Christchurch agreement, the companies confirmed. Twitter did not indicate its plans, but it described the meeting as a “critical opportunity to listen and learn from various heads of state and digital ministers from across the world.”
The White House has not said whether it will sign the document, and a spokesman declined to comment Tuesday. Behind the scenes, though, Trump administration officials said they have been actively negotiating with French and New Zealand officials over the document’s text. While the administration supports the document’s goals, it fears some of its language would run counter to the Constitution’s First Amendment guarantees of freedom of speech, according to two people familiar with the deliberations but not authorized to discuss them on the record.
Spokespeople for Macron and Ardern did not respond to requests for comment.
Around the world, the Christchurch attack sparked renewed scrutiny of social media. Facebook, Google and Twitter each have hired thousands of reviewers and created new artificial-intelligence tools with the goal of thwarting hate speech, extremism and terrorism online. Despite those efforts, the tech giants were unable to stop the spread of the Christchurch videos, prompting government officials across the world to call for more regulation.
Fewer than 200 people watched the live stream during the attack; Facebook said it removed the video 29 minutes after the broadcast began. But within 24 hours, users had attempted to re-upload the video to Facebook more than 1.5 million times. About 300,000 of those copies slipped through and were published on the site before being taken down by its content-moderation teams and blacklist algorithms.
Beyond Facebook, the Global Internet Forum to Counter Terrorism — a trade group formed by Facebook, Microsoft, Twitter and YouTube — has said more than 800 visually distinct videos of the attack have been “fingerprinted” for its automatic ban list.
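The forum’s “fingerprinting” approach is a form of perceptual hashing: a compact signature is computed from a video’s visual content so that slightly altered re-uploads still match a ban list. The sketch below is purely illustrative and is not GIFCT’s actual system; the frame data, hash size and distance threshold are all hypothetical, and real systems use far more robust signatures.

```python
# Illustrative sketch (not GIFCT's actual system): a toy "average hash"
# fingerprint with a Hamming-distance ban-list check. All data here is
# hypothetical; real perceptual hashes are far more sophisticated.

def average_hash(pixels):
    """Hash an 8x8 grayscale frame: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_banned(frame_hash, ban_list, threshold=5):
    """Near-duplicates (small Hamming distance) still match the ban list."""
    return any(hamming(frame_hash, h) <= threshold for h in ban_list)

# Hypothetical frames: an original and a slightly brightened re-upload.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
altered = [[min(255, v + 10) for v in row] for row in original]

ban_list = {average_hash(original)}
print(is_banned(average_hash(altered), ban_list))  # small edits still match
```

The point of hashing on relative brightness rather than exact pixel values is that uniform edits (brightness shifts, mild recompression) leave the fingerprint nearly unchanged, which is why the forum can block “visually distinct” variants without storing the videos themselves.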
In response, Ardern has urged the tech giants to take more responsibility and ensure they “are not perverted as a tool for terrorism.” She has said the Christchurch call is not about limiting free expression, but rather “preventing violent extremism and terrorism online.”
“I don’t think anyone would argue that the terrorist on the 15th of March had a right to live-stream the murder of 50 people,” she said last month.
Regulators in the European Union have proposed rules requiring tech giants to take down terrorist content within an hour or face steep fines. In the United Kingdom, government leaders last month put forward a proposal that could see social-media sites penalized for failing to combat a wide array of harmful content, including hate speech and cyberbullying. In Sri Lanka, authorities grew so concerned about social media’s potential to spread violence that they shuttered access to Facebook and other sites after attacks on churches last month.
Last year, France struck a six-month deal with Facebook that allows regulators unprecedented access to study the tech giant’s approach to fighting posts and photos that attack people on the basis of race, religion, sexuality or gender. Macron since then has pursued new rules targeting tech companies’ efforts to combat hate speech and met with Facebook CEO Mark Zuckerberg last week.
The flurry of activity stands in stark contrast to the United States. Facebook served as an organizing tool for the deadly neo-Nazi rally in Charlottesville in 2017, for example, and lesser-known fringe websites hosted hateful screeds penned by the man accused of opening fire on a Pittsburgh synagogue last year. But the First Amendment’s protections for free speech have left many policymakers reluctant to regulate social media even when those companies have erred.
Alistair Knott, a computer-science professor at the University of Otago in New Zealand, said companies and governments can combat violent content in a way that does not violate rights to free expression by focusing on how the sites bring it to users, through Internet searches, live-streaming videos and social-media feeds.
The systems would not have to actively remove dangerous content but could filter it out so it is not recommended or shown in people’s search results and news feeds, limiting its spread.
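Knott’s suggestion amounts to suppressing flagged material at the ranking layer rather than deleting it from hosting. A minimal sketch of the idea, with entirely hypothetical posts and flags:

```python
# Hypothetical sketch of ranking-layer filtering: flagged items remain
# hosted but are excluded from surfaced results (feeds, search,
# recommendations). The posts, scores and flags below are invented.

posts = [
    {"id": 1, "score": 0.90, "flagged_extremist": False},
    {"id": 2, "score": 0.95, "flagged_extremist": True},
    {"id": 3, "score": 0.70, "flagged_extremist": False},
]

def rank_feed(posts):
    """Drop flagged posts from surfaced results without deleting them."""
    eligible = [p for p in posts if not p["flagged_extremist"]]
    return sorted(eligible, key=lambda p: p["score"], reverse=True)

print([p["id"] for p in rank_feed(posts)])  # → [1, 3]
```

The design choice here is that the flagged post is never deleted, only excluded from what the ranking function returns, which is what lets this approach sidestep the hardest removal-versus-speech questions.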
Knott said the United States should be involved in the effort, and worried that its refusal to engage could undermine an issue of global importance — particularly because virtually all of the major social-media companies are based there.
“Companies like Facebook are becoming more aware and more willing to make reforms than the U.S., and they’re doing it purely on grounds of public opinion,” Knott said. But “ultimately the regulation of these tools that transmit information should be a matter for governments, not just the whims of private companies.”