Policymakers around the globe are demanding social media companies be held accountable for the spread of hateful content on their platforms as the tech giants struggle to remove violent video footage of the New Zealand terrorist attack.
Sen. Richard Blumenthal (D-Conn.) wants Congress to hold an immediate hearing with Facebook and other technology platforms to address the “abject failure” to stop the spread of graphic videos and messaging:
Facebook & other platforms should be held accountable for not stopping horror, terror, & hatred—at an immediate Congressional hearing. They must answer for an apparent abject failure to stop shock video & hate messaging.— Richard Blumenthal (@SenBlumenthal) March 16, 2019
After Facebook removed 1.5 million videos of the shooting rampage at two mosques in Christchurch within the first 24 hours of the attack -- and there were still many more available online -- New Zealand Prime Minister Jacinda Ardern said she wants answers.
"This is an issue that goes well beyond New Zealand but that doesn’t mean we can’t play an active role in seeing it resolved," Arden said. "This is an issue I will look to be discussing directly with Facebook."
U.K. Home Secretary Sajid Javid said on Twitter that “enough is enough”:
The U.K. lawmaker who leads the Digital, Culture, Media and Sports Committee in the House of Commons said there needs to be “a serious review” of why the companies’ attempts to police the content weren’t more effective:
It's very distressing that the terrorist attack in New Zealand was live streamed on social media & footage was available hours later. There must be a serious review of how these films were shared and why more effective action wasn't taken to remove them.https://t.co/lk9UYWhIp4— Damian Collins (@DamianCollins) March 15, 2019
The growing international outcry could be a game-changer for Silicon Valley companies wary of more regulation.
Other countries, particularly in Europe, have been adopting tougher rules when it comes to hate speech -- and it’s likely that the toughest restrictions on the technology companies' content moderation practices will continue to be outside the United States.
Countries such as Germany and the United Kingdom are setting penalties for companies that fail to remove harmful content. In Germany, regulators can fine companies that fail to remove illegal content within 24 hours. In the United Kingdom, ministers are planning to establish a new technology regulator that could dole out fines in the billions if companies such as Facebook or Google (which owns YouTube) fail to remove harmful content from their platforms. The actions regulators take in those countries could set the tone globally for how governments address the proliferation of violent content on social media.
There could also be action in the U.S. The sheer volume of videos spread across various social networks could reignite debate over whether Congress needs to update a decades-old law that shields companies from legal liability for content posted on their platforms.
Less than six months ago, in the wake of the massacre at a Pittsburgh synagogue, hate speech linked to the attack rekindled debate in Congress over whether Section 230 of the Communications Decency Act needed to be updated.
The provision generally protects tech companies from legal action related to content that people have posted on their websites. Sen. Mark R. Warner (D-Va.) said last year the law might need an overhaul.
"I have serious concerns that the proliferation of extremist content — which has radicalized violent extremists ranging from Islamists to neo-Nazis — occurs in no small part because the largest social media platforms enjoy complete immunity for the content that their sites feature and that their algorithms promote,” Warner, the top Democrat on the Senate Intelligence Committee, told my colleague Tony Romm in the fallout of the Pittsburgh shooting. He did not comment this weekend on whether he would renew this charge after the New Zealand attack.
The industry has largely resisted any changes to the law. As my colleague Tony said on Twitter in the hours following the New Zealand shooting:
At what point will US lawmakers just say “enough” and strip these platforms of CDA 230 protections in response to the mass proliferation of videos from a shooting? I mean that — like what is it actually going to take for that convo to happen despite the intense industry lobbying— Tony Romm (@TonyRomm) March 15, 2019
Under previous political pressure, the companies have already made investments to better police harmful content, ranging from improved algorithms to expanded ranks of human content moderators. But expect renewed questions from policymakers across the world over whether those investments were enough.
Tech companies “have a content-moderation problem that is fundamentally beyond the scale that they know how to deal with,” Becca Lewis, a researcher at Stanford University and the think tank Data & Society, told my colleagues Friday. “The financial incentives are in play to keep content first and monetization first.”
BITS: YouTube took unprecedented steps to quell the spread of videos of the New Zealand massacre. But even its attempts to temporarily disable search functions and speed up automated review functions were outmatched by a flood of repackaged and recut videos designed to outsmart the company's systems, my colleagues Elizabeth Dwoskin and Craig Timberg report. As soon as the company's "incident commanders" removed one video, another would appear, as quickly as one per second in the hour after the shooting.
“This was a tragedy that was almost designed for the purpose of going viral,” Neal Mohan, YouTube's chief product officer, said in an interview with The Washington Post. “We’ve made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done.”
The videos came much more quickly and in greater volume than in previous mass shootings, such as last fall's Pittsburgh synagogue massacre. While video from the point of view of the victims has spread after previous shootings, the New Zealand attacks were unique because the shooter apparently live streamed himself from a GoPro camera.
NIBBLES: The data scientist known for developing the quiz app that allowed the political consultancy firm Cambridge Analytica to sweep up vast information about Facebook users is suing Facebook for defamation, according to the New York Times's Matthew Rosenberg.
Facebook has repeatedly tried to shift the blame onto Aleksandr Kogan in the year since the scandal first broke, claiming Kogan said the data his app was sweeping up would be used only for academic purposes. But Kogan says the fine print in his app stated that the data could be used commercially. That violated Facebook's rules, but the company did not appear to be checking whether apps were complying.
"Alex did not lie, Alex was not a fraud, Alex did not deceive them, this was not a scam,” Steve Cohen, a lawyer for Mr. Kogan, told the Times. “Facebook knew exactly what this app was doing, or should have known. Facebook desperately needed a scapegoat, and Alex was their scapegoat.”
In a statement, Liz Bourgeois, a spokeswoman for Facebook, called the action a “frivolous lawsuit” from someone who “violated our policies and put people’s data at risk.”
BYTES: In the absence of action in Washington, some of the country's most powerful state attorneys general are signaling that they're prepared to take action against Google and Facebook, my colleague Tony reports. These law enforcement officials are concerned about the vast troves of personal information Silicon Valley businesses have collected -- and its impact on privacy and competition.
“I think what we’ve found is that Big Tech has become too big and that, while we may have been asleep at the wheel, they were able to consolidate a tremendous amount of power,” Jeff Landry (R), Louisiana’s attorney general, told Tony in an interview.
States such as Arizona and Mississippi are taking aim at Google's data collection practices, while Washington is challenging Facebook's business practices in court.
“We are in a moment where the federal government’s level of effectiveness and engagement on a range of issues, on technology, consumer protection and privacy, is limited,” Phil Weiser (D), Colorado’s attorney general, told my colleague. Absent federal intervention, he said, “states in general or state AGs are able to act.”
The New Zealand massacre highlights the unique challenges technology companies face when it comes to policing live online broadcasts -- which effectively can't be cut off, reports the Wall Street Journal's Yoree Koh. Only ten people were tuned into the shooter's broadcast on Facebook Live, according to an archive of the page, but it has now likely been viewed millions of times as it was copied across the internet in various forms.
"Artificial intelligence software isn’t powerful enough to fully detect violent content as it is being broadcast, according to researchers. And widely available software enables people to instantly record and create a copy of an online video," Koh reported. "That means the footage now lives on people’s phones and computers, showing how little control the major tech platforms have over the fate of a video once it airs."
— More technology news from the private sector:
— More technology news from the public sector:
— Tech news generating buzz around the Web: