During the second and third quarters of 2019, Facebook said it removed or labeled more than 54 million pieces of content it deemed violent and graphic, 18.5 million items determined to be child nudity or sexual exploitation, 11.4 million posts that broke its rules prohibiting hate speech and 5.7 million uploads that ran afoul of its bullying and harassment policies.
The company also detailed for the first time its efforts to police Instagram, revealing that it took aim at more than 1.2 million photos or videos involving child nudity or exploitation and 3 million that ran afoul of its policies prohibiting sales of illegal drugs over that six-month period.
In all four categories, Facebook took action against more content between April 1 and Sept. 30 than it did in the six months prior: Previously, the company targeted nearly 53 million pieces of content for excessive violence, 13 million for child exploitation, 7.5 million for hate speech and 5.1 million for bullying. Facebook attributed some of the spike in violations to its efforts to tighten its rules and more actively seek out abusive posts, photos and videos before users report them.
Speaking to reporters Wednesday, Facebook CEO Mark Zuckerberg warned against concluding that “because we’re reporting big numbers, that must mean there’s so much more harmful content happening on our service than others.
“What it says is we’re working harder to identify this and take action on it,” he said.
Still, Facebook’s latest transparency report arrives as regulators around the world continue to call on the company — and the rest of Silicon Valley — to be more aggressive in stopping the viral spread of harmful content, such as disinformation, graphic violence and hate speech. A series of high-profile failures over the past year have prompted some lawmakers, including Democrats and Republicans in the United States, to threaten to pass new laws holding tech giants responsible for failing to police their sites and services.
The calls for regulation intensified after the deadly shooting in Christchurch, New Zealand, in March. Video of the gunman attacking two mosques spread rapidly on social media, including Facebook, evading tech companies' systems for stopping such content from going viral. On Wednesday, Facebook offered new data about that incident, reporting that it had removed 4.5 million pieces of content related to the attack between March 15, the day it occurred, and Sept. 30, nearly all of which it spotted before users reported it.
Facebook also touted recent improvements in its use of artificial intelligence. Facebook detected 80 percent of the hate speech it removed before users did, a lower rate than in other categories but still an improvement for the tech giant, which has struggled to take swift action against content that targets people on the basis of race, gender, ethnicity or other sensitive traits.
In presenting the data, Zuckerberg took a shot at other tech companies for their decision to publish far less data about the content they take down and the means by which they remove it. The Facebook chief didn't mention Google, which owns YouTube, or Twitter by name. But his proposed solution — new regulation around transparency reporting — would affect those two competitors and the rest of Silicon Valley.
“As a society, we don’t know how much of this harmful content is out there and which companies are making progress,” he said.