In the new report reflecting the company’s activities between April and September, Facebook said it had found and removed roughly 1.5 billion fake accounts, while targeting 12.4 million pieces of terrorist propaganda, 2.2 billion pieces of spam and 66 million pieces of content that ran afoul of rules barring adult nudity and sexual activity.
In doing so, Facebook said it had made progress at deploying its thousands of newly hired reviewers — and powerful artificial-intelligence tools — to enforce its community standards more aggressively. The company said that it catches more than 95 percent of nudity, fake accounts and graphic violence before users report it to Facebook.
But for hate speech and a related category, bullying, the company catches only 51.6 percent and 14.9 percent of incidents, respectively, before they are flagged by Facebook users.
On a call with reporters, chief executive Mark Zuckerberg said that the company was making changes to how it handles decisions about what pieces of content to take down. Facebook will soon create an independent board to review people’s appeals of Facebook’s decisions, and will also publish the minutes of its meetings when new content policies are decided.
Facebook has long faced complaints about a lack of transparency in its decision-making around removing posts. Earlier this year, the company published some of its guidelines for the first time.
Zuckerberg on Thursday also sought to beat back fresh criticism of his company's political activities, a day after The New York Times reported that Facebook had hired a Republican opposition research firm to discredit its critics. Zuckerberg stressed to reporters that he did not know Facebook had hired the organization, called Definers Public Affairs, with which the company has since severed ties.
“This type of firm might be normal in Washington,” he said, “but it’s not the kind of thing I want Facebook associated with.”