The Washington Post
Democracy Dies in Darkness

Facebook removed 3 billion fake accounts over a six-month period

The company also revealed new data about efforts to remove prohibited content around guns and illegal drug sales.

The Facebook logo is seen at VivaTech, a gathering of startups and tech leaders, in Paris on May 16, 2019. (Charles Platiau/Reuters)

Facebook said it had removed more than 3 billion fake accounts between October and March, a spike in banned activity that underscores the social-networking company’s ongoing struggle to clean up its platform.

Facebook revealed the new figure Thursday as part of its updated transparency report, which also detailed the prevalence of hate speech, graphic photos and videos, and other abusive content on its platform.

Facebook said the billions of accounts it removed over the six-month period, which it attributed to “unsophisticated” bad actors caught in the act of creating spam profiles, were “never considered active” and so did not count toward the company’s total number of monthly active users. In the first quarter of this year, Facebook reported 2.3 billion monthly active users, a figure that investors track closely to chart the company’s popularity and growth. Facebook estimates that about 5 percent of those active accounts are fake.


Facebook chief executive Mark Zuckerberg on Thursday also mounted a fresh defense of his company at a time when critics — including Facebook co-founder Chris Hughes — have called for its breakup. In a widely read op-ed published earlier this month, Hughes pointed to Facebook’s failures in dealing with the viral proliferation of disinformation and other ills as he called on federal antitrust regulators to probe and penalize the social-networking giant.

Zuckerberg pointed to Facebook’s heightened investments in safety and security. “We’re able to do things that are not possible for other companies to do,” he said during a call with reporters to discuss the transparency report. “When you look at it, we really need to decide what issues we think are the most important to address. In some ways, some of the remedies cut against each other.”

Between October and March, Facebook reported it removed or labeled 11.1 million pieces of terrorist content, 52.3 million instances of violent or graphic content and 7.3 million posts, photos or other uploads containing hate speech. In each case, the takedowns marked an increase from recent months, which Facebook attributed to its heightened efforts to deploy artificial intelligence tools and more human reviewers to spot, and potentially remove, posts, photos and videos that violate its rules.


Facebook for the first time also detailed its efforts to combat prohibited posts about guns and drugs, removing about 1.4 million pieces of content that violated its rules against selling guns, gun parts or ammunition, and 1.5 million items about drugs, including marijuana.

Facebook’s latest transparency report reflects its heightened efforts to demystify its practices for spotting and removing the most harmful content online. Its well-documented missteps have frustrated regulators around the world, who banded together last week to call on Facebook and other social-media companies to police their platforms more aggressively against the rise of online extremism. The “Christchurch call” followed the March attacks on two mosques in New Zealand, which were broadcast live on Facebook.

Facebook also has fielded considerable criticism for its treatment of the workers who serve as the site’s content moderators. The vast majority are not Facebook employees, and many have long complained about low wages and inadequate benefits despite serving as the company’s first line of defense against graphic, troubling content.

On Thursday, Zuckerberg acknowledged there’s “a lot of work ahead, not just on these specific challenges but on content issues more broadly.” The Facebook chief executive added that the answer would likely include government regulation. “I don’t think companies by themselves should be making all the decisions . . . so I am fully behind regulation,” Zuckerberg said.

Going forward, Facebook pledged to publish its transparency report quarterly and start including data about its photo-sharing service, Instagram. The tech giant also said it continues to refine its artificial-intelligence tools, which have aided its efforts to take down certain kinds of content, such as suspicious accounts, but remain less capable of tackling the rise of hate speech.

But Zuckerberg acknowledged that Facebook may pose a challenge to itself as he pivots the company away from public sharing and toward more private, impermanent messaging that is encrypted, so that neither the company nor governments can see it. Abusive content, Zuckerberg said, could be “harder to find . . . without being able to look at the content itself.”