Over the next five to 10 years, Facebook chief executive Mark Zuckerberg told Congress, artificial intelligence would prove a champion for the world’s largest social network in resolving its most pressing crises on a global scale — while also helping the company dodge pesky questions about censorship, fairness and human moderation.
“We started off in my dorm room with not a lot of resources and not having the AI technology to be able to proactively identify a lot of this stuff,” Zuckerberg told the lawmakers, referring to Facebook's famous origin story. Later in the hearing, he added that “over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content.”
But Facebook’s AI technology can’t do any of those things well yet, and it’s unclear when, if ever, it will be able to. Tech experts had a different explanation for why Zuckerberg spent so much time offering tributes to the much-hyped but largely unproven technology: Its shapelessness could help the company shift blame away from the humans creating it.
“AI is Zuckerberg's MacGuffin,” said James Grimmelmann, a law professor at Cornell Tech, using the film term for a mostly insignificant plot device that comes out of nowhere to move the story along. “It won't solve Facebook's problems, but it will solve Zuckerberg's: getting someone else to take responsibility.”
To many of us, AI is the amorphous super-technology of science fiction, animating friendly computers and killer robots. Silicon Valley leaders often promote that vision of AI by suggesting it will serve humanity as a quasi-magical force for good. But today’s artificial intelligence is being used in far more basic forms: driving cars, tracking cows and giving a voice to virtual assistants such as Siri and Alexa.
Facebook uses AI in understated but important ways, such as for recognizing people’s faces for tagging photos and using algorithms to decide placement of ads or News Feed posts to maximize users' clicks and attention. The company is also expanding its use, such as by scanning posts and suggesting resources when the AI assesses that a user is threatening suicide.
Facebook has pointed to early AI successes in detecting problematic content. The company has said that advances in AI have helped it remove thousands of fake accounts and “find suspicious behaviors,” including during last year’s special Senate race in Alabama, when AI helped spot political spammers from Macedonia, a hotbed of online fraud.
Facebook, Zuckerberg said Tuesday, has also been “very successful” at deploying AI to police against terrorist propaganda. “Today, as we sit here, 99 percent of the ISIS and al-Qaeda content that we take down on Facebook, our AI systems flag before any human sees it,” he said. (Nonprofit groups such as the Counter Extremism Project have argued that Facebook has exaggerated its achievement and failed to crack down on well-known Islamist extremists.)
But those limited cases, experts said, were helped by geography and required human moderators to make the final ruling. Real-world cases of violent videos, hate speech and dangerous content — the ones plaguing Facebook every day — are much more subtle, widespread and difficult to police. Zuckerberg said he was optimistic that Facebook’s AI would, within five to 10 years, be able to comprehend the “linguistic nuances” of content with enough accuracy to flag potential risks.
“AI is extremely far off from being able to understand social context and nuance,” Grimmelmann said. “Even humans have a hard time distinguishing between hate speech and a parody of hate speech, and AI is way short of human capabilities.”
Zuckerberg both understated the problem and overstated AI’s abilities, experts said. For example, Facebook’s AI has been deemed technically incapable of spotting discriminatory housing ads, which violate the federal Fair Housing Act and were a problem on the social network until the site changed its policy late last year.
The limitations of AI go much further than Facebook, experts say. No AI on the market is trained well enough to understand the social dimensions and verbal eccentricities of human speech, slang and dialect. (Anyone who uses Siri can attest to that.)
The worst-kept secret to Silicon Valley’s AI push, experts said, has been the tech industry’s army of human moderators. Those often-low-wage contract workers spend their days screening posts for offensive or disturbing content, indirectly helping train the artificial intelligence program on what problems and patterns to look for.
Facebook, Google and other tech giants are ramping up their content-moderation staffing, and Zuckerberg said his company aims to have more than 20,000 people working on security and content review by the end of the year. Today’s AI, experts said, is still miles away from being a responsible alternative to a human looking at a screen.
Facebook’s plan is “continuing to grow the people who are doing review in these places with building AI tools, which — we're working as quickly as we can on that, but some of this stuff is just hard,” Zuckerberg said. “That, I think, is going to help us get to a better place on eliminating more of this harmful content.”
Robyn Caplan, a researcher at the think tank Data & Society, said Zuckerberg’s optimism seemed to clash with the more pragmatic conversations she’s had with representatives from platforms similar to Facebook, who have stressed that AI can help flag questionable content “but cannot be trusted to do removal.”
At a major conference on content moderation in February at the Santa Clara University law school, where executives from Facebook, Google and Reddit detailed their operations, Caplan said nearly everyone agreed that human moderators were still the reigning gatekeepers for information on the Internet’s most popular sites.
“AI can’t understand the context of speech and, since most categories for problematic speech are poorly defined [by necessity], having humans determine context is not only necessary but desirable,” she said.
Not every senator on Tuesday was so receptive to Zuckerberg’s AI-as-savior talking point, including Sen. Patrick J. Leahy (D-Vt.), who questioned whether Facebook’s AI was already failing by not stemming the spread of hate speech during the crackdown on Rohingya in Burma, also called Myanmar.
“You say you use AI to find this,” Leahy said, pointing to a poster that showed Facebook posts calling for the murder of Muslim journalists. “That threat went straight through your detection systems. It spread very quickly, and then it took attempt after attempt after attempt, and the involvement of civil society groups, to get you to remove it. Why couldn't it be removed within 24 hours?”
Zuckerberg said the company was working on hiring more Burmese-language content reviewers and deleting the accounts of “specific hate figures.” “What's happening in Myanmar is a terrible tragedy, and we need to do more,” he said.
But tech experts said AI could also create new problems. The same bad actors behind viral hoaxes and fake accounts may be able to use AI to evade Facebook’s filters, make fake videos or spearhead targeted harassment campaigns, they said.
AI also “can't solve political problems; it can't make people agree,” Grimmelmann said. “What's ‘fake’ news depends on who you ask: Kicking the question over to AI just means hiding value judgments behind the AI.”
The real issue, experts said, is that the problems plaguing Facebook may be battles that no one can truly win. Instead of acknowledging that grim fact, though, Zuckerberg has taken to gesturing vaguely at the future — and at a version of AI that could eventually save the day.
As Ryan Calo, an assistant professor at the University of Washington law school, tweeted Tuesday, “ ‘AI will fix this’ is the new ‘the market will fix this.’ ”