Lawmakers, human rights activists and the United Nations have criticized the role Facebook has played in Myanmar’s crisis. Facebook’s pledge to be more involved is part of its broader defense against the spread of controversial or false information on its network globally. Chief executive Mark Zuckerberg has pledged to hire more staff to review posts for hate speech.
But Facebook product manager Sara Su said people alone are not able to catch all bad content. Much of Facebook’s effort relies on artificial intelligence, which Zuckerberg has pointed to as a tool that social media companies can use to parse a high volume of posts and flag potential problems.
However, AI is far from capable of reliably monitoring and evaluating hate speech or false information. Zuckerberg has said it will take five to 10 years to train AI to recognize the nuances.
The technology is being tested in Myanmar, where users flag only a small share of posts for potential policy violations. Facebook on Wednesday said artificial intelligence is able to flag 52 percent of all the content it removes in Myanmar before it is reported by users.
Facebook did not give an estimate of how many pieces of content it has removed, making it difficult to assess the scale of the problem. But in an independent investigation, Reuters found more than 1,000 posts, comments, images and videos calling for violence against the Rohingya people in the last week.
The company said it is also enforcing in Myanmar a recently updated policy addressing “credible violence,” which sets standards to remove content that has the “potential to contribute to imminent violence or physical harm.”
Facebook largely hesitates to remove misinformation across its network, preferring to demote false information in the news feed using its algorithms. But it is willing to take a stronger hand in Myanmar due to the violence linked to misinformation. Facebook said it is undertaking similarly focused enforcement strategies in Sri Lanka, India, Cameroon and the Central African Republic.