Mark Zuckerberg, chairman and CEO of Facebook, at the CEO summit during the annual Asia Pacific Economic Cooperation (APEC) forum in Lima, Peru, Nov. 19, 2016. (Esteban Felix/AP)

Facebook plans to use artificial intelligence to identify posts that might promote or glorify terrorism, a move that follows growing concern about terrorists’ efforts to recruit on social networks.

In a 5,700-word missive posted Thursday, Facebook CEO Mark Zuckerberg wrote that the technology “will take many years to fully develop” because it requires software sophisticated enough to distinguish between a news story about a terrorist attack and efforts to recruit on behalf of a terrorist organization.

Currently, Facebook largely relies on users to flag questionable content.

Zuckerberg also expounded on the company’s efforts to build a global community through Facebook, and wrote that its success will depend on “whether we’re building a community that helps keep us safe — that prevents harm, helps during crises, and rebuilds afterwards.”

Critics have taken aim at Facebook, along with other social networks, for what they see as insufficient efforts to police the content transmitted across their networks. From propaganda shared by suspected terrorists to suicides streamed live to friends and family, social networks have inadvertently become a breeding ground for the unsavory sides of the Internet.

Terrorism has been a particularly sensitive topic. A Department of Justice official told CNBC last October that most cases of domestic terrorism begin with communication on social media and that terrorist groups are targeting their messages at young people.

“Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community,” Zuckerberg wrote.

Facebook has attempted to tamp down potential terrorist propaganda for more than a year. A Wall Street Journal report from February 2016 said pressure from the government prompted the company to remove profiles of those suspected of supporting terrorism and scrutinize their friends’ posts more carefully.

In December, CNN reported that Facebook, Twitter, Google and Microsoft would create a shared database to track and delete “violent terrorist imagery or terrorist recruitment videos.”

Facebook’s attempts to filter (or not filter) content on its network have led to controversy before. The company came under fire for failing to stop fake news stories from circulating during the recent presidential campaign, but it has also alarmed conservatives who expressed concern that their views were being suppressed. Zuckerberg acknowledged the fake news problem in his letter Thursday, though he did not say whether artificial intelligence might help solve that challenge as well.

The letter states:

“There are billions of posts, comments and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.

“Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”

Read more from The Washington Post’s Innovations section