MENLO PARK, Calif. — For years, Facebook has built its algorithms to maximize engagement and clicks — a strategy that has helped the company garner 2.7 billion users across its family of apps, including Instagram and Messenger. But increasingly, the company is willing to work against its own software's design to combat the spread of harmful content.
On Wednesday, the company announced a slew of new features and incremental product updates that run counter to the core engineering of its own systems, tweaking them to do more to reduce the spread of misinformation and sensational news — borderline content that the company won't remove entirely but is taking a more active role in policing.
For example, the company will update its scrolling news feed algorithm by reviewing little-known websites whose articles get sudden surges of traffic on Facebook — a pattern that Facebook says internal tests showed was a red flag for misinformation and clickbait. The new metric does not mean the problematic articles will be taken down, but their traffic will be reduced in news feed, the primary screen Facebook users see when they open the app.
The question is whether these changes are tweaks on the margins or more fundamental fixes to a service that, while massively profitable, has experienced a precipitous loss of public trust. The news feed algorithm alone takes in hundreds of thousands of behavioral signals when it evaluates which posts get promotion — and it's tough to assess the impact any single fix might have on such a complex system.
The company will also expand fact-checking features for images, add privacy features to Messenger, and do more to take action on posts, images, hashtags and other content or behavior that it calls “borderline” — material or actions that don’t technically violate the company’s rules but can lead to harmful outcomes.
“As content gets closer and closer to the line of community standards, we actually see that it gets more and more engagement,” said Facebook Operations Specialist Henry Silverman at an event at the social network’s headquarters in Menlo Park, Calif. “It’s the reason for the old newspaper adage, if it bleeds, it leads.”
Facebook may be making some progress in reducing the viral spread of false information in the U.S. Outside researchers have found there were significant drops in the proportion of Americans who visited fake news websites during the 2018 congressional elections compared to the presidential election in 2016.
In Messenger, one of Facebook's chat apps, the company is adding a verified badge — the blue check mark already included on Facebook profiles — intended to help people distinguish between real accounts and impersonators. People will also be notified if they join conversations or video chats that include people they have previously blocked.
On Instagram, the company says it will remove more sexually suggestive and other borderline content from its Explore and Discovery tabs, two features where people find new accounts and posts that they didn't explicitly seek out. Users who search for terms like #porn, #cocaine and #opioids find a blank page on the Explore tab, and the company will be increasing the number of blocked terms.
“We have this higher bar when Instagram is recommending you content for accounts that you haven’t specifically chosen to follow or to search for,” said product manager Will Ruben.
Facebook is also enabling people to get more information about the groups they join. People will be able to see the history of a group's name changes, because many junk political news sites frequently change their names. If people in a group repeatedly share information that has been rated false by fact-checkers, the company will reduce the reach of the group as a whole, cutting down on the number of Facebook users to whom the algorithms suggest the group.