Samantha Power is a former U.S. ambassador to the United Nations and the author most recently of “The Education of an Idealist: A Memoir.”

As a reporter years ago in Bosnia, I witnessed malicious actors spreading lies on TV that stoked fear and fomented mass violence. In Rwanda, they used radio broadcasts. Over the past decade, this pattern has repeated on a new medium, with even greater reach. In Brazil, Hungary, Myanmar, the Philippines and elsewhere, those aiming to justify human rights abuses, steal elections, or target ethnic and religious minorities have relied on Facebook.

In the United States, Facebook’s weaponization has been well documented. Despite having been overrun by foreign disinformation in 2016, and vowing to combat falsehoods this election cycle, the platform is still not doing enough to stem their spread. Since 2016, user engagement with content from outlets known to continually publish verifiably false information has more than doubled. Disinformation and conspiracy theories — whether smears of political candidates, phony images of discarded ballots or claims that “the left” deliberately infected President Trump with the coronavirus — are being used to deepen polarization, suppress voter turnout and delegitimize the election. Alarmingly, these falsehoods could also fuel civil unrest, ultimately threatening the fabric of American democracy.

Facebook founder and CEO Mark Zuckerberg recently announced steps aimed at protecting election integrity. These measures include adding correction labels to posts that prematurely declare victory before results are final and removing explicit misrepresentations about how or when to vote — such as announcements that “You don’t need to register to vote this year,” or misleading information about when ballots must be received. Facebook says it deleted more than 120,000 posts attempting “voter interference” between March and September, and that it affixed misinformation warnings to more than 150 million pieces of content viewed in the United States over that time.

Still, Facebook has done far too little to address a dangerous reality: Many posts will have already gone viral in the hours or weeks that it takes for a falsehood to be flagged, fact-checked and labeled as misinformation. It is critical that Facebook immediately go further and provide retroactive corrections to users who have unwittingly been exposed to false election-related information before it was labeled. Zuckerberg must also follow through on his company’s prior pledges to reduce the reach of pages or groups that serially circulate falsehoods.

Seven out of 10 American adults use Facebook, and more than half say they get news on the platform. Given the platform’s power, addressing these two issues could make a critical difference in reducing the risk of misinformation-fueled chaos and violence before Inauguration Day.

Facebook brought third-party fact-checking partners on board after the 2016 election debacle. It began attaching warning labels to false posts a year ago. Although the company flags misinformation to users who have shared posts with falsehoods, it — remarkably — does not alert or supply retroactive corrections to the far larger number of people who have viewed or interacted with falsehoods on the platform (whether by “liking,” commenting on or watching such content). Presumably, Facebook fears that doing so would open it up to criticism, boycotts or regulation — or that correcting the record would draw further attention to the pervasiveness of misinformation on its platform.

Facebook has the technical know-how and staff capacity to notify people who have interacted with misinformation before it is labeled “false information.” In April, Facebook announced it would begin providing links to a World Health Organization myth-debunking website to users who had interacted with harmful pandemic falsehoods (such as claims that drinking bleach “cures” the virus or that social distancing is ineffective) before Facebook acted on the content itself. Yet this is not enough: Facebook urgently needs to provide relevant corrected information to all users who have unwittingly been exposed to election-related or health falsehoods.

Misinformation is famously “sticky.” People who see a correction will not be able to “unsee” a falsehood. Nonetheless, a notification that both explains why something is false and offers true information can help unstick misinformation. “The Debunking Handbook,” updated this month by 22 academics from MIT, Cambridge and other institutions, confirms this. Facebook’s efforts to get WHO alerts to previously misinformed users show that it recognizes that providing links to accurate information can reduce harm.

Facebook must also take far more drastic steps to “detox” its algorithm. This requires significantly scaling up enforcement of its 2019 commitment to prophylactically “reduce the overall distribution” of pages and groups that serially circulate misinformation so that they appear less frequently in users’ feeds. Although the company has said it can decrease the reach of misinformation posts by 80 percent, Facebook has not been transparent about how it handles recurrent purveyors of misinformation. Many still have enormous reach, proving that too little is being done. Indeed, using Facebook partners’ own fact-checks, researchers with the global nonprofit group Avaaz have identified pages and groups with more than 150 million collective followers — and an estimated 30 billion views in just the past year — that have repeatedly shared misinformation with American citizens.

These sorts of changes should have been made long ago. But as Zuckerberg has shown with his recent bans on Holocaust denial and QAnon-linked pages and groups, it is never too late to act in the public interest.
