The Washington Post | Democracy Dies in Darkness

Opinion How is Facebook doing on civil rights? Depends whom you ask.

Facebook CEO Mark Zuckerberg speaks in D.C. in 2019. (Nick Wass/AP)

How is Facebook doing on civil rights? According to the company, it is moving forward on its “path to enhance protections for marginalized communities.” According to an article published in The Post this week, the starting line was even further back than previously believed.

Facebook in 2018 hired independent auditors to review racial issues on its platform; in 2020, those auditors issued a report criticizing the company’s policy decisions in the area as a “tremendous setback.” Now Facebook is touting revised rules on hate speech, improved representation of minorities in its workforce and a fortified civil rights team dedicated to designing products with vulnerable populations in mind. The company, which recently changed its corporate name to Meta, has also pledged to more effectively measure whether people’s experiences with its technology, including the machine-learning tools used for moderating content, differ across race. That’s all encouraging, especially in light of an investigation in this newspaper suggesting that Facebook previously withheld this sort of data — perhaps from the auditors themselves.

The Post reported that researchers at Facebook discovered the platform’s race-blind policies were disproportionately harming minorities. The “worst of the worst” language on the site was removed less often when it targeted these populations, and more often when it targeted White people and men. The problem, in part, is that, without intervention, algorithms tend to amplify the features of the society they’re learning from. So the researchers recommended that Facebook tailor the tool to focus only on five groups — those who are Black, Jewish, LGBTQ, Muslim or of multiple races — deemed to be in special need of protection. Executives pushed back, concerned about backlash from “conservative partners” but also worried that groups not afforded deliberate attention, such as women, would suffer because of the change.

These events have provided more fuel for Facebook’s critics. But the story is also an example of just how difficult it is to build artificially intelligent systems that treat people fairly. There’s always the question of fair to whom. Legislators have set their regulatory sights on the machine-learning models that power platforms: proposing to deprive companies of their immunity from being sued for material posted by third parties if it is algorithmically promoted, or otherwise to prevent algorithms from “discriminating.” Yet these rules could end up dissuading sites from just the sort of curation urged by those who desire more robust consideration for minorities — or from curation at all.

The point is not that Facebook has chosen correctly, but that choosing correctly is hard to do. Moreover, restricting firms’ freedom when it comes to content moderation can hurt as much as allowing too much freedom can. The remedy for the harms that moderation can do, including with regard to civil rights, must avoid blunt-force prohibitions and focus instead on careful and transparent study of automated systems’ effects. This is exactly what Facebook is already promising — and what, in the past, it has failed to deliver.