Facebook recently announced it will bar misrepresentations about the decennial survey from its site, whether they appear in everyday posts from ordinary users or in paid advertisements from politicians usually exempt from fact-checking. The policy treats falsehoods more strictly than the platform usually does, and, paired with action on medical misinformation, it may be a sign that companies are increasingly grappling with how what happens in their online worlds affects what happens in what we used to call the real world.
Facebook’s census strategy is similar to its rules against voter suppression. The aim is to protect a system essential to democracy from malicious actors both here and abroad. Someone who sees a false deadline for the census form on Facebook may fail to submit that form on time, and then go uncounted. Facebook has heard from civil rights advocates worried about that scenario playing out against minorities, and it is properly taking unilateral action.
Slightly more complicated is Facebook’s removal last week of ads purchased by pages affiliated with personal-injury lawyers that link the HIV prevention medication PrEP to severe bone and kidney damage. The claims are false, as LGBT activists have been pointing out for months, and Facebook yanked the paid posts, after some delay, once its fact-checking partners confirmed as much. The harm is obvious, too, both from the PrEP ads and from the countless other health hoaxes that have gone viral this past year. But Facebook is choosing to approach the threat largely case by case through its third-party fact-checkers (except when it comes to vaccines), rather than formulating a consistent policy on medical misinformation that could preempt future problems.
Admittedly, crafting such a policy is difficult. The PrEP ads contained a kernel of truth about the drug when it is used for treatment rather than prevention, which the advertisers warped to their advantage. And some products that medical professionals agree are harmless have nonetheless drawn penalties from juries.
Facebook is muddling through the challenges of moderating a platform on which 2 billion people speak about countless controversial subjects. When does it make sense to impose a bright-line policy prohibiting a category of content, as Facebook has done for census misinformation? When is it preferable to address dangerous posts ad hoc, as it has done for most medical misinformation? These questions aren’t easy to answer, but Facebook will have to answer them — transparently, thoughtfully and not only when critics start yelling — if the company wants to prove its newfound responsibility is more than just reactive.