FACEBOOK AND TWITTER don’t want to make the same mistakes that marred this country’s last presidential election, but righting old wrongs can introduce new obstacles.
Facebook justified its choice by citing a year-old policy intended to keep posts from spreading widely when the site detects “signals” of falsehood. This is a smart rule — a sort of circuit breaker to stop the platform’s own internal mechanics from catapulting a lie to viral status. The problem is that no one knows what the signals in question are: whether they are objective measures, such as the number of people who reshare a piece and then delete their reshares, or subjective surmises, such as the possibility, in this case, that the New York Post article was part of a propaganda campaign against the former vice president. And though Facebook says it has applied this stricture before in sensitive situations, it is not standard practice.
Twitter, on the other hand, based its more radical intervention on an entirely distinct standard: a prohibition on the sharing of hacked materials. Critics were quick to ask why this restriction didn’t also apply to the New York Times’s reporting on President Trump’s tax returns, or to any number of prizewinning journalistic products from years past. Twitter backtracked, announcing that it will now remove hacked content only when it is shared directly by hackers or their accomplices, and that the inclusion of personal details in the New York Post story was what actually triggered the URL blocking. But Twitter never shared its basis for believing the materials were hacked. And while the site doesn’t have a general policy against misinformation, it strains credulity to imagine that its action had nothing to do with doubts about the legitimacy of the story it shut down.
The contradictions that came with these calls have stirred up a fresh firestorm of accusations of anti-conservative censorship. Allegations of partisan bias in content-moderation decisions have never been borne out by the evidence. Yet it’s much easier to launch such allegations when platforms aren’t clear about precisely what their rules are and precisely how they’re being applied. Twitter would do well to develop a comprehensive misinformation policy; Facebook should better explain how its current misinformation policy actually operates. And both must figure out how their existing policies interact with the concerns about hack-and-leak operations haunting the upcoming election. One way of restoring trust in the public sphere is to stem the transmission of untrue tales — but that can create distrust of its own unless it is done forthrightly.
The Editorial Board on tech
Read some of the Washington Post Editorial Board’s recent opinions on technology policy and tech’s role in society:
- Spyware is thriving, dangerous and unrestrained. It’s time to change that.
- Don’t want the FTC to act on antitrust? Tell Congress to get moving.
- Biden said we’d ‘find out’ whether Putin would act on ransomware. Now we have.
- Want to know how federal law enforcement uses facial recognition? Tough luck.
- The United States can’t keep ignoring India’s Internet abuses
- Your calendar should be as safe from government snooping in the cloud as in your desk
- The plans need work, but it’s good Congress is finally bringing substance to the Big Tech debate
- Farewell, TikTok ban. The White House has introduced a better approach.