FACEBOOK AND TWITTER don’t want to make the same mistakes that marred this country’s last presidential election, but righting old wrongs can introduce new obstacles.

Last week, Facebook reduced the distribution of a dubious story by the New York Post that smeared Democratic nominee Joe Biden, pending third-party fact-checking. Twitter blocked the URL from being shared altogether. Both platforms made the correct decision to slow what so far seem to be baseless accusations backed up by leaked emails of murky origin — yet the way the sites made that decision matters, too. The confusing and opaque process that accompanied the positive outcome threatens to render pyrrhic any victory over the forces of misinformation and meddling.

Facebook justified its choice according to a year-old policy intended to prevent posts from spreading widely when the site detects “signals” of falsehood. This is a smart rule — a sort of circuit-breaker to stop the platform’s own internal mechanics from catapulting a lie to viral status. The problem is that no one knows what the signals in question are: whether they are based on objective measures such as the number of people who reshare a piece and then delete their resharing, or whether they are based on subjective surmises such as the possibility, in this case, that the New York Post article was part of a propaganda campaign against the former vice president. And though Facebook says it has applied this stricture before in sensitive situations, it’s not standard practice.

Twitter, on the other hand, based its more radical intervention on an entirely distinct standard: a prohibition on the sharing of hacked materials. Critics were quick to ask why this restriction didn’t also apply to the New York Times’s reporting on President Trump’s tax returns, or any number of prizewinning journalistic products from years past. Twitter backtracked, announcing that it will now only remove hacked content directly shared by hackers or accomplices, and that the inclusion of personal details in the New York Post story was actually responsible for the URL-blocking. But Twitter never shared its basis for believing the materials were hacked. And while the site doesn’t have a general policy against misinformation, it strains credulity to imagine that its action had nothing to do with doubting the legitimacy of the story it shut down.

The contradictions that came with these calls have stirred up a fresh firestorm of accusations of anti-conservative censorship. Allegations of partisan bias in content-moderation decisions have never been borne out by the evidence. Yet it’s much easier to launch such allegations when platforms aren’t clear about precisely what their rules are and precisely how they’re being applied. Twitter would do well to develop a comprehensive misinformation policy; Facebook should better explain how its current misinformation policy actually operates. And both must figure out how their existing policies interact with the concerns about hack-and-leaks haunting the upcoming election. One way of restoring trust in the public sphere is to stem the transmission of untrue tales — but that can create distrust of its own unless it is done forthrightly.