A forthcoming study of misinformation on Facebook will say that false stories earned six times as many clicks as factual news during the 2020 election, The Post reports. Facebook will say the study itself misinforms.

This story is familiar: Researchers repeatedly publish findings on the far and fast spread of sensational material on social media sites, and the sites reply that the data the researchers are using is flawed — all the while withholding access to numbers they claim are more elucidating. Indeed, Facebook in August cut off access for the New York University team responsible for the study in question, purportedly to protect users’ privacy. The White House has also clashed with Facebook over its unwillingness to provide certain data on covid-19-related misinformation, and this week, Facebook replied to a lengthy letter from two congresswomen requesting similar data, including data about advertisers, with a 79-word note saying it had “nothing to share.”

Facebook has a point when it argues that misinformation is a society-wide scourge that doesn’t start or end on its products. It also has a point that the workarounds researchers routinely employ provide an incomplete picture. The NYU study focused on engagement with posts, as measured by the company’s publicly available CrowdTangle tool, and found that posts from sources its partners categorized as “untrustworthy” garnered substantially more likes, shares and other interactions than posts from trustworthy sources. But engagement reveals nothing about how many people see this content without hitting a button — which is what Facebook’s preferred metrics of impressions, or reach, attempt to capture.

On the flip side, of course, focusing only on impressions and ignoring engagement altogether doesn’t speak to the full scope of the issue either. Just as important is data on where, when and by whom misinformation spreads, so that observers can look, ideally in real time, at the off-platform context surrounding a viral on-platform post. There is no way for researchers to produce the complete picture Facebook protests they are missing if Facebook won’t let them see half of what they’re trying to paint.

Facebook is unlikely to cough up all this data on its own. The transparency report the company released in August to combat charges of opacity, as it turned out, was a tweaked version of an earlier effort that the company deemed too damaging to its image. The privacy concerns Facebook raises about data sharing are also more than a convenient excuse; current law makes it difficult for firms to grant academics access to user data without risking a hefty fine.

Congress can do more than express frustration about platforms’ stinginess: It can craft a system that compels companies to produce specific types of data under specific circumstances and shields them from privacy repercussions when they comply. The alternative is more studies all saying the same thing, and more responses from Facebook saying just the opposite.