In the Wall Street Journal on June 2, an article headlined “The myth of systemic police racism” argued the “charge of systemic police bias was wrong during the Obama years and remains so today.” Like many others making this case, the piece cited an article published last year in the Proceedings of the National Academy of Sciences (PNAS), by researchers at Michigan State University and the University of Maryland, who concluded, “We did not find evidence for anti-Black or anti-Hispanic disparity in police use of force across all shootings, and, if anything, found anti-White disparities … ” Before its retraction, the study received widespread, and largely unquestioning, coverage by news outlets across the political spectrum.
But the study was fundamentally flawed, and the authors have admitted as much — which is why they took the extraordinary step of withdrawing it. It’s important to grasp how the paper went wrong, because some people, including Manhattan Institute fellow Heather Mac Donald, the author of that Wall Street Journal opinion piece, continue to claim it was retracted only because it had become politically controversial (“I Cited Their Study, So They Disavowed It”). The authors deny this explicitly.
What did this debunked study do? Drawing on new databases assembled by The Washington Post and the Guardian newspapers, the study focused on a tiny, but important, fraction of police-civilian encounters: more than 900 fatal police shootings in 2015. Of the killings, 501 involved white people, 245 black people and 171 Latinos. The authors gathered additional information on the race, sex and experience of the officers involved. The study promised to answer two questions: Which groups of civilians were more likely to be shot by police, and which groups of officers were more likely to shoot them?
But the analysis went wrong from the start. To begin to measure racial bias in police killings, careful researchers must ask: How often do officers use fatal force out of all encounters between minority civilians and the police? They should then compare this with the same analysis for white civilians, accounting for relevant differences between minority and white encounters.
That’s not what the paper did. Instead, it looked only at fatal encounters and asked which group of civilians, in average circumstances, appears more often among the victims. In other words, the authors analyzed how often fatally shot civilians were black or Hispanic. But they confused this with a much more important question: How often are black and Hispanic civilians fatally shot? It’s a basic statistical error, conflating two distinct conditional probabilities that a centuries-old tenet of statistical analysis, Bayes’ theorem, tells us are not interchangeable. These quantities can differ enormously: When officers encounter many more white civilians (because of whites’ majority status, for example), the proportion of killings involving black civilians can be small, even if encounters with black civilians are more likely to end in shootings.
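To see how far apart these two quantities can be, consider a toy calculation. All of the numbers below are made up for illustration; they are not real policing data.

```python
# Hypothetical numbers, for illustration only -- not real policing data.
# Suppose police have many more encounters with white civilians (the
# majority group), but each encounter with a black civilian is more
# likely to end in a shooting.
white_encounters, black_encounters = 90_000, 10_000
p_shot_white, p_shot_black = 0.001, 0.002  # per-encounter shooting rates

white_shot = white_encounters * p_shot_white  # 90 victims
black_shot = black_encounters * p_shot_black  # 20 victims

# The study's quantity: among people shot, what fraction were black?
frac_victims_black = black_shot / (white_shot + black_shot)  # ~0.18

# The policy-relevant quantity: how much likelier is a black civilian
# to be shot, per encounter?
risk_ratio = p_shot_black / p_shot_white  # 2.0

print(f"Black share of victims: {frac_victims_black:.0%}")        # 18%
print(f"Per-encounter risk ratio (black/white): {risk_ratio:.1f}")  # 2.0
```

In this hypothetical, black civilians are twice as likely to be shot in any given encounter, yet they make up only 18 percent of victims, simply because officers encounter far more white civilians. Looking only at the victim pool, as the study did, would miss the disparity entirely.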
The study indeed found, unsurprisingly, that white shooting victims outnumbered black and Hispanic victims in various circumstances. The five authors wrote that “in the typical shooting … a person fatally shot by police was 6.67 times less likely … to be Black than White and 3.33 times less likely … to be Hispanic than White” and concluded: “Thus, in the typical shooting, we did not find evidence of anti-Black or anti-Hispanic disparity.” Put another way, the authors mistakenly claimed to find no evidence of racial bias simply because, among “typical” fatal shootings, there were more white civilians than minorities — thereby committing the same logical fallacy as President Trump.
The paper also sought to test whether white officers were more likely to shoot minority civilians, compared with minority officers. Again, it lacked the data to do so. Instead, it looked at the civilians killed by each racial officer group and asked whether the proportion of minority victims differed, adjusting for features of the counties where killings occurred, such as income and whether the county was rural or urban. Because they did not find a strong relationship between the races of officers and the people they shot, the authors concluded police hiring reforms aimed at “increasing racial diversity [of officers] would not meaningfully reduce” racial bias in police shootings.
Instead, the study attributed shootings of minorities to “violent crime committed by Black [or Hispanic] civilians” in the counties where shootings occurred. But once again, the study failed to justify its provocative claims: In their main statistical analyses, the authors did not account for criminal behavior at the county level. Instead, they substituted data on violent crime victimization of minorities, then misleadingly presented it as evidence of criminality among minorities.
Even if they had measured crime correctly, the problem is that the overarching analysis fundamentally makes no sense: How often fatally shot civilians were minorities is simply not the same question as how often minority civilians were shot when interacting with officers. This problem is further magnified when comparing across officer racial groups, because minority officers are often assigned to patrol in co-racial neighborhoods. Without knowing how often each officer group encounters black, Hispanic, and white civilians, the analysis is completely uninformative.
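The deployment problem can be made concrete with another toy calculation, again using invented numbers. Here both officer groups behave identically, with the same per-encounter shooting rates, yet their victim pools look very different because of where they are assigned to patrol.

```python
# Hypothetical deployment numbers, for illustration only.
# Both officer groups use force at IDENTICAL per-encounter rates:
# there is no behavioral difference between them.
p_shot = {"black_civ": 0.002, "white_civ": 0.001}  # same for all officers

# But minority officers are disproportionately assigned to patrol
# minority neighborhoods, so their encounter mix differs.
encounters = {
    "white_officers": {"black_civ": 20_000, "white_civ": 80_000},
    "black_officers": {"black_civ": 70_000, "white_civ": 30_000},
}

results = {}
for group, mix in encounters.items():
    shots = {civ: n * p_shot[civ] for civ, n in mix.items()}
    results[group] = shots["black_civ"] / sum(shots.values())
    print(f"{group}: {results[group]:.0%} of victims are black")
```

In this sketch, about a third of white officers' victims are black, versus more than 80 percent for black officers, even though every officer shoots at exactly the same rate in every kind of encounter. Comparing victim proportions across officer groups, as the study did, cannot distinguish bias from assignment patterns.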
To be clear, accurately estimating racial bias in police shootings nationwide is a difficult task. Comprehensive records of lethal force — the numerator in the deaths-per-encounter ratio — only recently became available through open records requests, crowdsourcing and enterprise journalism. Subsequent research shows black males face a roughly one in a thousand lifetime chance of being killed by police, about 2.5 times the risk faced by white males. The denominator — how often racial groups encounter police — is largely unknown, because police are not required to report many kinds of encounters. But this difficulty is no excuse for shoddy work, especially on a life-or-death policy matter.
The errors in the study are indisputable. We laid them out in detail one year ago; the authors acknowledged their mistake, and now they have retracted the study entirely. In a statement about the retraction, two authors — Joseph Cesario, an associate professor of psychology at Michigan State, and David J. Johnson, a postdoctoral fellow in psychology at the University of Maryland — acknowledge the disconnect between their “careless” claims about “the probability of being shot by police” and their far more limited statistical evidence, which, they add, “does not speak to these issues.”
“We take full responsibility for not being careful enough with the inferences made in our original article,” they continue, “as this directly led to the misunderstanding of our research.”
Even the editors of PNAS now agree, writing that, upon further investigation, “the authors poorly framed the article, the data examined were poorly matched, and … unfortunately, address a question with much less public policy relevance than originally claimed.” The errors are so glaring and fundamental that, in an unusually broad demonstration of scholarly consensus, more than 800 academics and researchers from an array of fields — including computer science, criminology, political science and statistics — condemned the study as scientific malpractice.
But ideologues now seek to resuscitate this discredited work, claiming the retraction was politically motivated. It was not. Cesario and Johnson write that their “decision had nothing to do with political considerations, ‘mob’ pressure, threats to the authors, or distaste for the political views of people citing the work approvingly.”
In today’s polarized climate, it can be difficult to separate genuine scientific disputes from opinion, but as we’ve shown, the work simply does not demonstrate what it claims. Academic disputes often live in gray areas, but in some cases, research is simply objectively wrong.
Slipshod inferences have no place in such a sensitive debate. As America considers policing reforms, we must appeal to rigorous research. When we lack data, we must acknowledge uncertainty. We need to improve the nationwide collection and sharing of policing data. But gaps in knowledge must not be filled with faulty science.