On Monday the Justice Department announced plans to collect data in five pilot cities on police stops, searches, arrests and case outcomes in a bid to ferret out the "possible effect of bias within the criminal justice system."
The program comes with a grand name (it launches the National Center for Building Community Trust and Justice) and a decent pot of money ($4.75 million, not an insignificant amount for research in the social sciences). With it, Attorney General Eric H. Holder Jr. is after the objective core of any policy debate: "Of course, to be successful in reducing both the experience and the perception of bias," he said in announcing the program, "we must have verifiable data about the problem."
Data of the kind he is talking about, however, is exceptionally difficult to obtain. Perhaps you think you've already seen something like it. Holder quoted two statistics Monday, the first from a recent study widely covered by the media: By the age of 23, it found, half of black men have been arrested at least once. Similar data suggest that black men are six times more likely to be incarcerated than white men.
But while that data points to overwhelming disparities in who is impacted by America's criminal justice system, it doesn't tell us much about police bias, about whether those disparities exist at least in part because officers on the street are stopping and arresting black men more often than whites for no reason other than their race.
Perhaps you think you've seen data on this point, too. The New York Civil Liberties Union produced this graphic from stop-and-frisk data in New York City in 2011, when 87 percent of all people stopped were either black or Latino:
Last fall, the Web site BKLYNR followed up with an even more compelling visualization of 2012 stop-and-frisk data in New York City, by race:
What we really want to know, though, is not whether minorities are stopped out of proportion to their share of the local population. We want to know whether they're stopped out of proportion to the rate at which they commit crime.
"What you’d want to know is this: Since African-Americans are six times more likely to be stopped and frisked, are they six times more likely to be in possession of something criminal when they’re stopped?" says John Roman, a senior fellow at the Urban Institute's Justice Policy Center.
If you were to stop people on the streets of New York entirely at random, would that still be true?
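Roman's question is, at bottom, an arithmetic one: compare each group's "hit rate" -- the share of stops that actually turn up something criminal. A small sketch, using entirely hypothetical numbers (not real NYPD figures), shows the logic:

```python
# Hypothetical stop counts and "hits" (stops where contraband or
# other evidence of a crime was found) for two groups. The numbers
# are invented for illustration only.
stops = {"group_a": 600_000, "group_b": 100_000}
hits = {"group_a": 12_000, "group_b": 6_000}

# Hit rate: share of stops that were "productive"
hit_rate = {g: hits[g] / stops[g] for g in stops}
# group_a: 12,000 / 600,000 = 0.02; group_b: 6,000 / 100,000 = 0.06

# If the heavily stopped group has the LOWER hit rate, the extra
# stops are not explained by more underlying offending -- the core
# logic of an "outcome test" for bias.
ratio = hit_rate["group_b"] / hit_rate["group_a"]
print(f"group_b stops are {ratio:.0f}x more productive")  # 3x
```

If stops were unbiased, hit rates should be roughly equal across groups; a large gap is the kind of evidence Roman is describing.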
This is a particularly difficult question to answer because we are fundamentally trying to compare stop and arrest rates (about which we have data) with criminal behavior (about which we seldom do). We know, for instance, who and how many people are arrested and convicted within a given year in any city for burglary. But we don't know how many people -- or which people -- in that city committed a burglary, with or without getting caught. That larger group by definition evades data.
We can't even simply compare the universe of people who are stopped against the universe who are ultimately arrested, because police make both stops and arrests. Any bias would be built into both samples.
For these reasons, police data alone will never yield the kind of insight that a randomized controlled trial might. But a true experiment in this context would be extremely difficult to design. The ultimate question is this: How would a police officer respond differently to two people on the street who come from similar economic backgrounds, set in otherwise identical contexts, when one is white and the other not? Perhaps we could test police departments the same way that researchers hunt for racial bias in the housing market: by sending two otherwise identical but racially different candidates to tour a home or apply for a mortgage (or to walk down a street, carrying some contraband, past a police officer).
But that still doesn't answer the question of who commits crime (in the universe of people who are both caught and not caught).
The research that we already have on police bias struggles with all of these limitations. A 2007 RAND study of stop-and-frisk in New York, for example, tried to partially address these problems by comparing different officers serving in the same neighborhood to one another, and by using Census data to reconstruct whom an officer might stop in a given neighborhood if his decisions were racially unbiased. But both of these tactics are imperfect. Acknowledging as much, the study concluded that racial disparities still exist, but are smaller than the kind of raw data that civil liberties groups have used might suggest.
Roman suggests a few other imperfect solutions.
"We don't have a study that says 'these are the proportions by which different genders and races commit crimes,'" he says. "But we do know how often they self-report using drugs. And we do know how often they are arrested for drug violations."
Research consistently says that blacks are not more likely -- they may even be less likely -- than whites to use drugs. But they are four times as likely to be arrested for some drug possession charges. "When you see similar rates of use and completely disparate rates of arrest," Roman says, "that’s evidence of racial bias that’s sort of hard to refute."
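The comparison Roman is making can be reduced to a ratio: arrests per user, by group. A minimal sketch, again with made-up numbers standing in for survey and arrest data, shows why similar use rates plus disparate arrest rates imply a disparity in enforcement:

```python
# Hypothetical per-capita rates, invented for illustration: both
# groups report drug use at the same rate, but arrest rates differ.
use_rate = {"white": 0.10, "black": 0.10}       # share reporting use
arrest_rate = {"white": 0.002, "black": 0.008}  # arrests per capita

# Arrests per user: how likely a given user is to be arrested.
# With equal use rates, any gap here reflects enforcement, not behavior.
arrests_per_user = {g: arrest_rate[g] / use_rate[g] for g in use_rate}
disparity = arrests_per_user["black"] / arrests_per_user["white"]
print(round(disparity, 2))  # 4.0 -- equal use, fourfold arrest disparity
```

This is the shape of the argument: when the denominators (use) match and the numerators (arrests) don't, the gap has to come from somewhere other than who is committing the offense.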
But there's also one other big challenge here. Even if we managed to produce great data despite all of these limitations, it would be descriptive, not prescriptive. Even if the Justice Department's program identifies the presence of racial bias in policing, that doesn’t begin to tell us why it exists, or how to eliminate it.
That's not necessarily a barrier to policy solutions for a criminal justice system that disproportionately impacts minorities. We can move forward with reforming drug laws regardless of whether the disparities caused by those laws were intentional or not, based on animus or not. Bias embedded in a system, though, is a much harder thing to understand than the disparities created by it.