Many different fields of forensics have come under attack in recent years, including blood-spatter analysis, hair-fiber analysis, ballistics testing and fingerprint analysis. Even outside of forensics, there has long been research showing that eyewitness testimony is far less reliable than most people think, and that juries give it far too much consideration. A skeptic might wonder: What, other than single-source DNA testing, can be used in a criminal trial? Are critics of modern forensics saying that none of these fields has any value in front of a jury? In the case of fingerprints, for example, current research suggests that, while full prints are useful for identification, we can’t be sure just how unique they are. How do we convey a concept like that to a jury, and how do we ensure that juries are accurately accounting for the shortcomings in these fields? Is it merely a matter of choosing the right words — like “reasonable certainty” or “exclude” versus “include?” Do juries understand the difference?
Simon A. Cole, Department of Criminology, University of California at Irvine; Law & Society; National Registry of Exonerations
I think that fingerprint associations have probative value and that we can use fingerprint evidence in a just way. But for a century we have not done so. The discipline first misled juries about the probability of error (“zero error rate”). Such statements are now largely discredited among leading fingerprint analysts, but they continue to appear in court because no organization currently polices the discipline. More recently, the discipline has misled juries about the probability that someone other than the person of interest is the source of a particular print (“identification” or “individualization”). But many in the fingerprint discipline recognize this and are trying to develop ways of reporting fingerprint associations that don’t mislead the fact-finder. There is a lot of disagreement about how to do this, but certainly, at least in theory, it can be done.
The question of whether the jury can understand all of this is more difficult. I know not everyone agrees with this, but I think the forensic disciplines should focus first on getting their reporting and interpretation to the point where it is scientifically defensible, and worry about jury comprehension later. Jury comprehension is going to be a high bar to meet.
John Lentini, fire/arson expert
I submit that this question contains a faulty premise. Pattern matching does not have to be subjective all the time. It depends on the discipline. Bite marks are bad science, but for firearms identification and fingerprints, pattern matching can be quite persuasive. With fingerprints, it is important to conduct the analysis in the proper way, first by examining the evidence from the scene and documenting the important features. This should be accomplished before the examiner sees the suspect’s 10-print card. All pattern-matching evidence should be carefully documented so that the jury can see what the analyst sees. Jurors can then reach their own conclusions. It is important that statements like “this bullet came from this gun and no other gun in the world” be excluded, first because there is no scientific basis for saying that, and second because the jury doesn’t care about a similar gun in Australia if the shooting took place in California.
Frederic Whitehurst, FBI crime-lab whistleblower; Forensic Justice Project
Following the common sense put into law by Daubert v. Merrell Dow Pharmaceuticals, Inc. [the 1993 Supreme Court ruling that established the procedure for determining the reliability of expert testimony], we simply ought to require experts to state the degree of certainty or uncertainty that can be assigned to the opinions they present in court. A number of the fields that have been criticized present data, but it isn’t clear what the data mean. For example, comparative bullet-lead analysis was used for decades before it was called into question. The question wasn’t about the technology used to analyze the lead, but about the interpretation of the data: the claim that each batch of ammunition had a unique chemical signature. Do we abandon the practices that provided the data just because they don’t allow for individualization? I don’t think so. We just need to work harder to understand the limitations of subjective interpretations of data.
Sandra Guerra Thompson, University of Houston Law School; Houston Forensic Science Center
There are many forensic disciplines, and they range widely in terms of reliability, from true junk science (like bite mark evidence and traditional arson investigation methods) to extremely reliable (like simple DNA testing not involving mixtures). This makes any discussion of “forensic science” more complicated. Obviously, courts should not admit junk science. As for the rest, there is clearly a sense of urgency, not only in the United States but internationally, to make the case for the reliability of the comparative pattern-matching disciplines such as latent print analysis and firearms examination.
Reports in the last decade by the National Academy of Sciences and the President’s Council of Advisors on Science and Technology brought to light the fact that we currently cannot say as a statistical matter how often these disciplines produce erroneous results. Do these forensic tests get it right virtually all the time, or are there far more false results than we imagine? Whether by means of computerization of various parts of the process, or through other scientific testing mechanisms, scientists have taken up the challenge of providing a statistical foundation for the comparative disciplines, but it will certainly be a long-term project.
In the meantime, we should treat the comparative forensic disciplines as areas of nonscientific expertise. Many of these disciplines employ scientific processes and equipment, but the conclusions drawn by practitioners involve subjective, expert opinions. The field of art authentication is a good analogy. To authenticate a painting, individuals with substantial scientific credentials and experience apply scientific knowledge and processes and may conclude that a certain painting is a Rembrandt. However, at the end of the day, we cannot say how often these experts are wrong. Likewise, we may never be able to say how often people share the same fingerprints and, at least for now, we cannot say how often fingerprint analysts erroneously conclude that two prints belong to the same person.
Nonetheless, these types of expertise are important. Museum curators rely on the opinions of art-authentication experts, and police investigators rely on the opinions of fingerprint examiners. Fingerprint analysis and firearms examination have demonstrated investigative value; databases containing fingerprints and cartridge casings attest to the investigative helpfulness of these disciplines. Thus, courts should allow fingerprint analysts and firearms examiners to share their opinions with juries, so long as jurors are made to understand that it is at present not possible to say what weight to give this nonscientific expert opinion.
Chris Fabricant, Innocence Project
No witness, expert or otherwise, is permitted to proffer false or misleading testimony. Period. Jurors need the appropriate tools to weigh the probative value of any evidence. For subjective techniques with relatively firm scientific underpinnings, such as fingerprints, jurors must be provided with accurate error-rate information, both the field’s and the individual expert’s, and experts must not be permitted to “individualize” [claim that a print could only have come from a given person], since there is no statistical basis for those claims.
We must also ensure that the justice system uses methods for generating eyewitness evidence that are accurate tests of witnesses’ memory, rather than mere confirmation of the identity of the police suspect. Police should be required to implement appropriate identification procedures: no “show-ups” [when, instead of a lineup, police merely show their suspect to the eyewitness and ask if he is the perpetrator], blind administration of lineups [in which the officer conducting the lineup does not know who the suspect is], and no biasing feedback following an identification procedure. Once those precautions are in place, jurors must be given scientifically valid jury instructions on the weakness of eyewitness identification evidence, and the defense must be allowed to call experts on memory and perception to help jurors understand and appropriately evaluate eyewitness identification evidence.
Itiel Dror, University College London; Cognitive Consultants International
First, even DNA mixture interpretation is subjective and susceptible to bias! Second, I do not think that evidence should be allowed only if it is ‘foolproof,’ ‘objective,’ etc. All evidence, even scientific evidence, has weaknesses and limitations. That does not mean it is not useful and important, and should not be presented in court. What must happen is that these limitations need to be transparent and presented in court, and jurors must be aware of the weaknesses of each domain — see my answer to the previous question as to how this can be achieved.
Jules Epstein, Temple University Beasley School of Law; National Commission on Forensic Science
This is unbelievably complex. Let’s assume a latent print that has many features that correspond to the suspect’s finger — the less common the features are, the more probative the proof. That is separate from a conclusion — e.g., it could only have come from this person. So where there is probative feature correspondence that can be accurately determined, a fact finder should not be deprived of that information.
But we are still stuck with the issue of the jury over-valuing the proof: since no one can say how rare certain features are, the jury might put too much weight on feature correspondence. But it is still relevant and may be useful.
A second issue is whether there is an open or closed universe of suspects. If there are three people in a car, the fingerprint evidence on the steering wheel may be more easily and properly associated with one of the three than if there is a larger or wide-open universe of potential contributors/sources.
Barbara A. Spellman, University of Virginia Law School
For every kind of evidence, we can ask two especially important questions:
1. How good is the evidence?
2. How good do jurors (or judges) think the evidence might be?
In drafting the rules of evidence, and deciding which evidence would be admissible for what purposes, legislators likely often considered those questions. Evidence with a huge mismatch — that is, that isn’t very good but that fact finders might weigh heavily — is typically not allowed. There are many evidence rules that involve particular types of evidence, but that how-good-versus-how-misleading balance is captured in the familiar TV-courtroom phrase “Objection! More prejudicial than probative.”
This mismatch is a huge problem with forensic evidence. Jurors — and judges — seem to think forensic evidence is extremely valuable. But the criticism of the forensic fields in recent years finds that the evidence is far, far less good than anyone imagined.
However, that doesn’t mean the evidence is not at all useful. What people expect too often from forensic evidence is that it will provide “individuating” information — that is, that experts will be able to use it to say “It comes from that guy.” And that is what we assume that single-source DNA evidence can do.* But we admit plenty of different kinds of useful evidence that do not point to one specific individual. For example, suppose that a defendant was one of a dozen people who had a key to a building. It is relevant evidence because it increases the chances that he was, in fact, the person who committed the crime. But it is far from individuating. Similarly, think back to pre-DNA blood evidence, when blood would be group-typed as A, B, AB or O, with a positive or negative for the Rh antigen. Non-matching blood types would rule out a suspect; matching would rule him in, but would not individuate.
People need to stop assuming that all types of forensic evidence are, or need to be, individuating. It is not known whether every individual has different fingerprints, and we certainly cannot assume that examiners can individuate from the incomplete, messy latent prints obtained at crime scenes, even if it were the case that all pristine prints are different. Still, fingerprint information could be used to rule in or rule out particular people, with the important caveat that the information be conveyed to the jury in a way that makes its limitations understood. That means explaining things like potential biases and laboratory error rates, and not using the term “match.” Creating a language to explain how diagnostic forensic evidence is, without resorting to complicated mathematics, is necessary to help forensic evidence be used correctly.
Of course, scientists still need to develop a deeper technical understanding of how good the forensics are, but not having the capacity for individuation does not mean we should rule them out entirely.
(*Someone more knowledgeable than I could give a longer answer about DNA here. I’ll let slide the views that everyone except identical [monozygotic] twins has unique DNA, that the differences can be reliably detected by our methods, and that DNA tests are not subjective.)
Roderick Kennedy, retired judge, New Mexico Court of Appeals
I’d encourage readers to look at the American Bar Association’s Resolution 108B from the mid-year meeting in February 2018, together with its committee commentary. I was on a working group that came up with final language. It urges all judicial systems in the U.S. to recognize a defendant’s substantive right to post-conviction relief on a credible showing that the science used to convict them has changed to the point that its validity or effect has been seriously questioned.
Look, I no longer think that fingerprints require proof of the theory of uniqueness to be proper evidence. First, that proof can never happen. But second, and more important, fingerprint databases are now large enough, and work on what constitutes a valid process for matching and verification is far enough along, that they provide perhaps the best example of current progress in forensic science. With enough points of similarity, the probity of the evidence can perhaps be established to a point that it can fairly support inferences of origin.
Eyewitness testimony hit a legal rule that still stymies a lot of judges: a witness cannot testify that another witness is a liar. That idea has provoked a lot of judicial resistance. The explanation that is often overlooked is that the eyewitness is telling the truth, 100 percent, as it is known to them. The problem isn’t that the eyewitness is lying; it’s all of the psychology that goes into perception, imprinting, storing, revisiting, analyzing, being asked about, and then recalling, not to mention the intrusion of suggestion, partisan identification, cross-racial identification and other variables. All of this factors into the eyewitness’s ability to accurately remember what they actually saw. The focus has to switch to permit the introduction of all the research showing this multitude of factors that can influence an eyewitness. The point here is not to “impeach,” a fraught concept that implies untruth. The point is to get the jury to understand that there’s a solid scientific basis for fairly questioning a witness’s perception, memory and recall. Such questioning should be a right, and the frequency with which eyewitnesses have contributed to wrongful convictions should prove its necessity.
Roger Koppl, Forensic and National Security Sciences Institute, Syracuse University
It helps to have two sides of the story presented. That could discourage “over-claiming,” which is drawing too strong or too large a conclusion from the evidence. Rather than trying to regulate the speech of witnesses, who will be addressing all sorts of things we can’t foresee, let the two sides make their opposing cases!
Unfortunately, we can’t even exempt DNA typing from the charge of subjective judgment. When the DNA sample is small, corrupted, or contaminated, DNA examiners can sometimes make subjective judgments about it. Nor do we have universally recognized procedures for addressing mixed samples, to which two or more persons contributed.
Michael Risinger, Seton Hall School of Law; Last Resort Exoneration Project
We should acknowledge that there is little controversy about the fundamental validity of many forensic science fields and techniques, such as forensic chemistry, which tend to utilize scientific knowledge developed in academic settings for non-forensic purposes. While unwarranted courtroom testimony can occur even in those fields, the scandals usually result from failure to perform the process properly (or failure to perform it at all, as in the “dry labbing” cases), or in exaggerating the meaning of results, rather than disputes about fundamental validity issues.
Most (but not all) of the problem disciplines are the so-called “pattern matching” disciplines, which typically draw conclusions about a common source for a known-source exemplar and something found at a crime scene or in some other relevant location. The foundational assumptions of these disciplines were developed mostly in the late 19th century by a process which would not today be easily classified as “science.” Nevertheless, they may have some reliable applications warranted by critical common sense, where there appears to be a lot of correspondence of features that there is good reason to believe are randomly distributed.
However, knowing how many such correspondences are enough for a reliable inference of common source is the central issue of many such disciplines, and the difficulty of that issue varies from discipline to discipline. Suffice it to say that under some circumstances, the correspondences may be so striking as to be almost self-validating (as with the perfect puzzle-fit of two pieces of broken glass, indicating that they were once part of the same mass). But most such assertions in pattern matching areas are not so obvious, and these should require validation studies, which are often lacking.
Even if validation studies were done to show the average level of diagnosticity of the subjective judgment of examiners in such areas, when applied to a case-specific task, how this should be communicated to the jury remains a problem. The expression should neither over-value nor under-value the reliability of the conclusion, but no good system of expression agreed upon by practitioners and critics alike has been worked out.
I prefer expressions based on levels of potential surprise — for example: “If the known print and the crime scene print were not from the same source, I would be very surprised.” I think it is easier for jurors to understand the subjective nature and limits of this form of testimony.
Judy Melinek, forensic pathologist; author of “Working Stiff: Two Years, 262 Bodies, and the Making of a Medical Examiner”
The problem is one of both jury-pool education and courtroom semantics. When I testify as an expert witness, I always ask the attorney about the educational level of the jurors. I try to speak to the lowest educational level, but I sometimes find it difficult to explain complex scientific concepts to a lay jury who may have no scientific training. I know that it’s the job of the lawyers to weed out jurors who don’t have the educational basis to assess the evidence, but the rules that limit jury selection make this impossible given the shallowness of the jury pool. It’s a failure overall of the scientific education system in this country.
It may be worthwhile to consider requiring jurors to pass a test of basic scientific literacy before sitting on a panel that involves scientific testimony. Additionally, basic definitions of scientific terms and of elementary statistics and probability should be provided before any testimony that includes scientific terminology.
Brandon L. Garrett, Duke University School of Law
Crime labs and forensic analysts should be self-policing. They have not been, in part, because few do any research. Fortunately, outside research is being done into the reliability of these fields of forensics. For example, I’m part of the research team at the Center for Statistics and Applications in Forensic Evidence, which is working to establish a scientific and statistical foundation for the use of forensic testimony.
Keith A. Findley, Center for Integrity in Forensic Science; University of Wisconsin Law School
The central flaw with most pattern-matching areas of forensics is that they are fundamentally subjective and therefore subject to influence by contaminating or biasing information. These are always probabilistic determinations, but the analyst has no data to tell her what the probabilities are. That is, how likely is it that any perceived similarities between crime-scene evidence and evidence from a suspect are merely coincidental (the product of chance) rather than the result of a common source?
The analysts have typically testified in ways that mask these uncertainties — they often declare matches as if they are the product of rigorous scientific testing rather than of subjective judgment, and they declare matches to degrees of near certainty, or to some statistical likelihood level that has no basis in data.
Pattern-matching disciplines can be useful in helping juries understand how the patterns are created, what the process is for discerning them, and what congruences or dissimilarities a trained analyst can see (and show the jury) between the crime-scene evidence and the suspect’s sample. Ideally, with more research, rigorous processes and standards will be validated for identifying patterns and linking them to known samples, and databases will be developed from which statistical data can be drawn so that, as with DNA analysis, there is a scientific basis for explaining to the jury the statistical significance of perceived similarities or “matches.”
But until then, part of the solution must be to limit what analysts claim, and to require them to be clear about the meaning of the terms they use. But it’s about more than just using the right words; juries need explanations about the process, about the lack of data to permit statistical claims, and about the risks of error.