A friend of mine was suffering such severe back pain that it was difficult for him to walk or stand. He consulted three doctors about the best course of treatment. The first was adamant that he needed surgery right away. The second advised my friend that he didn’t need surgery and that if he continued physical therapy, his condition would improve gradually over the coming months. The third prescribed strong steroids and recommended that, if his condition didn’t improve in a month, then he should have surgery. My friend followed the third doctor’s guidance, and it seems to be working. But he was mighty upset and confused by all those clashing perspectives. And he is still unsure whether that third doctor’s approach is the right one.
This undesirable variability in professional judgment is an example of noise, the ubiquitous and often-ignored human failing that is the focus of this well-researched, convincing and practical book. “Noise: A Flaw in Human Judgment” was written by the all-star team of psychologist and Nobel Prize winner Daniel Kahneman, former McKinsey partner and management professor Olivier Sibony, and prolific legal scholar and behavioral economist Cass Sunstein. Kahneman won the Nobel Memorial Prize in Economic Sciences for his pathbreaking work with Amos Tversky on systematic biases in judgment. It prompted armies of psychologists and behavioral economists (including Sibony and Sunstein) to study the causes of and remedies for many such faults, including overconfidence, stereotyping and confirmation bias — the tendency to seek, remember and place excessive weight on information that supports our beliefs.
The authors kick things off by distinguishing between bias (systematic deviations) and noise (random scatter). The book then sustains a relentless focus on explaining and documenting the wallop packed by the simple and omnipresent error of noise — and what decision-makers can do about it. It blends stories, studies and statistics to make a compelling case that noise does at least as much damage as bias: undermining fairness and justice, wasting time and money, and damaging physical and mental health.
Kahneman and his colleagues show how unwanted variation in judgments (evaluations) and decisions (choices) creates “noisy systems” — which plague professionals including criminal judges, insurance underwriters, forensic scientists, futurists and physicians, who routinely make wildly varied judgments and decisions about similar cases. Systems are noisy, in part, because different professionals apply different standards. There is disturbing evidence, for example, that when multiple physicians evaluated identical cases for evidence of heart disease, tuberculosis, endometriosis, skin cancer and breast cancer, they agreed on diagnoses only about two-thirds of the time. In such noisy systems, errors add up rather than cancel each other out. As the authors put it, “If two felons who both should be sentenced to five years in prison receive sentences of three and seven years, justice has not, on average, been done.”
Systems are also noisy because, over time, the same professionals apply inconsistent standards. To illustrate, a study of 22 physicians who each examined the same 13 angiograms twice, several months apart, found that they disagreed with themselves between 63 percent and 92 percent of the time. To explain such swings, the authors draw on research on “occasion noise”: fluctuations in a person’s mood, fatigue, physical environment and prior performance that are (objectively) irrelevant, yet shape judgments. Consider the study titled “Clouds Make Nerds Look Good,” which examined 682 actual decisions by college admissions officers: They weighted applicants’ academic strengths more heavily on cloudier days and applicants’ nonacademic strengths more heavily on sunnier days.
“Noise” digs deep into the details of unwanted variation, including its causes and components, how to measure it, and the interplay between noise and bias. The authors tackle why groups (vs. individual decision-makers) can amplify noise and how guidelines, rules and algorithms can reduce it. And they provide a well-stocked toolbox to help decision-makers identify and reduce system noise. They suggest that conducting a “noise audit” is a useful first step. When an insurance company did one, executives were stunned — estimates by multiple underwriters who evaluated identical claims were five times noisier than expected. The executives calculated that such noise cost the company hundreds of millions of dollars each year.
Kahneman, Sibony and Sunstein devote eight chapters to methods for reducing noise, plus three appendixes to help readers conduct noise audits, develop checklists to improve group decisions and improve predictions. We learn about hallmarks of people who dampen rather than amplify system noise. This includes people prone to slow and careful “system 2” thinking, rather than to jumping to conclusions — a central theme in Kahneman’s bestseller “Thinking, Fast and Slow.” And it includes actively open-minded people, who constantly search for new information and update their beliefs.
The authors suggest that, to reduce noise in the decision process, it is best to first ask multiple people to make independent judgments and then bring them together to resolve differences. And they explain how guidelines and constraints that limit intuition and idiosyncratic preferences, long known to diminish bias, also cut down on noise. For instance, they urge organizations to use structured rather than unstructured interviews to select employees. Most interviewers love the freedom to ask job candidates their favorite questions. But there is strong evidence that when multiple interviewers each ask the same questions, in the same order, agreement about whom to hire is higher — and selected candidates perform better.
The book also proposes that groups can tackle noise and bias by appointing a “decision observer,” a leader or specialist charged with tracking and guiding interactions. It provides a lengthy checklist of questions to help such observers — or anyone else — diagnose when groups are avoiding or injecting errors that will undermine their decisions. As the authors suggest, this solution won’t work in dysfunctional groups where it is unsafe to speak up. But it can assist healthy teams that are determined to make sound judgments.
“Noise” is long and nuanced. The details and evidence will satisfy rigorous and demanding readers, as will the multiple viewpoints it offers on noise. At times, however, I was distracted by shifts in writing style. Some sentences and sections read like a psychology or statistics textbook, others like a scholarly article, and still others like the Harvard Business Review. But that is a minor complaint. Every academic, policymaker, leader and consultant ought to read this book. It convinced me that we already know how to turn down much of the systemic noise that plagues our organizations and governments. People with the power and persistence required to apply the insights in “Noise” will make more humane and fair decisions, save lives, and prevent time, money and talent from going to waste.
Noise: A Flaw in Human Judgment
By Daniel Kahneman, Olivier Sibony, Cass R. Sunstein
Little, Brown Spark.
454 pp. $32