Daniel Kahneman demonstrates forcefully in his new book, “Thinking, Fast and Slow,” how easy it is for humans to swerve away from rationality, how our hard-wired biases lead us time and again to make dumb (or, more politely, unreasonable) choices. You may think less of many people after reading this work, among them star CEOs, sportswriters, economists, professional investors, Malcolm Gladwell (whose “Blink” covered some of the same ground), and pretty much every ordinary Joe who has confronted a statistical problem harder than a coin flip.
In 2002, Kahneman shared a Nobel Prize in Economics for his work on decision theory (though he’s a psychologist). His papers on the subject, notably those co-written with Amos Tversky, his longtime collaborator, are among the most cited in the social sciences. (Tversky died in 1996 and was therefore ineligible for the Nobel.) Popularizers have mined Kahneman’s findings for years, but now he has come forward to present his life’s work to the public. He wants to help us mend our fuzzy thinking and change the way we talk about decision-making; he takes very seriously the language we use to Monday-morning quarterback ourselves: “There is a direct link from more precise gossip at the water-cooler to better decisions,” he writes.
His book is partly an intellectual autobiography, with an affecting portrait of his collaboration with Tversky, and it’s enlivened with anecdotes drawn from his years in the Israeli army and advising the Israeli government. (Born in 1934, Kahneman spent his early years in France and moved with his family to Israel after World War II; now he’s an emeritus professor at Princeton.) But “Thinking, Fast and Slow” is mainly a methodical march — a bit too much of a march — through what psychologists know about how the brain analyzes situations and retrieves information.
When you see a sketch of two human eyes, abnormally wide open, it takes only a fraction of a second to realize you’re looking at fear. That all-but-automatic style of thinking helped us avoid predators in the past and today answers 2 + 2 in a flash, governs our driving on easy roads with little traffic, and connects stereotypes with their targets (meek people with a passion for order = librarians). It smooths our way through daily life. A second, more reflective part of the brain can kick in — but doesn’t always — when deeper analysis is called for: It helps us solve a problem like 27 x 32 in our heads, fill out tax forms and bite our tongue when we’re inclined to tell off the boss. It can figure out that meek, orderly people are more likely to be factory workers than librarians, given the relative sizes of the two populations.
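The librarian point is plain base-rate arithmetic, and a small sketch makes it concrete. The population counts and trait rates below are invented for illustration, not taken from Kahneman; the shape of the result is what matters:

```python
# Hypothetical numbers: even if a librarian is far more likely than a
# factory worker to be "meek and orderly," factory workers so outnumber
# librarians that a meek, orderly person is still probably not a librarian.

n_librarians = 20_000              # assumed population counts
n_factory_workers = 2_000_000
p_meek_given_librarian = 0.40      # assumed trait rates
p_meek_given_factory = 0.05

meek_librarians = n_librarians * p_meek_given_librarian     # 8,000 people
meek_factory = n_factory_workers * p_meek_given_factory     # 100,000 people

p_librarian_given_meek = meek_librarians / (meek_librarians + meek_factory)
print(f"P(librarian | meek & orderly) = {p_librarian_given_meek:.2%}")
```

Despite the librarian stereotype fitting eight times better, the base rates leave the meek, orderly stranger odds-on to work in a factory.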
Faced with a complex question (how much should an oil company pay for environmental damage it caused?), the automatic system will substitute a simpler question (how upset am I by a photograph of a bird drenched in oil?). The analytical mind, meanwhile, has weaknesses of its own: It flags when the body is tired, it’s lazy, and it, too, struggles with statistical thinking. Kahneman calls the two modes of thought System 1 and System 2, though he stresses they aren’t anatomical places or pathways but metaphors to help us grasp the processes.
If the brain is “a machine for jumping to conclusions,” as Kahneman writes, it’s System 1 that yells “Geronimo!” Autopilot thinking explains the popular opinion that tornadoes kill more people than asthma, although in fact asthma kills 20 times as many: We fixate on scenes from TV showing homes turned to matchsticks, and because such vivid images come so easily to mind, we overestimate how common the deaths they depict really are.
The auto-brain also gets caught on what Kahneman calls anchors, arbitrary reference points. One academic study he cites examined the response to a sale on Campbell’s soup. When shoppers saw a sign reading “Limit of 12 Per Person,” they bought an average of seven cans, twice as many as they bought when there was no sign. Twelve was the anchor. Kahneman believes anchors could affect public policy. For instance, if punitive damages were capped at, say, $1 million, that figure would exert a gravitational force on jurors, perversely causing them to award sums near that figure when they’d otherwise dispense smaller ones.
The automatic mind creates causal stories out of dubious raw material. When Kahneman studied the records of managers at one investment firm, over 25 years, he found there was zero link between managers earning an above-market return one year and repeating that kind of performance the next — although, of course, pay was pegged to annual returns and managers with good years were richly rewarded. Unsurprisingly, the firm’s executives refused to believe that variation in performance was random. “I’ve done very well for the firm and no one can take that away from me,” one told Kahneman. “I took it away from you this morning,” Kahneman recalls thinking.
One of Kahneman and Tversky’s most famous ideas is what they call prospect theory: our inclination to fear possible losses more than we value possible gains. Would you take a bet on a one-time coin flip that paid $200 if you won but cost you $150 if you lost? Most people wouldn’t, though it’s tilted in your favor. Pro golfers tend to make a higher proportion of their putts when they’re trying to avoid a bogey (which would cost them a stroke) than when they have a chance for a birdie (which would gain one); here, too, avoiding a failure is more crucial than achieving a triumph.
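The tilt in that coin-flip bet is easy to check: the expected value works out to a gain of $25 per flip, and still most people pass. A minimal sketch of the arithmetic:

```python
# The prospect-theory coin flip: win $200 or lose $150, each with
# probability 1/2. The bet has a positive expected value, yet most
# people decline it -- losses loom larger than gains.
p_win = 0.5
gain, loss = 200, -150
expected_value = p_win * gain + (1 - p_win) * loss
print(expected_value)  # 25.0
```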
Most recently, Kahneman has turned his attention to the study of human well-being, which involves our evaluations of pleasure and pain. One unsettling conclusion from this work is that our memory of events is shaped by pain-pleasure peaks and final experiences, rather than what we were really feeling at the time. In an experiment, people rated a long, painful colonoscopy as less painful than a shorter, painful one — if the pain peaks were the same and the lengthier procedure ended with a period of lesser discomfort. Here’s the ethical conundrum: Should doctors focus on limiting real-time pain or the memories of pain?
There’s a self-helpish aspect to the book, because Kahneman offers some end-runs around common errors. Faced with an opposing negotiator’s unfair opening gambit, you should make a scene, he advises, if that’s what it takes to erase the anchor your opponent has just dropped. He also recommends that you make one-time decisions involving risk and reward on the assumption you’ll face many such bets in your life. (Professional traders always take the $150/$200 bet.)
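Kahneman’s advice to treat a one-off gamble as one of many can also be sketched numerically. In this hypothetical simulation (the run counts are arbitrary choices of mine, not Kahneman’s), repeating the $200/$150 coin flip many times pulls the average outcome toward its positive expected value, which is why the professionals take the bet:

```python
import random

random.seed(0)  # reproducible sketch

def total_winnings(n_bets: int) -> int:
    """Sum of n independent coin flips paying +$200 or -$150."""
    return sum(200 if random.random() < 0.5 else -150 for _ in range(n_bets))

# Over 1,000 such bets the law of large numbers takes hold: the per-bet
# average drifts toward the +$25 expected value, and ending up with an
# overall loss becomes vanishingly unlikely.
runs = [total_winnings(1000) for _ in range(200)]
losing = sum(1 for total in runs if total < 0)
avg_per_bet = sum(runs) / (200 * 1000)
print(f"losing runs: {losing} of 200; average per bet: ${avg_per_bet:.2f}")
```

The one-time bet feels risky; the same bet framed as one of a thousand does not.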
Persistent cognitive errors have profound philosophical and political implications, but Kahneman doesn’t spend much time on these. He does say that his work undercuts libertarianism: Humans make choices that go against self-interest too often for freedom to be the highest good. He’s in favor of nudges to steer people toward wiser retirement plans, or healthier lunches, while leaving the ultimate decision to those nudged. But these wider-view thoughts are left to a sketchy, 11-page concluding chapter.
So, if you read this book, will you make better decisions? I’m hopeful, but Kahneman is the expert here, and despite the self-help advice, he’s skeptical about our ability to change; now in his eighth decade, he’s skeptical about his own improvement, too. As he puts it, “I have made much more progress in recognizing the errors of others than my own.”
THINKING, FAST AND SLOW
By Daniel Kahneman
Farrar Straus Giroux. 499 pp. $30