The capacity to envision that future relies partially on the hippocampus, a brain structure that is crucial to memory. People with damage to the hippocampus are unable to recollect the past; they are also unable to construct detailed images of future scenarios. The rest of us constantly voyage back and forth in time; we might be thinking of a conversation we had with our spouse yesterday and then immediately jump to our dinner plans for later tonight.
But the brain doesn’t travel in time randomly. It tends to engage in specific types of thoughts: We consider how well our kids will do in life, how we will land that desired job and whether our team will win, and we look forward to an enjoyable night on the town. We also worry about losing loved ones, failing at our job or dying in a plane crash. Yet research shows that most of us spend less time mulling over negative outcomes than over positive ones. When we do contemplate defeat and heartache, we tend to focus on how these can be avoided.
Why do we maintain this rosy bias even when information challenging our upbeat forecasts is so readily available? After all, we experience both positive and negative events in our lives, and we know, for example, that the economy is unstable, yet we remain optimistic about our own futures. When expectations are not met, we alter them, and that process should eventually lead to sober realism, not blind optimism.
Underestimating bad news
Only recently have we been able to decipher this mystery. My colleagues and I at University College London scanned the brains of people as they processed both positive and negative information about the future.
Among other things, we asked them to estimate how likely they were to encounter 80 different negative events in their lives, including developing cancer, getting Alzheimer’s disease and being robbed.
We then told them the actual likelihood that a person like them would suffer these misfortunes; the lifetime risk of cancer, for example, is about 30 percent. Afterward we asked again: How likely are you to suffer from cancer? We wanted to know whether people would change their beliefs in light of the information we provided. It turns out they did, but mostly when the information we gave them was better than they had expected.
If someone had estimated that their risk of cancer was 50 percent and we told them, “Good news: The average likelihood is much better, only 30 percent,” the next time around they would say, “You know what? Maybe my likelihood is only 35 percent.” So they learned easily and quickly.
However, if someone started off estimating that their cancer risk was 10 percent and we told them, “Bad news: The average likelihood is about 30 percent,” they would scale up only slightly. The next time, they might say that their likelihood of contracting cancer was 11 percent. It is not that they failed to learn; they simply decided that the figures we provided were not pertinent to them.
Where do these irrational beliefs come from? This disconnect is related to what scientists call a prediction error: the difference between what you expect and what actually happens.
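One way to picture the asymmetry is as a simple learning rule: people revise a belief by some fraction of the prediction error, with a larger fraction for good news than for bad. The following Python sketch is purely illustrative; the delta-rule form and the two learning rates are assumptions chosen only so the outputs reproduce the example numbers above, not parameters from the actual study.

```python
def update_belief(estimate, base_rate, lr_good=0.75, lr_bad=0.05):
    """Revise a risk estimate (in percent) after hearing the base rate.

    Illustrative delta-rule update: move the estimate toward the
    evidence by a fraction of the prediction error. The asymmetric
    learning rates (larger for good news than for bad) are assumed
    values picked to match the article's examples, not fitted data.
    """
    prediction_error = base_rate - estimate
    # Good news: the actual risk is lower than the person feared.
    lr = lr_good if prediction_error < 0 else lr_bad
    return estimate + lr * prediction_error

# The two cases described above, with a 30 percent lifetime cancer risk:
print(update_belief(50, 30))  # good news: 50% -> 35.0% (big revision)
print(update_belief(10, 30))  # bad news:  10% -> 11.0% (barely budges)
```

Applied across many such events, this kind of lopsided updating would leave estimates of bad outcomes persistently too low, which is the optimism bias in miniature.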