Over at Wonkblog, Ezra Klein suggests that partisanship and biased reasoning have depressing implications for democratic politics.
Oftentimes when we think we’re engaged in reasoned policy discussion we’re actually engaged in complex efforts to rationalize the direction in which our tribal affiliations are pushing us. Psychologists call this motivated reasoning … The problem is that human beings are incredibly good at rationalizing their way to whatever conclusion their group wants them to reach. And most policies can be supported — or opposed — on many grounds. It’s all about which parts people choose to emphasize. A conservative who emphasizes individual responsibility and loathes government coercion can find good reasons both to support and oppose the individual mandate. A liberal who believes both in security and civil liberties can decide to believe the FISA courts are an effective check on the NSA or totally insufficient. There are more than enough validators out there who’re willing to arm a partisan with information for whatever conclusion they prefer. “Once group loyalties are engaged, you can’t change people’s minds by utterly refuting their arguments,” political psychologist Jonathan Haidt once told me. “Thinking is mostly just rationalization, mostly just a search for supporting evidence.”
But other work in psychological theory gives us reason to be cautiously optimistic. In a landmark article in Behavioral and Brain Sciences, Hugo Mercier and Dan Sperber propose an “argumentative” theory of human reason, which directly acknowledges the problems of motivated reasoning while suggesting that, in the right social contexts, motivated reasoning can be incredibly valuable and powerful.
Mercier and Sperber have a straightforward account of why human beings reason. They reason to win arguments by convincing others that they are right. This means that human beings suffer from “confirmation bias.” They are far, far better at finding justifications for why they are right than they are at thinking carefully about reasons why they might be wrong. More generally, they are terrible judges of the value of their own arguments and ideas, and are (as multitudes of experiments show) usually rotten at reasoning in isolation from each other.
However, where Mercier and Sperber depart from the skeptics is in pointing to the social value of reasoning. People are terrible judges of the flaws and weaknesses of their own arguments. However, they are much, much better at identifying weaknesses in the arguments of others. Furthermore, confirmation bias gives them good reason not only to try to confirm their own arguments, but also to try to demolish the arguments of people who disagree with them. This in turn means that groups — under the right conditions — are likely to be able to reach better judgments than any individual within the group. Real, substantial argument allows a kind of cognitive division of labor, in which different arguments get tested against each other.
When one is alone or with people who hold similar views, one’s arguments will not be critically evaluated. This is when the confirmation bias is most likely to lead to poor outcomes. However, when reasoning is used in a more felicitous context – that is, in arguments among people who disagree but have a common interest in the truth – the confirmation bias contributes to an efficient form of division of cognitive labor.
When a group has to solve a problem, it is much more efficient if each individual looks mostly for arguments supporting a given solution. They can then present these arguments to the group, to be tested by the other members. This method will work as long as people can be swayed by good arguments, and the results reviewed . . . show that this is generally the case. This joint dialogic approach is much more efficient than one where each individual on his or her own has to examine all possible solutions carefully.
Put differently, Mercier and Sperber suggest that we are all better at seeing the motes in our brothers’ and sisters’ eyes than at noticing the whopping big beams sticking out of our own. But by arguing with each other, we can use our brothers’ and sisters’ insights to reduce the size of our own beams, while they can use our insights to deal with their own ocular obstructions.
This doesn’t imply that partisan blinders are good. Some common interest in the truth is still necessary for problem solving, as is real argument between people with different perspectives. It is plausible that we don’t have nearly as much of either in American democracy as we would like.
But what it does mean is that one cannot jump straight from arguments about flaws in the ways that individuals think to the conclusion that partisan democracy is necessarily a useless shouting match between warring tribes. If Mercier and Sperber are right, reasoning has two fundamentally social facets — making arguments, and evaluating the arguments of others. And under reasonably realistic social conditions, a good argument between people with different points of view can transform the individual vices of motivated reasoning and confirmation bias into the collective virtue of better problem solving and decision making.
Which suggests that political reform should aim less to replace partisanship with technocracy and expert decision making, and more to figure out better ways to harness partisanship, with all its messiness and rancor, for the public good.