The following post recounts a conversation I had with Dan Kahan, a law professor who does experiments on psychology and political attitudes. I think our discussion should be of interest to many Monkey Cage readers, as it touches on one of the central political issues of our time: that extreme polarization extends even to issues such as science that one might think should fall outside of partisanship.

Kahan had looked at some data on religiosity and attitudes on scientific questions, such as evolution and climate change, and asked:

So what is prediction then on rich guy & climate change? How does the chain of reasoning go? Rich guy thinks that republicans will advance “common good” — & believes that “what’s good for GM” — wait; that’s obviously anachronistic — “what’s good for GS [Goldman Sachs] is good for America.” How does that get him to being a climate skeptic? Actually, I’m not sure how it even gets him to favor Chicago School fiscal policy rather than Keynesianism.

I replied that it’s complicated. Rich Texas guy is much more conservative (in expectation) than rich New York guy, especially on social issues but also on economic issues. Rich guy, like poor guy, can decide whom to trust based on media and other signals. If Fox News or Republican politicians start talking about climate risk, maybe rich guy decides it’s a problem.

Kahan continued:

My view is that positions are pretty much randomly distributed — by the card-shuffle of history — across cultural types (which themselves were created in some parallel card shuffle). At that point, everyone engages evidence (from empirical data to brute sense impressions) in an aggressively identity-motivated fashion. Believing “Romney is better for US than Obama” is dealt to one of the groups in the same hand as “climate change isn’t happening/is a product of ‘natural cycles’” & “permitting people to carry concealed weapons reduces crime.” Does your position link up w/ mine? Or is there something on which we should have different expectations/predictions? BTW, I *don’t* believe there isn’t “best available evidence” independently of the card shuffle. Or that the players can’t “fold” their hands under the right conditions. Most importantly of all, there’s no necessity in what positions even get dealt in this process. *We* should be using reason to assume the role of dealer, and make sure that facts that are of consequence to human wellbeing and admit of empirical investigation are kept “in the deck” rather than dealt to any one.

To which I replied (based on my research with Delia Baldassarri) that correlations across issue attitudes are pretty low, typically 0.2 or 0.3. Sure, correlations are higher among more educated people, and higher now than they were a few decades ago, but there’s still a lot of variation. One of my favorite examples was when someone asked, “How is it that people who oppose the death penalty support abortion? That makes no sense!” I checked the numbers and found that the correlation between attitudes on those two issues was close to zero. That was a few years ago; the correlation is probably a bit higher now, but still.
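To see what a correlation of 0.2 or 0.3 looks like in practice, here is a small simulation sketch (invented numbers, not data from any of the surveys discussed here): even at that level of association, a large share of people hold “mismatched” combinations of the two positions.

```python
# Hedged illustration with simulated data: two binary attitudes whose
# underlying dispositions correlate at about 0.3 still leave many
# respondents holding "mismatched" combinations of positions.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.normal(size=(n, 2))
# Give the two latent dispositions a correlation of about 0.31.
z[:, 1] = 0.31 * z[:, 0] + np.sqrt(1 - 0.31**2) * z[:, 1]
a, b = (z > 0).T  # dichotomize each disposition into support / oppose

r = np.corrcoef(a, b)[0, 1]   # correlation between the two binary attitudes
mismatched = np.mean(a != b)  # share holding "crosswise" positions
print(round(r, 2), round(mismatched, 2))  # roughly 0.2 and 0.4
```

In other words, a 0.2 correlation between two yes/no attitudes is consistent with something like forty percent of respondents combining the positions that the questioner found so puzzling.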

My theory is that lots of people have internally-coherent frameworks linking their different issue attitudes, but different people have different frameworks.

Kahan then threw some data at me that suggest that I’m underestimating the coherence of people’s attitudes. Here’s Kahan:

Wait a sec! I feel like we had at least part of this discussion before!

Yes. If one has a pollster’s view — that answers to questions are to be taken at face value — then one will find tons of noise, & also tons of opportunities to call pollsters out for pretending that those answers are signals of anything coherent.
But if one views survey items as *indicators* of latent attitudes, then one can actually find *tons* of coherence — & then start to investigate the genuinely interesting puzzle of how that can be, given how ill-informed & apolitical people are.
Here, I’m sure you remember this!
[screenshot of survey items]
The average inter-item correlation was 0.66 (N = 1600, diverse national sample, summer before last) for responses to these items on a six-point “how strongly . . . you support or oppose” response measure.
When these items were aggregated into a scale and regressed on another scale that aggregated the same survey participants’ responses to a 5-point “liberal-conservative” ideology item & a 7-point “party identification” one, the R^2 was 0.60.  It went up to 0.64 when “political knowledge” was added as a predictor — but still!
Also, there was an additional set of items with a comparable level of coherence, indicating a “libertarian” vs. “paternalistic” political orientation.
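To make the scale-building exercise concrete, here is a simulated sketch (all numbers invented, not Kahan’s data): six noisy indicators of a single latent disposition are averaged into a scale, which is then regressed by ordinary least squares on equally noisy ideology and party-identification measures.

```python
# Hypothetical sketch with simulated data (not Kahan's survey): six attitude
# items sharing one latent left-right disposition are aggregated into a
# scale and regressed on ideology + party identification.
import numpy as np

rng = np.random.default_rng(0)
n = 1600
latent = rng.normal(size=n)  # latent political disposition

# Six survey items = latent signal plus item-specific noise.
items = latent[:, None] + rng.normal(scale=0.7, size=(n, 6))

# Average inter-item correlation (mean of the off-diagonal entries).
R = np.corrcoef(items, rowvar=False)
avg_r = (R.sum() - 6) / (6 * 5)

# Aggregate into one scale, then regress on noisy ideology/party measures.
scale = items.mean(axis=1)
ideology = latent + rng.normal(scale=0.8, size=n)
party = latent + rng.normal(scale=0.8, size=n)
X = np.column_stack([np.ones(n), ideology, party])
beta, *_ = np.linalg.lstsq(X, scale, rcond=None)
resid = scale - X @ beta
r2 = 1 - resid.var() / scale.var()
print(round(avg_r, 2), round(r2, 2))
```

With these (arbitrary) noise levels the average inter-item correlation comes out near the 0.66 Kahan reports, and the R² lands in the same neighborhood as his 0.60; the point is only that a handful of noisy indicators of one latent disposition behave exactly this way.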
Now here are some new morsels of “coherence” for you to chew on (& at this point, I can’t even remember how this conversation started, but who cares?!).
[figure: two risk-perception dimensions]
Again from a US general population sample (N = 2000), I’ve extracted 2 orthogonal factors that reflect distinct latent risk-perception dispositions. I’ve turned them into scales & am calling one the “public safety risk perception” index and the other the “social deviancy risk perception” index.
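As an aside, here is a simulated sketch of what extracting two orthogonal risk-perception dimensions can look like; the item structure and numbers are invented for illustration, and principal components stand in for whatever factor-analytic method Kahan actually used.

```python
# Hypothetical sketch (simulated data, not Kahan's survey): two independent
# latent risk dispositions, each driving its own cluster of perception items;
# the top two principal components recover two orthogonal dimensions.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
safety = rng.normal(size=n)    # latent "public safety risk" disposition
deviancy = rng.normal(size=n)  # latent "social deviancy risk" disposition

# Four items load on each latent disposition.
items = np.hstack([
    safety[:, None] + rng.normal(scale=0.6, size=(n, 4)),
    deviancy[:, None] + rng.normal(scale=0.6, size=(n, 4)),
])

# PCA on the correlation matrix: eigh returns eigenvalues in ascending
# order, so the last two eigenvectors are the two largest components.
Z = (items - items.mean(0)) / items.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
top2 = eigvecs[:, -2:]  # loadings for the two largest components
scores = Z @ top2       # two orthogonal "risk perception" scales
print(round(abs(np.corrcoef(scores, rowvar=False)[0, 1]), 3))  # 0.0
```

Because the two latent dispositions are independent and each drives its own cluster of items, the top two components separate the clusters, and the resulting scale scores are uncorrelated by construction.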
 
Who are these guys, though? And why do they have these views?  What can we predict about how they might see other things?
 
What do you think alpha, beta, gamma & delta’s political outlooks (or “cultural worldviews,” if you would like a scheme that has 1/2 a chance at getting the right answer!)  are?
 
How religious are they?  How about science comprehension — do they vary & in what way?
 
And what do you think “they” think of vaccines? Which of these dimensions will pick up variance in perceptions of childhood-vaccine risk? Or how about nanotechnology or synthetic biology — things they’ve never heard of; will they react in predictable or predictive ways as they learn about them?
 
I can actually tell you the answers, I think, to some of these questions. Because for sure there is coherence. Numbers; patterns; everywhere in nature…