Facebook chief executive Mark Zuckerberg speaks at the company’s headquarters in Menlo Park, Calif., on Jan. 15. (Jeff Chiu/AP)

Since the relatively early days of the Internet era, theorists have been worried about what the ability to choose your own stream of intellectual content would do to our politics. One core concern is that people would self-select what kind of news content to consume, and then repeatedly reinforce their own beliefs.

This could trigger a spiral of self-reinforcing polarization in which partisans become more and more sure of themselves — not only of their opinions, but of their facts. And indeed, we live at a time of record polarization — and also one in which people divide even over reality itself, disputing whether the Earth is really warming (it is) and whether humans are really causing it (they are).

It’s not just the Internet that drives this — if anything, the ability to select your ideologically preferred flavor of cable news is just as powerful. But the effect is certainly epitomized by a social media environment like Facebook. Here, if you’re skeptical of vaccines (for example), you’ll naturally tend to friend people who have a similar mindset, and you’ll all share each new bit of information you can find seeming to bash vaccines.

Such is what we tend to assume happens, anyway — which is why a massive new study of informational self-selection by three Facebook researchers (published in the journal Science, no less) is likely to create such sparks. For the researchers find that Facebook itself isn’t the only reason, or even the biggest one, that people on the site choose to consume information they already agree with.

Rather, they find, we humans largely put the blinders on ourselves.

Granted, the study finds that Facebook’s algorithms, which determine which items appear in your news feed on the site, do lead us to encounter “slightly less cross-cutting content,” as Facebook’s Eytan Bakshy and two colleagues put it.

Yet the study of more than 10 million Facebook users finds that the algorithm is nowhere near as big a factor as who our Facebook friends are to begin with (mostly people we agree with) and what we actively decide to click on out of all the items in our feeds (mostly things we agree with).

“The composition of our social networks is the most important factor limiting the mix of content encountered in social media,” the authors write. Later, they note that “individual choices more than algorithms limit the mix of content encountered on social media.” (People do click on some “cross-cutting” information on Facebook, the study found, but far less of it than they would have encountered in a random sample of what people share on the site.)

So is it really true that we’re all to blame for the informational polarization that occurs not just on Facebook, but in other social media outlets as well?

To answer the question, it helps to dip back into the history of thinking about this problem of people self-selecting information that flatters what they already believe. As it turns out, the concern about the Internet putting us on a diet of ideological junk food hails from a tradition of psychological research that long predates the web.

It was none other than the psychologist Leon Festinger, who famously came up with the idea of “cognitive dissonance” in the 1950s, who also touched off a still-growing body of research on “selective exposure,” or how people selectively choose to expose themselves to ideologically friendly or otherwise congenial information.

The two concepts are closely intertwined. Cognitive dissonance describes how people rationalize or explain away uncomfortable facts about themselves, or about the world, in order to maintain core beliefs and their sense of identity. Selective exposure is a logical offshoot: It suggests that we not only rationalize away challenges to our beliefs, but also avoid information that challenges us in the first place (and seek out information that is self-affirming).

So who is at fault, then, when an ever more massive array of media technologies — ranging from the proliferation of ideological talk radio, to the development of cable news, to the growth of social media — strongly empowers people to do even more of what they tend to do anyway?

Clearly, the psychological mechanisms of self-defense and self-affirmation explored by Festinger are powerful enough to make people ignore even the most incontrovertible truths. And this would happen in any media environment.

But just as clearly, environments that allow for high levels of ideological selectivity in what kinds of media we consume make selective exposure easier. And where that selectivity leads people to consume information that challenges mainstream science, misinformation can thrive and spread — as it has not only in the climate change arena, but in many others as well.

Which brings us back to Facebook. The new study is a tour de force: The volume of data the researchers analyzed is astounding, and their logic is impeccable. There are (at least) three separate factors that, once we’re on Facebook, determine how much we click on and read information that we already agree with. Only one of them, the news feed ranking algorithm, is Facebook’s; the other two come down to our own choices: whom we friend (the people who fill our feeds with what they consider worth sharing) and what we click on.

But the fact remains that without the march of media technology — of which Facebook is just one instance; cable news and talk radio are arguably just as ideologically polarizing — people today wouldn’t have such an easy time selectively exposing themselves to congenial information.

So social media outlets like Facebook are not entirely off the hook; but then, neither are they necessarily any more blameworthy than cable. All of these media technologies act, in effect, as enablers of who we already are. And given our psychologies, this kind of enabling can be a dangerous thing.