The Washington Post
Democracy Dies in Darkness

How party ID became partisan — and why it shouldn’t be

After the release of any -- and every -- swing state or national poll these days, the Fix Twitter feed and email inbox immediately fill up with messages that are some variation on this: "Party ID skewed! D+8!"

That's political shorthand for the belief that the party identification in the poll -- the partisan composition of the sample of people being surveyed -- is misaligned with the actual partisan composition of a state or the country and, therefore, is producing results that don't reflect the actual state of the race.

The problem with that argument? It's based on limited information and a series of false assumptions, none bigger than this: that because the country has been virtually evenly divided along partisan lines for the past decade or so, the party identification question should produce something close to a 50-50 split between Democrats and Republicans. That's not right.

Before we get to the reasons why, let's start at the beginning. (It's usually the right place to start.)

The simple truth is that Democrats have long enjoyed an edge on the party ID question. The Pew Research Center, widely considered the gold standard in nonpartisan polling, has scads of data to prove that point.

Here's their trend on party ID since 1992:

And here's the Pew party ID trend line when people who lean to either the Republican or Democratic party are pushed to their respective partisan side:

Just by way of comparison, here are the party ID numbers from the American National Election Studies:

Those numbers are broadly in line with the party ID figures currently being used by major national pollsters.  Here's what those party ID splits look like:

* September Washington Post-ABC poll: 32 percent Democrat, 26 percent Republican, 37 percent independent

* September NBC/WSJ poll: 32 percent Democrat, 25 percent Republican, 38 percent independent

* September CBS/NYT poll: 35 percent Democrat, 22 percent Republican, 43 percent independent

What the Pew numbers also make clear is that even in very good Republican years nationally (2004 and 2010 being obvious examples), more people still identify themselves as Democrats than as Republicans.

How can that be? Because turnout matters.  Republicans won in 2004 and 2010 not because their party identification numbers soared but rather because they turned out at a far higher rate than did Democrats.

Here's a chart on turnout by party since 1952 using data compiled from the American National Election Studies:

To put it in simple terms: Let's say we have a pool of 100 voters. Forty-five of them are Democrats, 35 are Republicans and 20 are independents or members of another party. If everyone votes -- and independents split their votes roughly evenly between the two parties -- Democrats would win. But if only 60 percent of those Democrats voted while 80 percent of the Republicans did (and independents again split evenly), Republicans would win.
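The arithmetic in that hypothetical can be sketched in a few lines of code. The numbers (45 Democrats, 35 Republicans, 20 independents, and the two turnout scenarios) come straight from the example above; the function name is just for illustration.

```python
def vote_totals(dem_turnout, rep_turnout):
    """Return (Democratic votes, Republican votes) for a hypothetical
    pool of 100 voters: 45 Democrats, 35 Republicans, 20 independents.
    Independents all vote and split their ballots evenly."""
    dem_votes = 45 * dem_turnout
    rep_votes = 35 * rep_turnout
    ind_votes = 20  # independents split 10-10 between the parties
    return dem_votes + ind_votes / 2, rep_votes + ind_votes / 2

# Everyone votes: Democrats win, 55 to 45.
print(vote_totals(1.0, 1.0))  # (55.0, 45.0)

# 60 percent Democratic turnout vs. 80 percent Republican turnout:
# Republicans now win, 38 to 37, even though Democrats outnumber them.
print(vote_totals(0.6, 0.8))  # (37.0, 38.0)
```

The point of the sketch is that the winner flips without a single voter changing party ID -- only turnout rates changed.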

But why, people ask, is party ID as measured by individual polling firms used as the standard rather than the party identification found in exit polls?

Let's first look at partisan identification at the presidential level in exit polls dating back to 1984 (chart courtesy of GOP pollster David Winston):

While the exit numbers are slightly less favorable to Democrats than the Pew party ID data, they still show clearly that Democrats have enjoyed an edge over time. (In only one race -- 2004 -- since 1984 have there been an equal number of people identifying as Republicans as Democrats.)

The one obvious difference between the exit polls and the current party compositions being used by national pollsters is that the percentage of independents is much lower in the exits. Why? Because in the immediate aftermath of making a partisan decision -- 99 percent of people cast a ballot for either a Democrat or a Republican -- people are far less likely to identify as an independent than they are when they field a call at their house in the middle of an election cycle.

And, perhaps most important, remember that an exit poll captures the composition of the last electorate -- not the composition of the next one. If pollsters had assumed that the 2008 electorate would look similar to the 2004 electorate, they would have badly missed the massive surge to Obama. Ditto if pollsters had used the 2006 midterm party composition (in which Democrats scored across-the-board victories) as the baseline for the 2010 midterm electorate, which much more strongly favored Republicans.

What all of the above points to is the reality that polling is equal parts art and science. The best of the best -- like the folks at the Post -- understand that putting together the sample for any poll involves weighing what we know about past electorates against what the electorate looks like today and what it will look like on Nov. 6.

Different firms do that in different ways. Some take the raw results of the poll and simply publish them. Others always weight the sample to reflect what they believe to be the likely composition of the electorate. Still others weight only when they think it's necessary -- in the case of a clearly skewed sample -- and not at other times.
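The weighting step described above amounts to simple post-stratification: each party-ID group's responses are scaled so the sample matches the composition the pollster expects the electorate to have. Here's a minimal sketch; the raw counts and target shares are made-up numbers for illustration, not any firm's actual figures.

```python
# Hypothetical raw sample of 1,000 respondents by party ID.
raw_sample = {"Democrat": 400, "Republican": 250, "Independent": 350}

# Assumed target composition of the electorate (hypothetical).
targets = {"Democrat": 0.32, "Republican": 0.26, "Independent": 0.42}

total = sum(raw_sample.values())

# Weight for each group = target share / observed share. Each respondent's
# answers are then multiplied by their group's weight, so the weighted
# sample matches the assumed electorate.
weights = {party: targets[party] / (count / total)
           for party, count in raw_sample.items()}

for party, w in weights.items():
    print(f"{party}: weight {w:.2f}")
```

In this made-up case Democrats were over-represented in the raw sample (40 percent observed vs. a 32 percent target), so their responses get a weight below 1, while Republicans and independents are weighted up. Real pollsters typically weight on several demographics at once, not party ID alone.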

Alleging bias is, of course, easier than digging deep into the realities of why party ID in polling looks the way it does. But that doesn't make it right.