A story in yesterday’s New York Times suggests that after this year’s election, the U.S. political parties might struggle over whether to redesign our primary system. But before we think about potential changes, let’s examine the unique system we have today and expose two myths usually told about how we got here.
In most other democracies, party institutions still screen candidates before voters weigh in. In Britain, for example, to become the Labour Party’s candidate for prime minister, you need to be nominated by 15 percent of Labour’s members of Parliament; Conservative MPs, similarly, narrow their party’s field to just two candidates. The wider party membership then chooses from this narrowed field, although only about 1 percent of registered voters are party members (compared with 60 percent or so in the United States), because party membership entails more significant obligations.
But starting in the 1970s, the United States stumbled — and I do mean stumbled — into a system that eliminated any meaningful role for party figures. Instead, unmediated popular participation, through caucuses and primary elections, came to control the way we choose presidential nominees.
That uniquely populist system, which we now take for granted, has culminated in our current, stunning moment. Two essentially freelance, independent political figures — Donald Trump and Bernie Sanders — will either represent, or come surprisingly close to representing, the nation’s two major parties in the 2016 election.
Let’s consider two myths usually told about how we got to this point.
Myth 1: Once upon a time, party bosses alone decided on the nominee.
For most of the 20th century, from 1912 to 1972, we used one system for presidential nominations. The conventional story about this “old” system is that party bosses got together in smoke-filled back rooms and chose the parties’ candidates, without much popular input. But that old system was actually far more complex and would be more accurately described as a “mixed system.” Starting in 1912, when Teddy Roosevelt pressed for primary elections to enable him to challenge his own party’s incumbent president, this mixed system included some primary elections, but they didn’t dominate.
As late as 1968, only 16 or 17 states held primaries, and those primaries selected fewer than half the delegates. The other delegates were institutional party figures from the national, state and local levels (some chosen because of their positions in government or the party organization, others chosen through internal party-selection processes).
In this mixed system, the popular primaries and the party leaders checked and balanced each other’s influence. No committee designed the system in a single moment to create the “perfect” mix of popular and party roles; as often happens with democratic institutions, the system emerged from competing pressures over time.
Nonetheless, primaries kept the system from being too closed. “Outsiders” could challenge existing party hierarchy and orthodoxy and force the parties to remain responsive, at least up to a point. Meanwhile, the institutional party figures had incentives to put their weight behind candidates likely to hold the party’s factions together, run a competitive election, govern effectively and reflect the party’s general ideology.
Primaries enabled candidates to show skeptical party leaders that they could win votes — as happened when John F. Kennedy won the West Virginia primary in May 1960 and proved that voters were ready to support a Catholic. Even an insurgent candidate, such as Barry Goldwater in 1964, could successfully work the mixed system.
But no candidate could succeed without also winning over enough institutional party figures throughout the country. In 1960, for example, Kennedy won only 10 primaries; to win the nomination, he therefore had to persuade enough party regulars to back him. Candidates who ran in the primaries were thus always constrained to keep party regulars on board as well. Personal appeal mattered, but so did the ability to put together coalitions within the party. And party figures could bring to bear more personal knowledge than voters had of how candidates actually functioned in government, knowledge that could potentially weed out nominees temperamentally unsuited to governing.
Under this system, some candidates “ran” on the inside track. For instance, the Democrats nominated Adlai Stevenson in 1952, even though he had not run in any primary. Others, such as Kennedy, effectively took advantage of the outside track to demonstrate their popular appeal. Whichever path a candidate took, this system combined populist and party-centered features.
This mixed system collapsed after 1968
This old system was dismantled, of course, after the turbulent 1968 Democratic convention in Chicago, at which the Democratic Party was ripped apart by the Vietnam War. Outside the convention hall, there were violent confrontations between Mayor Richard Daley’s police force and tens of thousands of anti-war demonstrators. Inside, many Democrats were outraged that the convention chose the establishment candidate, Vice President Hubert Humphrey, who supported the war — despite the fact that he had not won any primaries, and the party caucuses he won were even more Byzantine than those we have today.
Remarkably, the dismantling that followed was radical and immediate, and it took a largely unintended path.
Within two election cycles, the United States had a populist-dominated selection process unlike that of virtually any other country in the world. By 1976, the system had changed completely: more than 30 states were using presidential primaries, with a majority of delegates chosen, in effect, through those primaries (today, more than 40 states use primaries).
But strikingly, this change to one of our most important democratic institutions was not the intended aim of many reformers. Indeed, this change was exactly the opposite of their intent.
Myth 2: The post-1968 reforms were designed to create a populist, primary-dominated system.
This brings us to the second myth about the history of presidential nominations. The conventional story is that after the 1968 turmoil, the Democratic Party created the all-important McGovern-Fraser Commission, which aimed to open up the process and therefore created our modern primary system. The first half of that story is true, but the second is not.
The Democratic Party indeed had to respond to demands for change and a more open selection process. But the commission was not trying to create a purely populist, primary-controlled system that essentially eliminated the voice of the institutional party figures. In fact, the commission wanted to avoid that result through reforms that would preserve a critical role for the party itself.
One of the commissioners was Austin Ranney, a prominent political scientist who throughout his career had aimed to strengthen the parties, not to hollow them out. He described the mismatch between what the commission had meant to do and what actually happened when its recommendations were implemented:
I well remember that the first thing we members of the Democratic party’s McGovern-Fraser commission (1969-72) agreed on … was that we did not want a national presidential primary or any great increase in the number of state primaries. Indeed, we hoped to prevent any such development by reforming the delegate-selection rules so that the party’s non-primary process would be open and fair, participation in them would greatly increase, and consequently the demand for more primaries would fade away. … But we got a rude shock. … We accomplished the opposite of what we intended.
What had the commission actually intended to do? And how did we end up instead with our current primary-dominated system?
The recommended reforms aimed to make the caucus system more open, more transparent and more accessible to all Democrats. Until then, the caucuses were often open only to those who held party office. Some states chose delegates an entire year before the campaign began. Even when the caucuses were nominally open, anyone who wasn’t a party official had a hard time finding out where and when they were. In some cases, different parts of the state might caucus on different days.
The McGovern-Fraser Commission recommended ways to make the caucus process more open and representative. But it did not push for a greater role for primaries, nor for reducing the institutional party’s role. Indeed, the hope was that the recommended reforms would legitimize a continuing central role for the institutional party.
So why did states respond to these recommendations by shifting overwhelmingly to primary elections? That’s something of a mystery, as James Ceaser notes in his magisterial history of our presidential selection processes (from which I borrow heavily here). The new caucus rules were complex. Failing to follow them properly could lead a state delegation to be disqualified at the convention. Apparently, party leaders in many states thought primaries would be simpler and safer.
Republicans were pulled down the same path, partly because in many states where Democrats controlled the legislature, lawmakers passed statutes creating primaries for both parties. And as the more open and participatory Democratic processes attracted greater media attention, Republicans also felt the need to move in the same direction.
Despite its accidental birth, that’s the origin of the populist, primary-dominated system we have today — a system that has virtually eliminated any filtering or mediating role for the institutional party and made our current moment possible. As this “modern” system was taking shape, leading political scientists warned that it
might lead to the appearance of extremist candidates and demagogues, who unrestrained by allegiance to any permanent party organization, would have little to lose by stirring up mass hatreds or making absurd promises.
After this fall’s election, there might be pressure to recapture more of a role for the institutional parties, as well as competing pressure to make the system even more populist and participatory. The election’s outcome will have much to say about the future of our nomination process.