Lack of evidence prevented the court from weighing in on gerrymandering until now
Partisan gerrymandering — where state legislators draw district lines to entrench their power and weaken that of their opponents — has led to heated debate over the course of the United States’ history. On the one hand, political observers have long regarded it as a form of political “evil.” President Benjamin Harrison referred to gerrymandering as “political robbery.” On the other hand, Americans have also treated gerrymandering as a natural part of political life in a democracy. A handout for a 2011 National Conference of State Legislatures seminar refers to the idea of partisan manipulation of legislative districts as a “fact of life.”
The dual nature of gerrymandering has given the Supreme Court pause before wading into the “political thicket.” The last time the court took up the issue of partisan gerrymandering, in the 2004 case of Vieth v. Jubelirer, a plurality of four justices were vexed by the problem of “determining when political gerrymandering has gone too far” and “how much political motivation is too much.”
During the oral argument phase in Vieth, justices from across the court’s ideological spectrum asked pointed questions of attorneys about the quality of the social science on gerrymandering. Without data from multiple elections, it was difficult to be sure about the consequences of gerrymandering. One could not be sure that gerrymandering itself had caused underrepresentation of a political party, rather than other factors (such as weak voter mobilization, poor candidates or political geography). The lack of evidence also made it difficult for judges to discern a standard narrow enough to prevent an onslaught of partisan gerrymandering challenges in the courts.
Nevertheless, Justice Anthony M. Kennedy’s concurring opinion in Vieth held out the possibility that social science research could help solve the problem. As Kennedy put it:
Technology is both a threat and a promise. On the one hand, if courts refuse to entertain any claims of partisan gerrymandering, the temptation to use partisan favoritism in districting in an unconstitutional manner will grow. On the other hand, these new technologies may produce new methods of analysis that make more evident the precise nature of the burdens gerrymanders impose on the representational rights of voters and parties.
Political scientists have risen to Kennedy’s challenge
In the years since Vieth, political scientists have responded to Kennedy’s challenge by developing a variety of measures for identifying cases of extreme gerrymandering and isolating the effect of gerrymandering on parties’ ability to convert votes into seats. The plaintiffs in Gill rely on groundbreaking research by Eric McGhee and Nicholas Stephanopoulos. These political scientists have identified what they call the “efficiency gap,” a measure that captures the bias of a redistricting plan toward one party or another by examining the relative number of votes “wasted” by supporters of each party.
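To make the “wasted votes” idea concrete, here is a minimal sketch of how an efficiency gap might be computed for a two-party election. The function name, the tie-breaking details and the hypothetical vote totals are my own illustration, not drawn from McGhee and Stephanopoulos’s papers or the plaintiffs’ filings; variants of the measure differ, for example, in how the winning threshold is defined.

```python
def efficiency_gap(districts):
    """Signed efficiency gap for a two-party election.

    districts: list of (party_a_votes, party_b_votes) tuples,
    one per district. A positive result means party A wasted
    more votes, i.e., the plan disadvantages party A.
    """
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        district_total = a + b
        total += district_total
        # Votes needed to win; one common convention is half the
        # district total (a simple-majority threshold).
        threshold = district_total / 2
        if a > b:
            wasted_a += a - threshold  # winner's surplus votes are wasted
            wasted_b += b              # all of the loser's votes are wasted
        else:
            wasted_a += a
            wasted_b += b - threshold
    return (wasted_a - wasted_b) / total
```

For example, a plan that packs party A into one lopsided district and cracks it across two others, say `[(70, 30), (40, 60), (40, 60)]`, produces a gap of about 0.17 against party A, even though A won half the statewide vote.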
Yet, as Stephanopoulos notes, the “miracle drug” for solving gerrymandering is not the efficiency gap itself, but the kind of analysis it permits political scientists to carry out. Using efficiency gap analysis, Jowei Chen has isolated the effects of Wisconsin’s redistricting plan from other factors (such as the state’s political geography), showing the effects of gerrymandering itself. Simon Jackman has carried out analyses that illustrate the extreme effect of the Wisconsin redistricting plan when compared with hundreds of other plans from across the 50 states. Jackman also conducted sensitivity tests showing that Republicans would still win more than 50 percent of seats in the Wisconsin state legislature even if Wisconsin Democrats were hypothetically able to swing the election by 5 points, the largest Democratic wave in more than 40 years.
Some judges don’t like the science
While there are a variety of ways to measure gerrymandering, the consensus among political scientists is that, by any measure, Wisconsin’s 2010 redistricting plan is extremely biased (see Barry Burden’s roundup of the briefs submitted by political scientists here). Yet lawyers for the state of Wisconsin argue that the variety of metrics for evaluating gerrymandering is a vice, not a virtue:
Plaintiffs would have this court instruct district courts to evaluate the effects of alleged partisan gerrymanders by applying an unbounded variety of metrics. … Better to leave it to lower courts to figure it out in “subsequent litigation”; presumably only after having subscribed to Political Research Quarterly, American Political Science Review and other essential journals.
At oral arguments, conservative justices picked up this line of attack. Chief Justice John G. Roberts Jr. suggested that he was wary of taking the issue of gerrymandering “away from democracy” and throwing it into the courts to make decisions using what he described as “sociological gobbledygook.”
. . . [If] you’re the intelligent man on the street and the court issues a decision, and let’s say, okay, the Democrats win, and that person will say: “Well, why did the Democrats win?” And the answer is going to be because EG was greater than 7 percent, where EG is the sigma of party X wasted votes minus the sigma of party Y wasted votes over the sigma of party X votes plus party Y votes. And the intelligent man on the street is going to say that’s a bunch of baloney. It must be because the Supreme Court preferred the Democrats over the Republicans. And that’s going to come out one case after another as these cases are brought in every state. And that is going to cause very serious harm to the status and integrity of the decisions of this court in the eyes of the country.
Distrust in experts is real but not inevitable
Roberts has a point. As research by Matthew Motta suggests, public distrust of experts has increased in recent years — particularly on the right. This distrust is associated with weaker acceptance of anthropogenic climate change and the safety of nuclear power — issues where the scientific consensus is broad. Nevertheless, Motta shows that distrust in experts is not inevitable or invariable. Rather, it can be mitigated by higher levels of verbal intelligence.
Motta’s evidence points to another important reality. As Dan Drezner points out, increasing inequality and political polarization have spawned an “ideas industry,” in which “thought leaders” poach the language of social science to make fundamentally ideological arguments. This is a particularly severe problem in the United States. Compared to democracies like Germany or Denmark, where state institutions either mediate or coordinate the production of policy knowledge, public and private research organizations in the U.S. compete with one another to shape the policy agenda. What this marketplace produces, as I have argued elsewhere, is not consensus, but doubt and cynicism.
Yet Roberts’s questions in oral arguments downplay how social science has helped the court understand complex and historically significant issues. In the Supreme Court’s 1954 Brown v. Board of Education opinion, Chief Justice Earl Warren relied heavily on social science research by Kenneth Clark to refute the Plessy v. Ferguson doctrine of “separate but equal.” This evidence illustrated how school segregation created a powerful sense of inferiority among black students. Recently, a study of judicial reasoning in same-sex marriage cases concludes that, “most judges appeared to be savvy consumers of social scientific evidence, not easily duped by the misleading operationalization of core concepts, analytic procedures that failed to include proper statistical controls, or easily disproved claims of lack of scientific consensus.”
Social science can indeed make the difference at the Supreme Court. The political science on gerrymandering in Gill v. Whitford will be particularly important for Kennedy, who is likely to be the swing vote in this case. Unlike the other conservative justices on the court, Kennedy refrained from criticizing the social science studies during oral argument. What his silence means for the future of gerrymandering, and for the future of social science at the Supreme Court, remains to be seen.
Philip Rocco is assistant professor in the department of political science at Marquette University.
This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the Network is responsible for the article’s specific content. Other posts in the series can be found here.