This is a guest post by Nathaniel Persily, James B. McClatchy Professor of Law at Stanford University and Senior Research Director for the Presidential Commission on Election Administration.
Earlier Wednesday, the Presidential Commission on Election Administration released its Report and Recommendations (pdf) to improve the voting experience in the United States. Unlike many others that have entered this fray, this commission was unanimous and bipartisan in its recommendations. Of particular interest to readers of this blog: the commission relied heavily upon the expertise of the nation’s top political scientists and election administration experts.
Although the most infamous issue that gave rise to the commission's creation was long lines on Election Day, the Executive Order creating the commission tasked it with addressing a wide range of election administration problems. The roughly 100 pages of recommendations and best practices in the report are equally wide-ranging. The principal recommendations of the commission are:
- modernization of the registration process through expansion of online voter registration and state collaboration in improving the accuracy of voter lists;
- improvement of access through expansion of pre-Election Day voting and selection of suitable, well-equipped polling place facilities, such as schools;
- endorsement of tools to ensure efficient management of polling places, hosted at the Caltech-MIT Voting Technology Project and available through the commission’s Web site;
- reform of the standard-setting and certification process for new voting technology, both to address soon-to-be antiquated voting machines and to encourage innovation and the adoption of widely available off-the-shelf technologies.
The report is the principal but not the only product of the ambitious project the commission undertook in its six months of operation. As noted in the recommendations, the commission sought out, and is publicizing on its Web site, tools that administrators can use to address two issues. The first set of Web applications allows administrators to predict wait times and allocate resources accordingly. (Especially impressive is the tool developed by Mark Pelczarski.) The second set, developed by Rock the Vote, helps states transition to online voter registration, and to do so in a way that allows any state-endorsed partner to facilitate direct voter registration through its Web site. The commission hopes that these tools will serve as a starting point and that local administrators will improve upon them and adapt them to their needs.
In addition, political scientists working closely with the commission conducted a nationwide survey of local election officials. Within the remarkably tight timeframe of the commission’s existence, Charles Stewart III (MIT), Stephen Ansolabehere (Harvard), and Daron Shaw (Texas) polled more than 3,000 local election administrators. The survey results and data are available in the report’s appendix.
The appendix also features more than 2,000 pages of testimony and political science research on election administration issues within the charge of the commission. In addition to the three lead researchers mentioned above, scholars such as Barry Burden (Wisconsin), Lisa Schur (Rutgers), Bob Stein (Rice), Paul Gronke (Reed), David Kimball (Missouri), Martha Kropf (UNC Charlotte), Brian Gaines (Illinois), Juan Gilbert (Clemson), Jeffrey Milyo (Missouri), Taeku Lee (Berkeley), Merle King (Kennesaw State), Michael Jones-Correa (Cornell), Donald Inbody (Texas State), and Ron Rivest (MIT) testified before the commission. Their research, as well as the testimony of an even greater number of election administrators, was critical in focusing the commission on the facts of election administration as we know them.
The last few pages of the report plead for greater data gathering and dissemination in this field. Elections are awash in data, not the least of which concerns the totals for winning and losing candidates. However, when it comes to systematic data concerning the mechanics and administration of elections, we tend to rely on surveys with sample sizes that limit inferences to the state level.
The issue of long polling place lines is an illustrative example. Media reports on long lines focus on battleground states, and usually on the most populous counties within those states. National surveys, such as those conducted by Charles Stewart, have identified “problem states” where respondents report long wait times. What we need, however, are nationwide wait-time data at the polling place level. As anyone who has visited Disney World with a child can attest, calculating wait times does not require 21st-century technology. But until every polling place in America uses its stopwatch for similar purposes, we will not have a clear picture of where the line problem occurs or of its multifarious causes.
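The resource-allocation tools endorsed by the commission are far more sophisticated than anything shown here, but the queueing arithmetic underneath them is standard. As a toy illustration only (not the commission's method), the sketch below uses the classic M/M/c (Erlang C) model to estimate average wait at a check-in table; the arrival rate, service rate, and station counts are invented example values.

```python
import math

def erlang_c_wait(arrival_rate, service_rate, stations):
    """Average queue wait (in the same time units as the rates)
    for an M/M/c queue, via the Erlang C formula.

    arrival_rate -- voters arriving per unit time
    service_rate -- voters one station can process per unit time
    stations     -- number of parallel check-in stations
    """
    a = arrival_rate / service_rate        # offered load
    rho = a / stations                     # utilization per station
    if rho >= 1:
        return float("inf")                # line grows without bound
    # Probability that an arriving voter must wait at all (Erlang C)
    top = a ** stations / math.factorial(stations)
    bottom = (1 - rho) * sum(a ** k / math.factorial(k)
                             for k in range(stations)) + top
    p_wait = top / bottom
    return p_wait / (stations * service_rate - arrival_rate)

# Hypothetical precinct: 2 voters/minute arriving, 1 minute per check-in.
for c in (3, 4, 5):
    print(f"{c} stations -> avg wait {erlang_c_wait(2, 1, c):.2f} min")
```

Even this crude model makes the planning point: with two voters arriving per minute and one-minute check-ins, two stations produce an unbounded line, while a third collapses the average wait to well under a minute.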
The same can be said for any number of issues: the failure to count provisional, absentee, and military ballots; the accessibility of polling places; and, perhaps the easiest and most important, the performance of voting technology. The lack of a nationwide data infrastructure hinders the kind of institutionalized learning and feedback loops that would allow our system of election administration to learn from itself. If the report of the Presidential Commission can push election officials to take even this minor, clearly nonpartisan step, it will have made a substantial, long-lasting contribution to improving the voting experience.