HF — Your book looks at the crucial role that the U.S. News & World Report rankings play in shaping law schools. Why did these rankings come to be so important and how are they calculated?
WNE & MS — The rankings emerged because they appealed to prospective students and their parents. The comparative information U.S. News provided was particularly valuable at a time when higher education was expanding, more people were going to college, and students were increasingly looking beyond their local or regional schools when making their choices.
The U.S. News law school ranking is based on an algorithm composed of four composite factors: reputation (based on surveys of administrators and practitioners), selectivity, placement, and faculty resources. While the general structure of the formula has stayed the same over the past 20 years, U.S. News has made tweaks when it believed it could improve the formula or wanted to discourage certain types of gaming strategies.
HF — Most law school deans seem to detest these rankings — you quote one who compares the rankings to a cockroach infestation, and another who wishes that al-Qaeda would go after U.S. News. Why do deans pay so much attention to the rankings if they hate them so much?
WNE & MS — The primary reason most deans pay attention to the rankings is that there are a number of external audiences — prospective students, current students, employers, boards of trustees — who either take the rankings at face value or use them to make decisions. Deans believe that rankings (no matter how questionable their methodology) can have real effects on their school as these external audiences decide where to go to school or whom to hire based on them. Many deans also fear losing their jobs if they don’t produce good numbers. This fear is warranted, given the number of deans and administrators who have lost their jobs as a result of not meeting expectations in the rankings.
HF — How do law schools game the rankings system and how is this creating a ‘race to the bottom’ with dubious employment statistics and the like?
WNE & MS — Law schools have developed a number of strategies to game the rankings. These strategies evolve as other schools mimic these practices (thereby reducing their benefits) or as U.S. News changes its methods to eliminate loopholes. Gaming strategies have included blatant misreporting of student statistics, hiring one’s own graduates to improve employment numbers, admitting more students to part-time or night programs to protect LSAT and GPA numbers, and even dictating the timing of faculty leaves to maximize student-faculty ratio measures.
One pernicious strategy involved changing how schools counted students as "employed." Before the rankings, schools typically counted as employed only those graduates with jobs in law. As the pressure to rise in the rankings increased, some schools realized that neither the ABA nor U.S. News specified what "being employed" meant. These schools then began to count any employment — taxi driver, fast-food worker, research assistant — in their statistics, and, not surprisingly, their numbers went up dramatically. Once other schools saw their rankings drop as a result of not adopting this diluted meaning of "employed," they changed their reporting practices, even if they thought this undermined the usefulness of the measure. Many schools began reporting employment rates close to 100 percent, even in tight job markets and even when they were not ranked highly. This issue became especially salient after the recession, when the job market for lawyers was bad and students with high debt were unable to find decent jobs. Many students came to see these numbers as false advertising, and some even filed class-action lawsuits.
HF — Many people suggest that hard numbers and statistical information provide greater accountability. Your book suggests that the politics of accountability are more complicated than they look at first. How do they play out for law school employees, for example in admissions offices?
WNE & MS — We often place great trust in numbers because we believe them to be more objective than other kinds of information. Quantitative data is an important form of knowledge, but we are often not curious enough about the terms of its production and its underlying assumptions. We often take numbers and their authority at face value. Our book suggests that we should be more critical consumers of numbers because any form of accountability is necessarily selective and therefore biased in some way.
Our book also draws attention to the unintended and sometimes undesirable consequences that accountability measures produce. Rankings have changed how schools make decisions, the kinds of work administrators and faculty are expected to do, and even how they see themselves. They have also produced a great deal of anxiety as people decide how to manage these consequential numbers and the moral implications of their actions. Numbers do not offer the easy recipe for accountability that their apparent simplicity and objectivity imply.