For decades, U.S. News & World Report rankings distorted schools’ decisions about which students to admit and how to allocate their scarce aid dollars (often throwing them at richer kids with higher test scores).
Then, a second generation of rankings came along, one intended to measure how hard schools were working to enroll high-potential, low-income students who could benefit most from a college degree.
Publications such as the New York Times and Washington Monthly published their own rankings, emphasizing admission and outcomes for Pell Grant-eligible students. U.S. News & World Report eventually introduced its own “social mobility” measure, also pegged to Pell-eligible students. And, in 2017, a team of top-tier economists produced a college report card based on intergenerational mobility. Using tax data, they calculated which schools launched the most low-income students into higher income percentiles than their parents.
State and federal policymakers (including a bipartisan group of U.S. senators) have also pursued legislation that would link schools’ funding to these or similar measures.
Thanks to this public scrutiny, the share of Pell-eligible students has indeed been rising at a number of top schools. Which, in isolation, is a good thing. We want more poor kids getting recruited, matriculated, graduated.
But these well-intended new rankings have also produced some unintended consequences. For that, you can blame Goodhart’s Law: When a measure becomes a target, it ceases to be a good measure.
In particular, colleges appear to be gaming the Pell-based benchmarks to which they know the media and policymakers are paying attention.
Stanford University professor Caroline Hoxby and University of Virginia professor Sarah Turner
have found that the schools that have made the most progress in increasing their numbers of Pell-eligible students appear to be doing so partly at the expense of other low-income students — specifically, those whose families make just a few too many dollars to qualify for Pell grants.
Members of that latter group — virtually indistinguishable from their counterparts barely on the other side of that arbitrary federal Pell threshold — receive substantially less generous school-awarded financial aid. Their representation on these campuses has also fallen over the past few years, even as that of Pell-eligible students has risen.
Meanwhile, rich students remain very much overrepresented.
“We’ve seen this hollowing out of the middle- and lower-middle income kids, who aren’t interesting to anybody anymore,” Hoxby said. “They’re not particularly well off and they still do need a lot of financial aid.”
Popular “economic opportunity” measures can cause other distortions, too — especially if tied to public funding.
For instance, public flagships in states whose populations are relatively low-income (such as Maine) would be rewarded for enrolling large numbers of poor kids, even though poor, academically qualified kids are still underrepresented on their campuses. The reverse is true for schools in states that are relatively affluent yet still manage to enroll a disproportionately high share of poor students (the University of Connecticut, for example).
Intergenerational mobility rankings can also penalize schools in states where there’s a lot of income equality.
By national standards, for instance, Wisconsin has few people who are either very poor or very rich. As a result, the University of Wisconsin looks bad on national income mobility rankings, even though it enrolls a lot of students from the lower end of the state’s own income distribution.
It’s easy to nitpick other people’s metrics, of course. Coming up with a better alternative is harder.
In an article for the scholarly journal Education Next, Hoxby and Turner propose the following: Look at the pool of students from which a college could plausibly draw, based on its academic mission and location. Then measure how well the “relevant pool’s” income distribution is actually represented on campus.
For flagship public schools, this is relatively straightforward. For other schools, such as selective private institutions that draw applicants from around the country (Harvard, etc.), it's more difficult but not impossible, especially if schools are transparent about what their mission is and what their standards are supposed to be.
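The basic logic of that comparison can be illustrated with a toy calculation. The sketch below uses made-up quintile shares and a hypothetical `representation_ratios` helper; it is not Hoxby and Turner's actual methodology, only the underlying idea of comparing a campus's income distribution to that of its relevant pool:

```python
# Toy illustration (made-up numbers, not Hoxby and Turner's method):
# compare the income distribution of a school's "relevant pool" of
# plausible applicants with that of its enrolled class.

def representation_ratios(pool_shares, enrolled_shares):
    """For each income bracket, return the enrolled share divided by the
    pool share. A ratio of 1.0 means the bracket is represented on campus
    in proportion to its presence in the relevant pool."""
    return {bracket: enrolled_shares[bracket] / pool_shares[bracket]
            for bracket in pool_shares}

# Illustrative quintile shares for a hypothetical flagship:
pool = {"bottom": 0.20, "second": 0.20, "middle": 0.20,
        "fourth": 0.20, "top": 0.20}
enrolled = {"bottom": 0.12, "second": 0.14, "middle": 0.16,
            "fourth": 0.22, "top": 0.36}

for bracket, ratio in representation_ratios(pool, enrolled).items():
    print(f"{bracket:>6}: {ratio:.2f}")
```

In this invented example, the bottom quintile comes out at 0.60 (underrepresented) and the top at 1.80 (overrepresented) — a pattern a Pell-share headline number alone would not reveal, since it says nothing about the near-Pell and middle-income brackets in between.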
The goal is not to produce a sexy new ranking; in fact, Hoxby and Turner explicitly say their aim is not to rank all universities, since different schools have different missions. Rather, the goal is to give educators and policymakers better tools to hold schools accountable — to the public, and to themselves.