(I note from the start that my 19-year-old daughter attends GWU, so you’d think I would have a personal interest in this as well as a professional one. I don’t; I am as unimpressed with rankings as a parent as I am as a journalist.)
Here’s what happened: The university, which this year was in a three-way tie for 51st place in the National Universities category (with Boston and Tulane universities), revealed that it had been giving U.S. News exaggerated data on the percentage of freshmen who had graduated in the top 10 percent of their high school class.
The school said the figure for last year was 78 percent, but, in its mea culpa, said it was actually 58 percent. (You can read more about the disclosure here.) But, as it turns out, 58 percent isn’t really 58 percent; it is 58 percent of the 38 percent of students for whom class ranking data was available. And there’s the rub: Many — probably a majority — of high schools don’t give class ranks, raising questions about the significance of this data point for the rankings, and about whether other schools have reported it accurately. Montgomery and Fairfax county schools don’t, for example. Neither do many of the country’s most prestigious private schools.
My colleague Nick Anderson figured out that for 2011-12 freshmen at Yale, 97 percent were ranked in the top 10 percent of their high school classes; at Harvard, 95 percent; at Princeton, 93 percent. BUT, at Yale, the 97 percent is really 97 percent of the 31 percent of freshmen for whom there was class rank data. At Harvard, it’s 95 percent of the 60 percent of freshmen for whom class rank was available. At Princeton, it’s 93 percent of the 30 percent of freshmen for whom there was class rank information.
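The percent-of-a-percent arithmetic can be made concrete with a short sketch. (The function name here is mine, purely for illustration; the figures are the ones quoted above.) Multiplying the two percentages gives the share of an entire freshman class that is actually confirmed to be in the top 10 percent:

```python
def known_top_tenth_share(pct_of_ranked: float, pct_with_rank: float) -> float:
    """Percentage of the WHOLE freshman class confirmed to be in the
    top 10 percent of their high school class, given (a) the reported
    top-10-percent figure and (b) the share of freshmen who had a rank."""
    return pct_of_ranked / 100 * pct_with_rank

# Figures quoted in this column (both as percentages):
schools = {
    "GWU":       (58, 38),
    "Yale":      (97, 31),
    "Harvard":   (95, 60),
    "Princeton": (93, 30),
}

for name, (of_ranked, with_rank) in schools.items():
    share = known_top_tenth_share(of_ranked, with_rank)
    print(f"{name}: only {share:.1f}% of all freshmen are known top-10-percenters")
```

So Yale’s impressive-sounding 97 percent works out to about 30 percent of all freshmen with a confirmed top-10-percent rank; GWU’s corrected 58 percent works out to about 22 percent. The rest are simply unknown.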
If this sounds cockamamie, that’s because it is.
Of course, the magazine’s ranking methodology overall gives rise to skepticism about the worth of the entire ranking enterprise. The U.S. News college rankings have taken on a life of their own, becoming all-important to families desperately looking for information about higher education and to colleges and universities that bend over backwards to get a higher ranking to attract students.
People on both sides of this ignore the fact that rankings are tremendously flawed. The largest factor in this supposedly objective statistical analysis — worth 22.5 percent for National Universities — is the combined subjective assessment of a school’s reputation by academics from rival institutions and by high school college admissions counselors. There are legitimate questions about other data points used in the analysis too, but the bigger issue about determining quality is captured in this bromide: “Not everything that counts can be counted.” (Albert Einstein is often said to be the author, but he probably wasn’t.)
So what we have is flawed information in a flawed data category in a flawed overall methodology for college rankings that shouldn’t matter to anyone, but do. Correcting a flawed statistic that is used in a flawed methodology doesn’t correct that flawed methodology.
GWU officials thought the school would fall a few spots in the ranking and were surprised to be dropped from the list entirely. After all, Claremont McKenna College in California and Emory University in Atlanta acknowledged this year that they had inflated SAT scores of incoming freshmen in public reports, but their rankings weren’t affected.
It just may be that for George Washington University, going unranked for 10 months will turn out to be a good thing. Some students there are upset that their school is now unranked — as if suddenly something has materially changed at the school. Nothing has. Classes will still be taught, degrees will be awarded, research will be conducted, applications will be received. Life without college rankings. Where’s the flaw in that?