I may as well admit it: I love rankings. Whenever someone comes up with a new way to rank universities or political science departments (and this happens a lot) I go and check them out. The first thing I do, of course, is figure out where Georgetown (my university) ranks. So that’s what I did when I read this Chronicle of Higher Education article on a new ranking by the Faculty Media Project that “seeks to quantify how often professors engage with the public through the news media.”
Unfortunately, Georgetown was nowhere to be found. Strange, given that many of its faculty have a high media profile. It turns out the study only ranked universities that have doctoral programs in anthropology. That is an odd selection criterion given the study's purpose, but okay, so be it.
So which universities come out on top? The chart below shows them:
We would have expected MIT here, although not for its sociology department, which does not exist, as Kieran Healy points out. But the others are deeply puzzling. What’s going on here?
It turns out that after measuring media citations per faculty member, the study divides that number by the department's percentage share of total National Science Foundation funding across the participating departments.
This is, to put it mildly, an odd choice. It supposedly tells us something about return on investment. So, University of Arkansas professors are cited 1.86 times on average but get only 0.06 percent of NSF funding. This is, allegedly, a much better return on investment than Harvard professors, who are cited 13.93 times on average but get 1.4 percent of NSF funding (Harvard is ranked 17th). There are many ways in which this doesn't make sense, but let me point out two. First, NSF funding is given for basic research, not for commenting on local elections or other media appearances. Second, the University of Arkansas is a public university, which receives much more public money through channels other than the NSF.
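To see why this formula produces such counterintuitive results, here is a minimal sketch of the calculation as described, using only the two figures quoted above (average citations per faculty member divided by percentage share of NSF funding). The variable names are my own, not the study's:

```python
# Figures quoted in the text: average media citations per faculty member
# and the department's percentage share of NSF funding.
departments = {
    "University of Arkansas": {"citations_per_faculty": 1.86, "nsf_share_pct": 0.06},
    "Harvard": {"citations_per_faculty": 13.93, "nsf_share_pct": 1.4},
}

# The study's "return on investment" score: citations divided by funding share.
for name, d in departments.items():
    score = d["citations_per_faculty"] / d["nsf_share_pct"]
    print(f"{name}: {score:.1f}")
```

Because the denominator is a tiny percentage for Arkansas, its score (about 31) dwarfs Harvard's (about 10), even though Harvard's faculty are cited more than seven times as often. Dividing by a near-zero funding share inflates the score of any lightly funded department.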
Below is the ranking based on citations per faculty member. Looks more familiar, no?
I really don’t mind new rankings based on alternative criteria. More information is better. Surely there is a place for media citations in the broad array of measures we use to evaluate universities. Yet by dividing easily interpretable information (average media citations per faculty member) by a fairly arbitrary number, the rankings obscure rather than enlighten.