That means more weight was given to college graduation rates and less to the rankings that incoming freshmen earned in high school. (With fewer and fewer high schools actually ranking students, that data point has become unavailable.) The rankers also used data on college admissions exams, how well colleges retain their students, financial and faculty resources and more.
Somehow, though, the top 10 schools look mighty familiar: Last year, Princeton and Harvard were tied for No. 1 in the national universities category. This year Princeton is No. 1 and Harvard is No. 2. Do you really think anything changed at Harvard for that to have happened? In 2012, Yale was third, and this year Yale is ranked third. Last year Columbia and the University of Chicago were tied at fourth. This year Columbia solely claims fourth, while Chicago shares fifth place with Stanford University, which last year was sixth, tied with MIT. Last year Duke tied for eighth place with the University of Pennsylvania; this year MIT is tied with Duke and the University of Pennsylvania for seventh place. No. 10 on the list this year is a tie between Dartmouth College and the California Institute of Technology, the same as last year.
There were, it should be noted, bigger changes further down the list. My colleague Nick Anderson writes about that here.
U.S. News calls its methodology changes for 2014 “significant,” yet if you look at the explanation of how much weight each data point got, you find that the most subjective one, a school’s reputation, still carries 22.5 percent of the whole. The magazine says:
Undergraduate academic reputation (22.5 percent): The U.S. News ranking formula gives significant weight to the opinions of those in a position to judge a school’s undergraduate academic excellence. The academic peer assessment survey allows top academics – presidents, provosts and deans of admissions – to account for intangibles at peer institutions such as faculty dedication to teaching.
Really? Officials at competitive schools are supposed to accurately gauge faculty dedication to teaching at other schools? Do you think they can do that accurately for all faculty at their own schools? Over the years, I’ve spoken to presidents and provosts and deans of admissions, and some believe they can do a decent job of evaluating competitors. Others don’t, but that doesn’t stop some of them from filling out the survey anyway. (Yes, the magazine allows those being surveyed to say that they “don’t know,” but that raises the question of how many “don’t knows” there are vs. actual answers.)
To help with this data point, U.S. News says 2,202 counselors at public high schools were also surveyed, plus 400 counselors at private schools. And wouldn’t you know it? Most of those counselors come from schools that won a gold, silver or bronze medal in last April’s U.S. News rankings of Best High Schools. “Nearly every state and the District of Columbia” are represented in the counselor pool.
The magazine doesn’t tell you that a 2012 report on the state of college admissions issued by the National Association for College Admission Counseling said that counselors don’t think the U.S. News rankings accurately represent information about colleges.
Here are the other factors and their weights in the overall U.S. News rankings calculations:
Retention 22.5 percent
Faculty resources 20 percent
Student selectivity 12.5 percent
Financial resources 10 percent
Graduation rate performance 7.5 percent
Alumni giving rate 5 percent
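These weights, together with the 22.5 percent for reputation, add up to 100 percent, which implies a simple weighted sum. Here is a rough sketch of how such a formula would work; the subscore names, the 0-to-1 normalization and the example numbers are my assumptions, since U.S. News does not publish its exact computation:

```python
# Hypothetical sketch of a weighted-sum ranking score using the
# published 2014 weights. Subscore names and the 0-1 normalization
# are assumptions, not U.S. News's actual internal method.

WEIGHTS = {
    "reputation": 0.225,
    "retention": 0.225,
    "faculty_resources": 0.20,
    "student_selectivity": 0.125,
    "financial_resources": 0.10,
    "graduation_rate_performance": 0.075,
    "alumni_giving": 0.05,
}

def overall_score(subscores):
    """Combine normalized (0-1) subscores into a single 0-100 score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%
    return 100 * sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# Invented example: strong on reputation and retention, weaker elsewhere.
example = {
    "reputation": 0.9,
    "retention": 0.95,
    "faculty_resources": 0.7,
    "student_selectivity": 0.8,
    "financial_resources": 0.6,
    "graduation_rate_performance": 0.65,
    "alumni_giving": 0.4,
}
print(round(overall_score(example), 1))  # prints 78.5
```

Notice how the two 22.5 percent inputs, reputation and retention, dominate: together they count for almost half the total.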
A lot of the weight in these calculations depends upon the wealth of a school. Is that fair?
Regarding a school’s selectivity rating, the magazine says:
A school’s academic atmosphere is determined in part by the abilities and ambitions of the students. We use three components: We factor in the admissions test scores for all enrollees who took the Critical Reading and Math portions of the SAT and the Composite ACT score (65 percent of the selectivity score); the proportion of enrolled freshmen at National Universities and National Liberal Arts Colleges who graduated in the top 10 percent of their high school classes or in the top quarter at Regional Universities and Regional Colleges (25 percent); and the acceptance rate, or the ratio of students admitted to applicants (10 percent).
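The 65/25/10 split the magazine describes is another weighted sum. A rough sketch, with my own assumptions filled in (inputs as 0-to-1 fractions, and the acceptance rate inverted so that more selective schools score higher, a detail U.S. News does not spell out):

```python
def selectivity_score(test_score_pct, top10_pct, acceptance_rate):
    """Hypothetical selectivity subscore using the stated 65/25/10 split.

    All inputs are fractions in [0, 1]. Inverting the acceptance rate
    (1 - rate) so that more selective schools score higher is an
    assumption; U.S. News does not publish that detail.
    """
    return (0.65 * test_score_pct
            + 0.25 * top10_pct
            + 0.10 * (1 - acceptance_rate))

# Invented example: strong test scores, 95 percent of freshmen from the
# top 10 percent of their class, and a 5.8 percent acceptance rate.
print(round(selectivity_score(0.98, 0.95, 0.058), 3))  # prints 0.969
```

The 10 percent weight on acceptance rate looks small, but as the next section argues, it still gives schools an incentive to drum up applications they will reject.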
Hmmm. SAT and ACT scores don’t do a good job predicting how well students will do in college, much less life. Why is that a measure?
Another hmmm: acceptance rates. In recent years, in part because the Common Application has made it somewhat easier to apply, students apply to more institutions. Schools are getting more applications, but those numbers don’t necessarily reflect a larger pool of qualified applicants. So this year, Harvard University’s admission rate dropped to 5.8 percent, down from 5.9 the previous year. But are there really more super-qualified students in that pool, or are there more students who don’t really have a chance sending in applications anyway?
I could keep right on hmmming but I won’t.