In a speech that day at the University at Buffalo, Obama made his intentions clear: The federal government would rate colleges in a way that would offer consumers something very different from the rankings published by U.S. News & World Report and other analysts.
“First, we’re going to start rating colleges not just by which college is the most selective, not just by which college is the most expensive, not just by which college has the nicest facilities — you can get all of that on the existing rating systems,” Obama said. “What we want to do is rate them on who’s offering the best value so students and taxpayers get a bigger bang for their buck.”
His vision quickly ran into numerous questions: How would “best value” be defined? What data should be used? Would the data be reliable? The biggest question was whether it was fair for the U.S. government to inject itself so directly into a market with thousands of institutions, public and private, and myriad specialized niches. Everyone acknowledged it would be unfair to compare an Ivy League institution with a state university that serves a broader population of students.
But even within niches, there were variations that made federal comparisons problematic. A historically black college or university, for instance, could be public or private, large or small. It might have selective admissions, or it might allow almost any applicant to enroll. It might admit only women, or only men.
The Education Department canvassed the country for feedback. Much of it was blunt. University of California President Janet Napolitano, who had been Obama’s homeland security secretary, said in December 2013 she was “deeply skeptical” that the federal government could develop meaningful criteria for ratings. “There will be so many exceptions, once you get down to it,” Napolitano told The Post.
Education Secretary Arne Duncan defended the rating plan but acknowledged that there were challenges. “If it’s overly complicated, you add to the noise, not to the clarity,” Duncan said a few days after Napolitano’s remarks. “So we’re trying to come up with something that is simple and meaningful and adds greater transparency.”
A year later, the department disclosed a “draft framework” for ratings. It raised the possibility of including data on graduate employment and earnings. It also contemplated rating schools as high performers, low performers or “in the middle.”
Now the department is signaling a new approach: Give students and families tools to sort and compare colleges themselves.
“We have decided the best way to rate colleges is to put the information and the tools in the hands of people who want to make those comparisons,” Ted Mitchell, undersecretary of education, said Thursday. Mitchell said that by the end of summer the administration will unveil new Web sites that will allow “dynamic” interaction with federal data. Officials call it a “college ratings tool.”
“We really want it to be revolutionary,” Mitchell said.
There already are federal Web sites to help consumers and researchers navigate the market, including College Navigator and College Scorecard. To make a splash, the new effort would need to give consumers better access to existing data on such metrics as tuition, financial aid and graduation rates, and possibly new access to data that has been hard to find or impossible to get. Many consumers want to know more about how much graduates from particular programs earn when they first get out of college and when they are at mid-career. College officials, though, are ambivalent about publication of such data.
Critics of the federal rating plan expressed relief Thursday at the shift.
“The department’s decision to abandon an arbitrary college rating system is a win for students and taxpayers,” Reps. John Kline (R-Minn.) and Virginia Foxx (R-N.C.) said in a statement. “This unprecedented scheme would have ultimately discouraged innovation, reduced access for disadvantaged individuals, and used limited taxpayer dollars to reward institutions that put the department’s priorities before students.”