The guru of college rankings stood up one afternoon this week in Washington to ask a few questions about President Obama’s college rating plan.
Robert Morse, director of data research for U.S. News & World Report and leader of its famous annual college rankings, wondered exactly how the government planned to meet what appears to be a September 2015 launch date for a new federal system to rate colleges on measures of value and access.
So far, the system is still under development. No draft plan has been released for implementing the initiative that Obama launched with a speech in August. Yet there are some 5,000 degree-granting institutions in the United States, and if they are all to be rated credibly and effectively, time is growing short.
“Nobody should doubt at all that this is a big, big, big, big data project,” Morse said Thursday at a federal symposium to examine the technical details of what is known as the Postsecondary Institution Ratings System, or PIRS. Morse knows whereof he speaks. He has been tilling these fields for decades.
More questions from Morse: Exactly who is in charge of the ratings? Who will decide whether the methodology, the weighting of factors in formulas, and the categories of institutions all make sense? How will those decisions be reviewed? The administration appears to be seeking to group and compare schools with “similar missions.” Why would it not use the well-vetted Carnegie classification of colleges that is the standard in higher education?
And another big question: What will the government do to ensure colleges do not misreport the raw data that feeds into the ratings? Will it be a surprise, Morse wondered, when the Department of Education digs into its own data for analysis “and quickly finds out that it is not the gold standard?”
After all, Morse pointed out, in those well-publicized instances in recent years when colleges have submitted erroneous data to the U.S. News rankers, they have generally submitted the same bad numbers to the federal government.
“This is a real opportunity,” Morse said, for the government “to take a tougher stance” on data integrity.
He said U.S. News welcomes the prospect of federal ratings, which would be “very different” from his annual college analyses. The U.S. News lists, Morse said, seek to determine which schools are tops in undergraduate academic quality. Obama’s goals are focused on access, affordability and outcomes. The president envisions using metrics on the number of students in financial need, average tuition and loan debt, graduation and transfer rates, graduate earnings and the number of graduates who obtain advanced degrees.
John Q. Easton, director of the federal Institute of Education Sciences and acting commissioner of the National Center for Education Statistics, said after Morse spoke: “I really took Bob Morse’s slides to heart, and I hope you all did, too.”
The symposium gathered a number of experts and advocates to chew on some of the nuances and challenges underlying the initiative. For starters, there is the problem that most students who are not affluent pay little attention to rankings or ratings when choosing a college.
Then there is the distinction the government is drawing between numerical rankings, of the sort U.S. News produces, and the ratings Obama wants, which would grade schools without imposing any ordinal numbers on particular institutions.
“It reminds me a little bit of casino owners insisting they are in the gaming business, not the gambling business,” joked Kevin Carey, director of education policy at the New America Foundation. Carey wondered whether journalists or other independent analysts might somehow use the data underlying the government’s rating system to produce numerical rankings despite the administration’s wishes.
Mark S. Schneider, vice president of the American Institutes for Research and former commissioner of the National Center for Education Statistics, is an advocate of producing more information about salary and career outcomes for college graduates. But he wondered about the appropriate federal role.
“My belief is the government should create the database and let everyone else figure out what to do with it,” Schneider said.
There was debate about whether a ratings system should be clear, simple and understandable, or nuanced and therefore more complex, so as to account for all of the differences in scale and mission among colleges and universities. Is the system primarily for educating consumers or for holding schools accountable? Or both?
How should community colleges be rated when many of their students are not seeking degrees at all but instead are pursuing certificates or just taking a couple of classes? And of those who are seeking degrees, many transfer to four-year schools without completing an associate’s degree. Shouldn’t that be counted as a success? If so, how will the government track it?
If outcomes are not properly measured, said Patrick Perry, a vice chancellor of California’s huge community college system, “things start to get more dicey for community colleges.”
Tod Massa, director of policy research and data warehousing at the State Council of Higher Education for Virginia, said the federal government’s first priority should be to get better college data. He said Virginia and other states are ahead of Washington on that front. For instance, Virginia is publishing detailed information about salaries of college graduates. Virginia’s data on college graduation rates is also more complete than what the federal government has.
“We don’t need to do a lot here — other than collect better data and build on what’s being done in the states,” Massa said.