In a market as large, diverse and complex as American higher education, it is no surprise that consumers demand information to help make order out of chaos. However, in order for such information to be truly useful, it has to mean what it purports to mean.
"Best Colleges" leaves one important question unanswered: "Best for whom?" While many colleges and universities are ranked, there are exponentially more students, each of whom may derive a different benefit from a different college. In other words, what is best for one student might not be best for another.
Two years ago, the National Association for College Admission Counseling convened a committee of high school counselors and college admission officers to consider some long-running questions about rankings. Both groups, particularly school counselors, hold rankings in low regard. But both also find that the U.S. News publication offers useful information, rankings notwithstanding. The committee emphasized two long-standing criticisms:
1) The rankings, and the methodology on which they are based, report subjective conclusions as objective measures and result in broad misperceptions about college quality. Using "input" variables, like SAT/ACT scores, to assess a college's quality is like judging a person's character by the wealth of the people they associate with. The rankings' heavy reliance on peer review (the opinions of college presidents, admissions officers and counselors) yields assessments of institutions that rest on opinions — opinions that are, at times, un- or under-informed.
Moreover, academic research has shown that we could produce an almost infinite array of rankings by tweaking the weights assigned to each variable in the U.S. News methodology. Among the top 75 schools, we may get 75 different lists just by changing the amount of emphasis we place on a single variable.
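The point about weight sensitivity is easy to demonstrate with a toy calculation. The sketch below uses invented school names, indicator scores and weights (none of this is U.S. News data); it simply shows that re-weighting a single variable in a weighted-sum ranking can reorder the list:

```python
# Hypothetical indicator scores for three made-up colleges (0-100 scale).
schools = {
    "College A": {"peer_review": 90, "test_scores": 70, "graduation": 85},
    "College B": {"peer_review": 75, "test_scores": 95, "graduation": 80},
    "College C": {"peer_review": 80, "test_scores": 85, "graduation": 94},
}

def rank(weights):
    """Order schools by a weighted sum of their indicator scores."""
    def score(name):
        return sum(weights[k] * v for k, v in schools[name].items())
    return sorted(schools, key=score, reverse=True)

# Emphasize peer review: College C comes out on top.
print(rank({"peer_review": 0.50, "test_scores": 0.25, "graduation": 0.25}))
# Shift the emphasis to test scores: the ordering changes entirely.
print(rank({"peer_review": 0.25, "test_scores": 0.50, "graduation": 0.25}))
```

Same underlying data, two different "best colleges" lists — the only thing that moved was one weight.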
2) The tendency to conform to the rankings methodology creates incentives for colleges to focus disproportionate resources on strategies to manipulate data (also known as "gaming" the rankings) that can affect a college's position without necessarily changing the quality of the institution. Indeed, recent examples of colleges getting caught manipulating the data they report to U.S. News illustrate – without absolving the colleges of their responsibility to act with integrity – that the pressures created by the rankings are great.
NACAC understands that students and families are justified in wanting reliable, digestible information about college. For this reason, we recommend that rankings be more transparent, accessible and focused on college attributes rather than student characteristics. Reducing or eliminating the use of SAT/ACT scores in the rankings methodology would be a good place to start.
Our committee also recommended that U.S. News de-emphasize (or democratize), without entirely abandoning, its own ranking in favor of allowing the public to arrange the data in ways that serve their own interests. Doing so would yield a gold mine of new information, with endless opportunities to publicize lists created by students, families, educators and others.
About the author: David Hawkins is Director of Public Policy and Research at the National Association for College Admission Counseling.