The decline in importance of high school class standing in admissions decisions was confirmed in the National Association for College Admission Counseling's 2012 "State of College Admission" report. The report states that since 1993, "the factor showing the largest decline in importance is class rank." For fall 2011, just 19 percent of colleges rated it as considerably important, down from 42 percent in 1993.
This same research shows that SAT and ACT scores are growing in importance in admissions decisions. To better reflect this reality, the student selectivity indicator in our ranking model was adjusted so that the weight of high school class standing dropped from 40 percent to 25 percent, and the weight of SAT and ACT scores increased from 50 percent to 65 percent.
At the same time, the weight of student selectivity overall has declined from 15 percent to 12.5 percent to place less emphasis on inputs. This change reduced the effective weight of class rank in the overall rankings from 6 percent to 3.125 percent; increased the effective weight of SAT and ACT scores in the overall rankings from 7.5 percent to 8.125 percent; and slightly reduced the effective weight of acceptance rate in the overall rankings from 1.5 percent to 1.25 percent.
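The effective weights above follow from multiplying each sub-factor's weight within student selectivity by the category's overall weight in the model. A minimal sketch of that arithmetic (the 10 percent acceptance-rate sub-weight is inferred from the remaining share of the category):

```python
# Effective weight of a sub-factor = its weight within the category
# multiplied by the category's weight in the overall ranking model.
def effective_weight(sub_weight: float, category_weight: float) -> float:
    return sub_weight * category_weight

# Old model: student selectivity weighted 15 percent overall.
old_class_rank = effective_weight(0.40, 0.15)    # 6 percent
old_test_scores = effective_weight(0.50, 0.15)   # 7.5 percent
old_acceptance = effective_weight(0.10, 0.15)    # 1.5 percent

# New model: student selectivity weighted 12.5 percent overall.
new_class_rank = effective_weight(0.25, 0.125)   # 3.125 percent
new_test_scores = effective_weight(0.65, 0.125)  # 8.125 percent
new_acceptance = effective_weight(0.10, 0.125)   # 1.25 percent
```

Note that even though SAT and ACT scores lost some overall weight to the smaller category, their higher sub-weight still raised their effective weight from 7.5 percent to 8.125 percent.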
2. Graduation rate performance: We expanded the use of the graduation rate performance indicator to all the Best Colleges ranking categories; this meant that for the first time, it applied to Regional Universities and Regional Colleges, nearly 1,000 additional colleges.
Since 1997, this ranking factor had been used only in the National Universities and National Liberal Arts Colleges ranking categories. It now has a weight of 7.5 percent in the ranking model for all schools.
Incorporating this indicator for all schools improves the Best Colleges ranking methodology because it is an important outcome measure. It focuses on the difference between each school's predicted graduation rate (as calculated by U.S. News from key characteristics of the incoming class closely linked to college completion, such as SAT and ACT scores and Pell Grants) and its actual graduation rate. The indicator gives credit to schools whose actual graduation rates exceed expectations.
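As a rough illustration only, not U.S. News's actual model (whose regression inputs and scaling are not published here), the indicator can be thought of as the gap between a school's actual and predicted graduation rates; the function name and the simple subtraction are assumptions for illustration:

```python
# Illustrative sketch: U.S. News predicts each school's graduation rate
# from characteristics of its incoming class (e.g., SAT/ACT scores,
# Pell Grant share), then compares the prediction with the actual rate.
def graduation_rate_performance(actual_rate: float, predicted_rate: float) -> float:
    """Positive values mean the school graduated more students than its
    incoming-class profile predicted; negative values mean fewer."""
    return actual_rate - predicted_rate

# A school predicted to graduate 70 percent of its students that actually
# graduates 78 percent overperforms by 8 percentage points.
overperformance = graduation_rate_performance(0.78, 0.70)
```

Under this reading, the indicator rewards what a school adds beyond the profile of the students it admits, rather than the raw graduation rate alone.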
3. Other ranking factors: We changed other weights in the ranking model to further emphasize outcome measures.
The weight of the peer assessment score was reduced in the Regional Universities and Regional Colleges categories from 25 percent to 22.5 percent; the weight of graduation and retention rates was increased for National Universities and National Liberal Arts Colleges to 22.5 percent from 20 percent.
Because we added graduation rate performance as a ranking factor for Regional Universities and Regional Colleges, the combined weight of graduation and retention rates in those categories dropped from 25 percent to 22.5 percent.
As a result of the changes described above, many schools' ranks changed in the 2014 edition of the Best Colleges rankings compared with the 2013 edition.
If a school's ranking data changed in the 2014 edition compared with the 2013 edition, that change could have affected its new overall rank.
Even if a school's ranking data changed little from the previous edition, its rank may have risen if the new methodology placed more emphasis on a ranking factor on which the school scored relatively well.
Similarly, its rank may have fallen if the new methodology placed more emphasis on a factor on which the school was relatively weak.
Beyond the ranking methodology changes, we used clearer footnotes to indicate the schools that did not report fall 2012 SAT and ACT scores to U.S. News for all first-time, first-year, degree-seeking students with these scores – including athletes, international students, minority students, legacies, those admitted by special arrangement and those who started in the summer of 2012.
The footnotes also include schools that declined to tell us whether all students with test scores were represented.