Most colleges wouldn't fare well under the federal government’s proposed system to rate institutions on affordability, access and quality measures, according to a report released by the American Enterprise Institute.
The three measures – known as higher education’s “iron triangle” – are closely intertwined, according to experts in the field, and nearly impossible to change individually without affecting the other two, the report says.
“While it’s easy to hypothesize about which institutions and students would win and lose under the new ratings scheme, an informed debate requires an empirical look at how America’s colleges and universities currently fare on the three sides of the triangle,” the report says. “Is the iron triangle an iron law? Or are there colleges hitting high marks on all three sides?”
President Barack Obama’s proposal to rate colleges has been controversial, in part because critics say other ratings and ranking systems – including those of U.S. News – can push schools to misreport data and to make decisions based on what would benefit their standings rather than their students. While the purpose of the government ratings – to give families more information about how well colleges serve students – is not a new idea, the intent to tie federal financial aid to those ratings could have wide-ranging implications for institutions. Schools that perform well would be rewarded with larger Pell Grant funding and federal student loans, while those deemed less effective would see funds taken away.
Still, many questions remain, including whether community colleges would be rated, how the government would determine which schools are similar enough to compare (by mission or by kind) and exactly what data would be used to define each accountability measure. Whether a school can do well on all three measures at once is also still up for debate among college leaders. Schools with a large percentage of disadvantaged students, for example, tend to have lower graduation rates; enrolling fewer such students could improve measured student success but decrease access.
As higher education leaders from across the country gathered Thursday to debate the merits of the ratings system – formally named the Postsecondary Institutions Rating System (PIRS) – some have also raised concerns that such measures could disproportionately hurt some types of institutions while rewarding others.
Andrew Kelly, director of AEI’s Center on Higher Education Reform, says the debate over Obama’s proposed rating system boils down to two main concerns: measurement issues and deciding which incentives to attach to the ratings.
On the one hand, college leaders are concerned with how government officials plan to accurately measure some abstract concepts that determine the ratings. While measuring the percentage of enrolled students who receive Pell Grants may be a good measure of access, Kelly says, it wouldn't necessarily be fair to compare an urban commuter school that could draw from a larger pool of low-income students to a residential, rural campus that may simply have trouble attracting low-income students.
Additionally, there still isn't a consensus on what “value” – in terms of affordability – and “quality” actually mean.
In a letter to the Department of Education signed by 25 education organizations, Molly Corbett Broad, president of the American Council on Education, noted that at least five magazine publishers – including U.S. News – rate colleges by “value” yet come up with significantly different results. The department needs to define “value” and “quality” clearly and specifically before implementing a rating system, she wrote.
Additionally, Broad argued that a rating system of any kind could create “perverse incentives” that would skew both students’ and colleges’ behaviors. Relying too much on graduation rates, she said, could make colleges less willing to admit “students with marginal qualifications.”
“There are lots of ways in which that incentive structure could be problematic,” Kelly says. “If you measured campuses on these three dimensions and then gave them a single score, two schools with totally opposite problems could have the same score.”
Such a blanket accountability measure, he says, carries the potential for unintended consequences already seen in K-12 education, where No Child Left Behind led schools to focus on bringing middle-range students to proficiency while neglecting those at the very bottom and top of the performance spectrum.
“The analog in higher education would be, ‘I need to raise my completion rates, so I need to stop letting in students who are more difficult to educate,’ or ‘I should just lower my standards completely and become sort of a diploma mill,’” Kelly says. “Those are concerns and the second we start to tie aid eligibility to those measures, you present the possibility that campuses will respond in ways that we don’t necessarily want.”
Kelly, along with Awilda Rodriguez, a center research fellow and lead author of the report, analyzed information from more than 1,700 four-year colleges with data available on all three measures.
Overall, they found that although very few colleges would miss the mark on all three measures, just 19 of those 1,700 colleges – or about 1.1 percent – would perform well on all three.
The two used federal data for the three measures, similar to what the White House has suggested, although critics have said the data are imperfect and significantly limited under current law.
To measure access, for example, Kelly and Rodriguez used the percentage of students at an institution who receive Pell Grants. For student success, or quality, they used official six-year graduation rates for first-time, full-time students. The limitations of this measure include the fact that transfer students and those returning to college are not counted. In some cases, transfer students could be counted as dropouts, which could hurt an institution’s performance in the ratings (more on that later).
Finally, Kelly and Rodriguez settled on using an institution’s average net price – what students pay out-of-pocket after grants and scholarships are taken into account – to measure affordability.
Based on those measures, the 19 high-performing colleges identified in the report all had graduation rates above 50 percent, net prices below $10,000 and a student population with more than 25 percent receiving Pell Grants.
To put that in another light, those 19 institutions served just 3 percent of the undergraduates in the study’s sample.
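The report’s three-way screen amounts to a simple set of thresholds, and can be sketched in code. The thresholds below come from the article; the data structure and the sample records are hypothetical, not actual colleges from the AEI data.

```python
# Illustrative sketch of the report's three-threshold screen.
# Thresholds are from the article; the sample records are invented.

def hits_all_three(college):
    """Return True if a college clears the report's bar on all
    three sides of the 'iron triangle'."""
    return (college["grad_rate"] > 0.50        # quality: 6-year grad rate above 50%
            and college["net_price"] < 10_000  # affordability: net price below $10,000
            and college["pell_share"] > 0.25)  # access: more than 25% Pell recipients

# Two hypothetical schools, echoing the urban-commuter vs. rural-residential
# contrast Kelly draws above.
sample = [
    {"name": "Urban Commuter U",   "grad_rate": 0.62, "net_price": 9_200,  "pell_share": 0.41},
    {"name": "Rural Residential C", "grad_rate": 0.71, "net_price": 14_500, "pell_share": 0.18},
]

winners = [c["name"] for c in sample if hits_all_three(c)]
print(winners)  # only the first hypothetical school clears all three bars
```

As the sketch suggests, a school can excel on one or two sides of the triangle – the second record has the higher graduation rate – and still fall outside the report’s small high-performing group.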
Some of the institutions named are the University of Washington, San Diego State University, West Virginia University, the University of Texas at Dallas and the University of North Carolina at Greensboro.
Notably, Dewey University (located in Puerto Rico) had the lowest net price ($4,518), the second highest percentage of Pell enrollment (93 percent) and the highest six-year graduation rate at 84 percent.
But the majority of colleges fell somewhere else along the spectrum, and it is unclear how they would fare under the rating system. Many institutions – 20 percent – fell into a category with a high percentage of Pell recipients and relatively low net prices but below-average graduation rates, which the report says would leave them “in a precarious position” if the accountability measures took effect.
Another third of the institutions were simply average across the board: average graduation rates, with an average percentage of Pell recipients and middle-of-the-road net prices.
“Arguably, the President’s proposed policies would have the greatest impact here,” the report says. “With a lackluster average graduation rate of 41 percent, a performance-based funding model could lead institutions at the lower end of this measure … to receive less grant money, generating higher net prices and causing them to tumble even further.”
As the saying goes, the devil is in the details, and blanket performance measures could have drastically different results from school to school. Expensive schools with high graduation rates that are penalized for not enrolling enough Pell recipients might be even less incentivized to do so, Kelly and Rodriguez argue.
“… through the potential reactions of colleges to these new incentives, it is worth keeping in mind a basic pattern in higher education: It is generally easier for a college to change who they admit than it is to change the success rates of the students already there,” Kelly and Rodriguez write.
Corrected on Feb. 7, 2014:
This article misidentified the lead author of the AEI report. Awilda Rodriguez is the lead author.