The Measure of Learning
Can you test what colleges teach? Academics are appalled that the government wants to try.
Common knowledge. One of the major hurdles for measuring value added is agreeing on what students should learn.
Should a philosophy major be proficient in calculus? Should a physics major be able to conjugate French verbs? A study of hundreds of students at the University of Washington suggests that measuring success within disciplines might be the way forward instead. "We found that learning outcomes were highly dependent on a student's major," says Catharine Beyer, who has compiled the results of that research into a book to be published this spring. "A chemistry student will learn something very different about writing than a philosophy major. That's why standardized tests across institutions are too simplistic to determine what learning takes place."
Others contend that a myopic focus on testing is simply the wrong way to think about learning. Peter Ewell, vice president at the National Center for Higher Education Management Systems, says that alternative assessments, like portfolios of student work or senior-year capstone projects, can be effective yardsticks for gauging progress. Ball State University in Muncie, Ind., for instance, requires that all students pass a writing test in order to graduate; in two hours, students must produce a three-page expository essay. In several majors, including architecture and education, students must maintain an electronic portfolio of their work.
In the next five years, Ball State will also give all students the opportunity to participate in an "immersive learning project," in which they solve a real-world problem. One recent class, for example, produced a DVD about the American legal system for the local Hispanic communities. "The limitation of the Spellings commission is that they only think about universities in terms of the classroom," says Jo Ann Gora, Ball State's president. "We see our educational mission in much broader terms, including community involvement that is not easy to quantify with a test."
To a large degree, schools already are held accountable for their performance. It happens through the accreditation process, in which an independent panel reviews the operation of an institution and gives its official blessing. When the process started, there were fewer colleges and far fewer federal dollars at stake. But now, with federal student loans contingent on a school's credentials, a loss of accreditation could put a college out of business. Thus, accreditors are reluctant to fail schools, preferring instead to issue warnings and encourage improvement. Accreditors meeting in Washington recently also confessed that some were reluctant to shutter schools that are "failing in the numerical sense" because those institutions were serving students who otherwise might not have options.
Freeze. But if the feds have their way, that sort of attitude may change. The Department of Education recently made an example out of the American Academy for Liberal Education, a minor accrediting agency, by freezing its authority for six months for, among other things, failing to clearly measure student achievement. It was an indication of how quickly the government is moving to implement the recommendations of the commission. "We're not just going to sit around and study this," says Cheryl Oldham, the commission's executive director. "We're going to begin to correct the problems."