The Collegiate Learning Assessment (CLA) is a standardized testing initiative at American colleges designed to measure a college's contribution to student learning: in effect, what students gained intellectually as a result of studying at a given institution. In many ways it looks like a promising way to evaluate colleges. Students go to college to learn, after all. Isn't getting some objective measure of that learning important?
Perhaps, but the CLA may not be a very good method for evaluating education. The test, which is administered to both freshmen and seniors in order to measure learning over a four-year period, has a lot of problems. According to a piece by Clifford Adelman of the Institute for Higher Education Policy (formerly of the Department of Education):
The problems lie in the test-taking sample, the ways in which the test is scored, the way scores are reported, and how these numbers are used by institutions of higher education.
Sample: Let’s stick with Texas/Austin. They have 7,700 freshmen and 13,600 seniors, and when equal numbers of paid volunteers step forward (usually 100 or 200 out of both classes) not only do we have the problem of paid volunteer test-takers (which common sense, let alone 40 years of literature, will tell you don’t produce credible results), but also representation. How does Texas/Austin get almost twice as many seniors as freshmen? Some of the increase is that of transfers-in, but a lot of it consists of students who are in their 5th or 6th year of study and are still called “seniors.” The purveyors of the test will claim that they weight every student to represent an appropriate piece of the undergraduate body, but it would take a lot of statistical gymnastics to be convincing at any place other than a maximum security prison.
The scoring of the essay section is also troublesome. The scoring system is opaque, so it offers consumers little protection; it does not work well for students writing in languages other than English; and a section graded by a panel in one location may not be comparable to the same section graded by a panel somewhere else.
It is hard to say what the right solution is. Adelman favors an alternative test built around “objectives that every student should complete before earning a degree in a given subject at any college.”
That might be an improvement. For now, though, it’s important to keep the limitations of the CLA in mind. It produces information about student learning, sure, but is it a valid measure?