Apparently under the impression that the existing tests students take for admission to universities are not enough to fully predict their success, American colleges are now trying to develop measures of “noncognitive” ability.

According to a piece by Eric Hoover in the Chronicle of Higher Education:

Over the last decade, a handful of colleges have designed “noncognitive” assessments to measure attributes—like leadership and the ability to meet goals—that content-based tests do not. Succeeding in college often requires initiative and persistence, or what some researchers call “grit.” Noncognitive measures are an attempt to gauge such qualities. If the SAT asks what a student has learned, these assessments try to get at how she learned it.

But it’s pretty hard to design something to measure such ambiguous characteristics. As Hoover explains:

Although noncognitive assessments are supposed to [measure qualities not captured by standardized tests], there’s no consensus on how best to get at students’ intangible qualities. With no gold standard, researchers are dabbling in an array of approaches. The College Board has tested a standardized way to measure 12 qualities, such as artistic and cultural appreciation, and integrity. The Educational Testing Service has created the Personal Potential Index, an online system allowing evaluators to rate applicants in six categories, including communication skills and teamwork. A means of standardizing letters of recommendation, the index has caught on at some graduate schools and may have a future in undergraduate admissions.

Is this necessary? It’s unclear why leadership and the ability to meet goals really need a special test. Isn’t that pretty well assessed by, well, leadership positions held in high school and grades attained?

One higher-education administrator explained that noncognitive assessments were useful because “This gets us out of the habit of talking about students as a 3.8, 29 ACT. If nothing else, this allows us to think of students as multidimensional.” But this is misleading.

In fact, before the introduction of the SAT, colleges routinely admitted students based on noncognitive factors. It’s not that hard; it’s just a matter of how much effort colleges want to put into the project.

It’s not that noncognitive tests would allow colleges to “think of students as multidimensional”; it’s that they offer the promise of an off-the-shelf bubble test that can do this for them.

If colleges are really interested in the whole student, they need to read applications very carefully and interview students. That’s the way to learn real information about applicants. There’s no cheap way around this.

Daniel Luzer

Daniel Luzer is the news editor at Governing Magazine and former web editor of the Washington Monthly. Find him on Twitter: @Daniel_Luzer