Like many other two-year college students, Monica Dekany has taken the long route to a degree. After graduating from high school in Glenelg, Maryland, in 1990, she enrolled in a local community college. Her grades were good there, but her direction was lacking. She dropped out, took a job at a fast-food restaurant, moved across the country, and then tried again at Utah State University in 1992. Again, she was able to pass her courses—with As, sometimes—but she still wasn’t sure of what she wanted to study, and eventually she stopped going to school. It wasn’t until many years, several jobs, and one child later that she decided to give college another try. In 2009, she enrolled at Golden West College in Huntington Beach, California, a two-year institution that, like most community colleges, accepts all who apply. Dekany was disappointed that most of her credits from the two other colleges wouldn’t transfer, but no matter: she was motivated enough to start building credits anew.

All she had to do, the registrars told her after she paid her fee, was go down a hallway, pick a cubicle, sidle up to a computer terminal, and take a short test. The “Accuplacer,” as the test is called, was no big deal, they said—nothing she could have studied for. It was just so they could see where she was. Dekany took one test in math and another in English, and was “floored,” as she put it, to learn that she had scored at a level that would consign her to remedial classes, reviews of fundamental material for which she would receive no college credit. “It caught me totally off guard,” Dekany says. The other colleges had let her enroll directly in college-level English and literature classes, and as her transcripts clearly showed, she had passed them. But Golden West told her the test results were all that mattered.

Dekany dutifully enrolled in, and paid for, the remedial—or what colleges euphemistically call “developmental”—courses. She knew everything in the English course already; her daughter’s seventh-grade English class was more advanced. Her math course was similarly low level, but it was taught by a sympathetic professor who helped save her from further remedial work. The college had mandated that Dekany take a second remedial math class before being allowed to take Math 100 for college credit, but her professor thought the requirement made no sense—she was clearly ready for college work. So he arranged for her to take Math 100 at Cal State, Long Beach, where he happened to also teach, and there she got an A.

Dekany went on to excel in college. She’s a member of the Alpha Gamma Sigma honors society, a reporter for the Golden West college newspaper, and the school’s homecoming queen. She’s just a semester away from getting her associate’s degree in social science and on her way to a bachelor’s in counseling. But there’s no getting back what the Accuplacer took from her. Remediation cost her several thousand dollars and set her education and her career back by a year. And if not for her math professor, it would have been even worse.

Dekany barely managed to dodge a fate that is very common among American college students. About 40 percent of them—a total of almost seven million people—go to community colleges, and millions more attend nonselective four-year universities. The vast majority of those institutions require students to take placement tests like the Accuplacer, and more than half the students who take those tests end up in remediation. Unlike Dekany, most students who are assigned to remediation don’t make it through. Some never even show up for class. Others flunk out. Still more get discouraged and quit.

To be sure, open-access colleges need to assess the knowledge and abilities of incoming students. Dysfunctional public high schools routinely grant diplomas to students who lack basic math and reading skills. As a result, many new college students need help in order to grapple with college-level work. The problem is that colleges have chosen to deal with that challenge by diverting huge numbers of students into a parallel remedial education system with a dismal track record of helping students ultimately graduate from college. Compounding the problem, most colleges place students into the remediation track using nothing more than the results of a short, inexpensive, one-shot multiple-choice test of questionable accuracy and worth.

Most Americans think of the SAT as the ultimate high-stakes college admissions test, but the Accuplacer has more real claim to the title. (As it happens, the same organization, the College Board, is behind both exams.) When students apply to selective colleges, they’re evaluated based on high school transcripts, extracurricular pursuits, teacher recommendations, and other factors alongside their SAT scores. In open-admissions colleges, placement tests typically trump everything else. If you bomb the SAT, the worst thing that can happen is you can’t go to the college of your choice. If you bomb the Accuplacer, you effectively can’t go to college at all.

The remedial placement process is ground zero for college non-completion in America. If the nation is going to make any headway in helping more students graduate from college, it will have to completely overhaul the way students enrolling in nonselective colleges are tested for college readiness, and make equally fundamental changes in how colleges use that information to help students earn degrees.

Placement testing has long been a dilemma for community colleges, and over the past few decades this crucial gateway to credit-bearing work has swung both open and closed. In the 1970s, responding to students who argued that they essentially had a right to fail, many institutions dropped mandatory placement testing and course prerequisites. The idea, as the researchers Katherine L. Hughes and Judith Scott-Clayton of Columbia University’s Teachers College explain it, was that students were old enough to decide for themselves whether they were ready for college work, and that doing so would only make them more responsible. Maybe they were, and maybe it did, but the new leniency also caused students to fail and drop out in troubling numbers.

Criticism of the relaxed policies came to a head in 1983 with the publication of A Nation at Risk, an alarming federal report that found that high school graduates were not nearly as prepared for postsecondary life as they needed to be. State legislators, who had already started to complain, demanded that standards be set and readiness measured. New Jersey was the first state to require a placement test, an assessment developed by the Educational Testing Service. Officials assumed that 10 to 20 percent of incoming students would fail the test. Instead, according to Hunter R. Boylan, director of the National Center for Developmental Education, 40 percent did. By the late 1990s, any college that did not require a placement test was a rare institution indeed.

Most colleges use one of two assessments: the Accuplacer, which is used by 62 percent of community colleges, and the COMPASS, which is administered by ACT and used by 46 percent of community colleges. (Some institutions use homegrown tests or other measures.) Both are so-called adaptive tests, which means questions are chosen for individual test takers based on how well they do on the previous question. If the student does well, the questions get harder; if he doesn’t, they get easier or stay at the same level. Like the COMPASS, the Accuplacer tests sentence skills, reading comprehension, basic math, and algebra. It also assesses a writing sample.

Students are told, reassuringly, that there is no such thing as failing the Accuplacer or the COMPASS. But there is: students who score below a certain threshold, or “cut score,” fail to qualify for credit-bearing work. In most schools, that means that before they can enroll in for-credit, college-level courses, they have to take and pass remedial classes. At other schools, students who score below the cut can, if they insist, go straight into for-credit courses, though guidance counselors often caution them against it, or misinform them of their options.

While most students who take the SAT know it’s a life-determining experience, the high-stakes nature of the Accuplacer and similar tests comes as distressing news to most of the people who take them. A 2010 study by researchers at Northwestern University surveyed 2,000 students who took placement tests and found that 75 percent of them did not understand the significance of the tests—and two-thirds didn’t realize that remedial classes would earn them no credit. Andrea Venezia, a researcher at the policy organization WestEd, and her colleagues conducted a study of placement policies at California colleges and got similar results: the majority of test takers were unaware that their performance would determine what classes they would be able to take and whether they would receive credit. In a typical comment, one student told the researchers, “The woman at the test center said it doesn’t matter how you place. It’s just to see where you are.” Another misguided student had the placement test confused with a career aptitude assessment. “I thought it was one of those tests you take just to see what kind of field they were going to recommend,” she said.

Because they don’t know what’s coming, most students don’t prepare for the tests, even though studies have shown that a review course can raise scores enough to place students at a higher remedial level or keep them out of remediation altogether. Whereas a student taking the SAT might spend several weeks in a Kaplan or Princeton Review course, doing vocabulary drills and working with sample math problems, the typical community college student takes the placement test stone cold. There are books available, but prep courses are not nearly as numerous or institutionalized as they are for the SAT. (An Amazon search turns up thirty-five results for Accuplacer prep books, compared with 6,158 for SAT guides.) The market for Accuplacer prep is no doubt less attractive: many Accuplacer takers lack the money and the time. Fewer than half of the colleges that responded to Venezia’s survey said they provide any practice.

Students who take the SAT are often encouraged to retake it multiple times to maximize the chance of a high score. Retakes are possible with placement tests as well, but the policies for doing so depend on the institution and, again, they can work against college success. Students at Lane Community College in Oregon, for example, can’t take the test again for three months—essentially a whole semester—which is a potential deal killer for a student who is older, employed, and in a hurry. At one community college in California cited by Venezia, the wait to retake the placement test was three years, a delay tantamount to no second chance at all. “They basically can’t go,” she says.

Placement tests are administered inconsistently, and students are given too little preparation. Even so, the use of these tests might be easier to defend if they were accurate predictors of whether students will be successful in college work. In fact, little research has been conducted on this crucial question. The College Board points to an independent study it commissioned that found a moderate to strong correlation between Accuplacer test scores and subsequent course performance. ACT did a study of the COMPASS that found essentially the same correlation. But Thomas Bailey, director of the Community College Research Center at Teachers College, an authoritative research organization funded by the federal government’s Institute for Education Sciences, says that the placement tests have, at best, “only a weak relationship with educational performance.”

Indeed, research by Bailey and others suggests that Monica Dekany’s experience is not unusual, and that tests like the Accuplacer and the COMPASS routinely underestimate the ability of large numbers of students to do credit-earning college work. In 2010, Bailey and colleagues Dong Wook Jeong and Sung-Woo Cho led a study that looked at tens of thousands of community college students who scored low on placement tests and other measures but ignored the advice or instruction to take remedial classes and instead enrolled directly in a for-credit course. A full 71 percent passed the for-credit course. That’s not much lower than the 77 percent pass rate for all students who took those for-credit courses. And it’s only slightly lower than the pass rate for students who first took and completed remedial courses. As the researchers note, however, many who start in remedial classes either drop out or fail before they ever take a credit-bearing course. Factor that in, and only about 27 percent of those who agreed to take remedial courses ultimately passed for-credit courses, as opposed to the 72 percent who blew off remediation. “It appears,” the researchers concluded, “that the students in this sample who ignored the advice of their counselors and proceeded directly to college-level courses made wise decisions.” Michael W. Kirst, a former professor of education at Stanford University and a member of the California state board of education, said the findings “suggest strongly that student access may be unfairly denied and that many students capable of success are not given the chance to try.”

This may explain what happened to Monica Dekany. In the early 1990s she had learned enough math and English to pass college courses. Nearly two decades later, she had forgotten enough of that material to fail the Accuplacer. But who actually remembers much of what they learned in college, let alone high school?

Here’s a sample question from the elementary algebra portion of the Accuplacer:

What is the value of 2x² + 3xy – 4y² when x = 2 and y = -4?

A) -80    B) 80    C) -32    D) 32
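
Substitute the values and the expression works out to 2(2)² + 3(2)(-4) - 4(-4)² = 8 - 24 - 64, or -80.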

It’s a fairly simple question. (The answer is A.) But many middle-aged adults with college degrees would probably flub it—especially with no opportunity to prepare. Indeed, when Boylan gave the math Accuplacer to forty-two community college faculty members and administrators attending a recent conference, everybody but the math teachers scored poorly. “Almost all of them would have ended up in remediation,” he said.

As measures of basic cognitive skills, tests like the COMPASS and the Accuplacer aren’t bad, say experts like Boylan. But they obviously miss many students who are quite capable of doing college-level work. In part that’s because community colleges tend to use these tests as the main or only determinant of who gets to take credit-bearing courses. They could avoid that by, at the very least, doing what most four-year colleges do, and what the test companies recommend they do: looking at multiple measures of a student’s potential—placement scores, high school grades, recommendations, the fact that a student has already passed college courses. But with state budgets tight and community college classes already oversubscribed, the institutional incentives are to screen students out as quickly and cheaply as possible.

Still, it’s perfectly possible to devise a cost-effective assessment system that would do a much better job of getting the maximum number of students into regular classes. For instance, researchers have found factors beyond cognitive skills that predict college success as well as or better than cognitive measures do: attributes like ambition, persistence, willingness to seek help, and a desire to connect with instructors. Some researchers say that the poorer a student’s cognitive skills, the more important these so-called “affective” skills become. Yet only 7 percent of colleges, according to a 2007 survey, collect both affective and cognitive data, even though many relatively cheap measurement tools exist. They include something called the Learning and Study Strategies Inventory (LASSI), an eighty-item assessment that measures things like study skills, motivation, and self-discipline. The combination of an assessment like LASSI, which costs $3.50, and a personal interview with an adviser would produce a far more accurate picture of the applicant, Boylan says, helping schools limit remediation to those who truly need it. If someone falls just below a cut score on the Accuplacer but scores well on the affective assessment, he might best be assigned to a regular course. If the affective assessment and interviews reveal weaknesses, he would likely best be served by remediation.

Even more helpful would be tests that could give more specific and precise information than the Accuplacer or the COMPASS can provide about what students do and do not know. Both testing companies say they are trying to develop more predictive measurements, but according to David T. Conley of the University of Oregon’s Center for Educational Policy Research, placement tests now provide “very little diagnostic information about the specific academic deficiencies that students may have.” Has the student simply forgotten the material, needing only a memory refresher? Or did he never learn the stuff in the first place? The test can’t tell. It also can’t tell if the student needs “a small amount of focused review or a re-teaching of the material from scratch.” In other words, while a test may identify deficiencies, Conley says, it is not particularly useful in helping to fix them.

The clear limitations of placement tests and the abysmal track record of the remediation system have led a growing number of advocates and public officials to call for wholesale reform. Stan Jones, the president of the not-for-profit Complete College America and the former Indiana commissioner of higher education, says he can’t even stand to talk about placement because “the whole system is so awful.” Boylan is just as blunt: “The way these tests are used is awesomely bad.”

The expert consensus is that the problem with the placement system—as with the entire business of remedial education—needs far more than a technical solution. Conley, for one, thinks the very notion of a student being judged as either remedial or college ready presents “a false dichotomy that is in need of fundamental rethinking.” The assumption, he says, should be that all students are college ready and remedial to varying degrees. Thus, he says, a wider range of data should determine their course of study, and readiness should be assessed “as a matter of degrees, not as an absolute.”

Boylan’s ideal system, which would likewise aim for placing the highest possible number of students in regular courses, would use cognitive and affective tests, along with counseling and personal interviews—triangulating, essentially. A student who scored just under the cut might be placed in regular courses and succeed with some tutoring and other support. Students at the low end might need several layers of remediation, but the courses would be targeted to particular weaknesses revealed by placement tests that would be far more diagnostic than the ones used now.

There are promising examples of these ideas being put into practice. One can be found at Austin Peay State University, a public four-year institution in Tennessee that admits 90 percent of the students who apply. For years, roughly half of all Austin Peay students were put in remediation, with typically dismal results. In 2007, the university took the bold step of eliminating remediation entirely. Instead of a placement test, underprepared students were given a diagnostic test and enrolled in college-level courses, with the requirement that they spend two hours in a learning laboratory each week, where they received individual tutoring and personalized computer-based instruction tailored to the results of the diagnostic test.

The results were impressive. Before the switch, only 53 percent of students passed developmental math, and only 30 percent completed a for-credit math class within two years. After the elimination of remediation, the percentage of underprepared students completing college-level math more than doubled, to 67 percent. English results were also striking—the percentage of students passing college English increased from 54 percent to 76 percent. Austin Peay saved on the classroom space it had been devoting to remediation, and students ended up saving on tuition because they weren’t paying for remedial courses. Everybody won.

If reforms like these were implemented in all of America’s open-admissions colleges, millions of students who have been swept into ineffective remedial classes by placement tests might be able to move forward with their lives. And these are not just any students. They are young people who worked their way out of bad public high schools from which most students drop out. Or they are adults like Monica Dekany, who, despite false starts, setbacks, and the demands of work and family, are taking the plunge back into college, making time in the mornings and in the evenings when the kids are asleep. Getting a college education is seldom easy for these folks. But the least society can do is not make the task harder than it needs to be.

Susan Headden, a Pulitzer Prize-winning journalist, is a senior writer/editor at Education Sector, a Washington, D.C., think tank.