My friend Rob took as few classes as possible at Stanford. He had top-notch SAT scores and high-school grades, and he was smart enough to graduate even if he was less at ease in the library than party-hopping in a caveman suit. We took one class together and were assigned Gulliver’s Travels—a book I had read before and loved. Our prestigious professor, however, drained the life out of it by lecturing monotonously about his pet theory that Gulliver’s voyage was a metaphor for birth. Good students sat with mouths agape; bad students slept. When we broke into small discussion sections, everyone was so stultified that the understudies running the classes simply rehashed the original monologue. To Rob, it was all “cool”; he was on his way to a degree.
Rob may not have been a stereotypically great student. But he was outstanding from the viewpoint of the U.S. News and World Report college rankings, the most important arbiter of status in higher education. He went to a top-rated school, and he didn’t hurt its score because the U.S. News rankings don’t measure how much students learn; they don’t measure whether students spend their evenings talking about Jonathan Swift or playing beer pong; and they don’t measure whether students, like Rob, are just there to get through.
A single magazine’s idiosyncratic ranking system may seem peripheral to the larger issues of higher education, but this particular one matters a lot. The U.S. News rankings are read by alumni, administrators, trustees, applicants, and almost everyone interested in higher education. The New York Times aptly described them as “a huge annual event,” and they dominate what is far and away the best-selling college guide available. Consequently, the rankings do have a kind of Heisenberg effect, changing the very things they measure and, in certain ways, changing the entire shape of higher education. The problem isn’t that the rankings put schools in the wrong order: A better ranking system might put Stanford 1st; it might put it 35th. I can’t presume to know where it, or any other school, would rank. What I do know, however, is that a better ranking system, combined with more substantive reporting, would push Stanford to become an even better school–a place where students like Rob would have to focus more on learning than sliding by, and a place with fewer teachers putting their students to sleep. Unfortunately, the U.S. News rankings instead push schools to improve in tangential ways and fuel the increasingly prominent view that colleges are merely places in which to earn credentials.
The first U.S. News rankings appeared in 1983. The magazine grouped colleges into categories like “national universities” and “regional liberal arts colleges” and sent a survey asking for the opinions of university presidents on the five best schools in their category. There was nothing scientific or subtle about the survey and most people just shrugged it off. Donald Kennedy, president of then-first-ranked Stanford, said, “It’s a beauty contest, not a serious analysis of quality.”
That issue still sold remarkably well and in 1985 and 1987 U.S. News, under new owner Mortimer Zuckerman, again published rankings based solely on university presidents’ perceptions. Then in 1988, U.S. News decided to take the rankings more seriously and to try to develop a franchise much like People’s “50 Most Beautiful People” or the “Forbes 400.” So Zuckerman placed Mel Elfin, an influential Washington journalist recently lured away from Newsweek, in charge of developing a more respectable system. Elfin found his sidekick a year later in Robert Morse, an intelligent, soft-spoken man who, if he were an actor cast as an introverted accountant, would be criticized for overplaying his role. The team got to work: Morse crunching the numbers, Elfin packaging the rankings with stories on higher education and creating the institution christened “America’s Best Colleges.”
Morse, Elfin, and, later, Managing Editor Alvin Sanoff, rapidly created a franchise: Every September since 1988, the magazine has produced an eagerly anticipated list that precisely orders every college in the country. According to last September’s rankings, for example, St. Mary’s College of California is the eighth best western regional college, just slightly better than Mt. St. Mary’s College, California but well ahead of Our Lady of the Lake University in Texas–a school ranked down in the second of three tiers that the magazine groups institutions into once the top 50 schools in a category have been nailed down. “America’s Best Colleges” sells about 40 percent more than U.S. News’ standard weekly issues and the magazine also produces a hot-selling accompanying book. Last year, eight million people visited U.S. News’ Website when it posted the rankings.
The rankings are opaque enough that no one outside the magazine can figure out exactly how they work, yet clear enough to imply legitimacy. For the past 12 years the main ranking categories have remained fairly constant: student selectivity, academic reputation in the eyes of other university presidents and admissions deans, student retention and graduation rates, faculty quality as rated by pay and Ph.D.s, financial resources, and alumni giving. A category introduced in 1996 measures a university’s “value added,” assessed by the difference between actual and expected graduation rates (if you let in highly qualified students, your expected graduation rate is high). In short, the perfect school is rich, hard to get into, harder to flunk out of, and has an impressive name.
Beyond the rough guidelines, each category is then broken up further. Under the rules of the 2000 survey, “student selectivity” is based on some unexplained combination of the SAT scores of the 25th and 75th percentile of the entering freshman class, their class ranks, the percentage of applicants accepted, and “yield,” the percentage of admitted applicants who enroll. These numbers are hard to parse, but it’s difficult to accuse U.S. News’ ranking system of being a simple beauty contest; it’s now a complicated beauty contest and its scientific air contributes greatly to the attention people pay it. As Groucho Marx said: “Integrity is everything. If you can fake integrity, you’ve got it made.”
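For readers who want to see the mechanics, here is a minimal sketch of how a weighted composite of the kind described above could be computed. The category names mirror the ones the magazine discloses, but the weights and the sample scores are invented, since U.S. News does not publish the exact formula.

```python
# A hypothetical weighted-sum scoring scheme, for illustration only.
# U.S. News does not publish its exact weights, so these numbers are invented;
# the category names follow the ones described in the text.
WEIGHTS = {
    "academic_reputation": 0.25,
    "student_selectivity": 0.15,
    "faculty_quality": 0.20,
    "retention_and_graduation": 0.20,
    "financial_resources": 0.10,
    "alumni_giving": 0.05,
    "value_added": 0.05,  # the "value added" measure introduced in 1996
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights should sum to 1

def composite_score(category_scores: dict) -> float:
    """Combine per-category scores (each on a 0-100 scale) into one number."""
    return sum(WEIGHTS[name] * category_scores[name] for name in WEIGHTS)

# Two made-up schools with made-up category scores.
school_a = dict(zip(WEIGHTS, [95, 90, 88, 92, 85, 70, 60]))
school_b = dict(zip(WEIGHTS, [80, 95, 90, 85, 99, 40, 75]))
print(round(composite_score(school_a), 1), round(composite_score(school_b), 1))
```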
Of course, there is no one definitive way to judge colleges, and U.S. News does consistently encourage students to take the rankings with a grain of salt. Caltech, for example, was a surprise 1999 choice as the top-rated national university, but even its name suggests that it caters to technophiles, not poets, marines, or aspiring history professors. It also didn’t seem like a great place to the seven African-Americans accepted last year: none of them chose to attend. U.S. News understands this and sprinkles lines throughout the issue warning students of the importance of “researching the intangibles.” But it also understands that, in a status-conscious era, rankings sell.
In 1997, U.S. News commissioned the National Opinion Research Center (NORC) to write a critique of its ranking methodology. This internal document is probably the most detailed examination of the U.S. News rankings that has been done.
NORC’s first major critique was that there is little justification for the precise weighting scheme that U.S. News uses: “The principal weakness of the current approach is that the weights used to combine various measures into an overall rating lack any defensible empirical or theoretical basis.”
The report’s second critique was that U.S. News had not done exemplary statistical work and had not determined, for example, how individual variables are correlated. “Apart from the weights, however, we were disturbed by how little was known about the statistical properties of the measures or how knowledge of these properties might be used in creating the measures.”
The report also made specific criticisms of the way that U.S. News interpreted graduation rates, yield, and alumni giving and suggested that the rankings should be tabulated as three-year averages: “to smooth out short-term fluctuations, random errors in reporting, or other factors that might cause unbelievably large movements in rankings for particular institutions.”
The report also recommended that U.S. News focus more on education: “There are two areas where some sort of measure should be added. These areas are student experience and curriculum.”
So, the magazine trudges forward, annually tweaking its algorithm. Last year, for example, criticism from rural schools persuaded the magazine to include a cost-of-living adjustment in the calculation of faculty salaries so as not to make the faculty seem cut-rate at schools where groceries and apartments cost less. This and other small changes have made the rankings better and the editors have often been praised for their willingness to listen to criticism from universities, even as they are criticized for closing decisions about the rankings off from the rest of the U.S. News staff and creating a private fiefdom isolated from the rest of the magazine. According to one former senior reporter who worked on the rankings in the early ’90s: “We were roped around the neck to get us to write the serious journalistic stories in the issue, but none of us had a clue how the rankings worked.” According to another former staff writer who contributed to the “Best Colleges” issue: “The rankings are completely ridiculous. But they totally pay your salary.”
But further from home, the rankings are taken much more seriously. According to research done by James Monks and Ronald Ehrenberg for the National Bureau of Economic Research, a one-place drop in a school’s ranking one year increases its admittance rate the next year by 0.4 percentage points. In other words, if a school that needs to admit 15 percent of its applicants to fill its class moves from 5th place to 10th place, it will need to admit 17 percent the next year. Monks and Ehrenberg also found that the rankings have a statistically significant impact on both yield and SAT scores of incoming freshmen. Furthermore, foundations and bond-rating organizations like Moody’s use the rankings when evaluating institutions.
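The arithmetic behind that example is simple; the sketch below just applies Monks and Ehrenberg’s 0.4-percentage-point-per-place estimate to the hypothetical school described above.

```python
# Monks and Ehrenberg's estimate: each one-place drop in the rankings forces a
# school to admit roughly 0.4 additional percentage points of its applicant pool.
EFFECT_PER_PLACE = 0.4  # percentage points

def admit_rate_after_drop(current_rate: float, places_dropped: int) -> float:
    return current_rate + EFFECT_PER_PLACE * places_dropped

# The hypothetical school in the text: admits 15 percent and slides from 5th to 10th.
print(admit_rate_after_drop(15.0, 10 - 5))  # -> 17.0
```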
Ranking placement also has a demonstrably larger impact on institutions outside the highest grouping. Schools in the top subset may bounce around by a couple of places at most each year, but they tend to have all the applicants they can handle anyway. Lower down, one tiny change in a school’s data or in U.S. News’ methodology can bump it from the second “tier,” where its score is listed as identical to that of the 51st-best school in the country, to the third tier, where it is identical to that of the 176th.
Not surprisingly, there is evidence that schools alter policies for the sake of rankings. This isn’t automatically bad; most of what U.S. News encourages is pretty good. But because U.S. News doesn’t measure the most important thing on campus–actual learning–it is pushing colleges to prioritize in ways that are not necessarily the best. In a sense, the rankings are like a professor who ignores the content of her students’ papers and instead bases her grades only on spelling and punctuation.
Since U.S. News began factoring in yield, the percentage of admitted students who choose to attend a given school, the number of colleges with “early decision” programs has shot up. Normally, students apply to college in the early winter of their senior years and then hold their breath until April when the verdicts come back. Under early decision, applicants apply to one school at the beginning of their senior year and promise to attend if admitted; the verdicts come back in December or January. Thus, if a college accepts half of its class under early decision, as many now do, it is guaranteed a much higher yield rate because all early decision students are required to attend.
Early decision programs have their advantages, but they also make it much more difficult for students to compare financial aid offerings, and thus give an advantage to students who don’t need to worry about financial aid. In addition, early decision has essentially pushed the application cycle forward a year. In 1980, 19 percent of all the students enrolled in the Princeton Review’s SAT training course entered before January 1st of their junior year. By this year, that number had climbed to 52 percent. Yield makes up only a small percentage of an overall U.S. News score, and there were a number of factors pushing the fad among elite institutions–including the fact that it makes the admissions office’s job easier by spreading work out–but the U.S. News rankings were nevertheless, as confirmed by multiple university officials, a significant factor.
The introduction of U.S. News’ category of “percentage of alumni who give” also significantly affected fundraising. When I was at Stanford, student groups were paid $25 an hour to solicit donations from alumni and, on the one shift I worked, were specifically told to mention that any donation would increase our ranking. Professor Ronald Ehrenberg of Cornell University described his university’s two-pronged approach to improving its score in this category: increase the number of alumni who give and decrease the number of living alumni. The first goal was achieved by aggressively pursuing small donations, swelling the count of contributing alumni. The second goal was achieved by removing from the database the names of people who attended Cornell at one point but are unlikely to donate (for example, people who left the school before earning degrees). At one West Coast college, I was told, alumni who have not given money in five years have been reclassified as dead.
Administrators will deny until their ears start smoking that rankings influence their actions. And in fact few administrators actually sit down with the book and decide that they are going to change specific policies. What happens is that the rankings grease the skids for changes in specific directions, and decisions are gradually made that move the school in those directions. A good example comes from Wesleyan University, where Vice President for University Relations Barbara-Jan Wilson described to me a successful campaign to increase the number of teachers hired. When she went to the trustees, she argued, in part, that an increase would “be a good thing in the national media,” which, she said, meant U.S. News.
The rankings are one of the main ways that alumni and trustees keep track of their school’s progress, and they are an indicator of the status society attaches to their degrees. Would the trustees have accepted Wilson’s proposal if the rankings didn’t exist? It’s hard to know. What is clear, however, is that schools seem to take the rankings so seriously that it would be surprising if they weren’t having a large effect.
At Whitman College, for example, the president’s fax cover sheets proclaim that the school is “the only Northwest college in U.S. News’ top tier among national liberal arts colleges.” The Monthly recently received a letter from Connecticut College beginning “[We] would very much like to establish a recruiting relationship with your organization. We are ranked among the top 25 national liberal arts colleges by U.S. News and World Report.” It is indeed ranked exactly 25th, tied with four other schools.
There’s a certain irony to the way that universities trip over themselves to improve their rankings. Not only are many of the best minds at colleges across the country preoccupied with what is essentially a silly enterprise, but the books were cooked to begin with. Since the beginning, U.S. News has operated a system with the top schools pre-selected and the rest jumbled behind.
When Elfin was first charged with creating a ranking system, he seems to have known that the only believable methodology would be one that confirmed the prejudices of the meritocracy: The schools that the most prestigious journalists and their friends had gone to would have to come out on top. The first time the staff drafted a numerical ranking system to test internally–a formula that, most controversially, awarded points for diversity–a college whose name Elfin cannot even remember came out on top. He told me: “When you’re picking the most valuable player in baseball and a utility player hitting .220 comes up as the MVP, it’s not right.”
Elfin subsequently removed the first statistician who had created the algorithm and brought in Morse, a statistician with very limited educational reporting experience. Morse rewrote the algorithm and ran it through the computers. Yale came out on top, and Elfin accepted this more persuasive formula. At the time, there was internal debate about whether the methodology was as good as it could be. According to Lucia Solorzano, who helped create the original U.S. News rankings in 1983, worked on the guide until 1988, and now edits Barron’s Best Buys in College Education, “It’s a college guide and the minute you start to have people in charge of it who have little understanding of education, you’re asking for trouble.”
To Elfin, however, who has a Harvard master’s diploma on his wall, there’s a kind of circular logic to it all: The schools that the conventional wisdom of the meritocracy regards as the best are in fact the best–as confirmed by the methodology, itself conclusively ratified by the presence of the most prestigious schools at the top of the list. In 1997, he told The New York Times: “We’ve produced a list that puts Harvard, Yale and Princeton, in whatever order, at the top. This is a nutty list? Something we pulled out of the sky?”
The walls around the system that confirmed the top Ivies began to crack in 1996 when Zuckerman hired James Fallows (a contributing editor of The Washington Monthly) to edit the magazine. Fallows hired former New Yorker writer Lincoln Caplan and, when Elfin left in January of ’97, put Caplan in charge of special projects at the magazine, which included the annual development of the rankings. The two began to make a series of changes that improved the rankings, most noticeably by eliminating one decimal place in the scoring (schools now get grades like 77 instead of 76.8) to create more ties and reduce a spurious air of precision. Caplan also hired a statistical expert named Amy Graham to direct the magazine’s data gathering and analysis. Although both Caplan and Graham have left the magazine, and both declined to be interviewed, sources within U.S. News claim that, after looking deeply into the methodology of the rankings, Graham found that U.S. News had essentially put its thumb on the scale to make sure that Harvard, Yale, and Princeton continued to come out on top, as they did every year from the time Elfin selected a formula until 1999.
This was done in large part by rejecting a common statistical technique known as standardization and employing an obscure weighting technique in the national universities category. Consider the data from the 1997 book, the last year the numbers for overall expenditures were posted publicly. Caltech spent the most of any college at $74,000 per student per year, Yale spent the fourth-most at $45,000, and Harvard spent the seventh-most at $43,000. According to the U.S. News formula applied in every grouping except national universities, the absolute rates of spending would be compared, and Caltech would be credited with a huge 40-percent category advantage over Yale. Under the formula used solely for national universities, the difference between Caltech and Yale (first place and fourth place) was counted as essentially the same as the difference between Yale and Harvard (fourth place and seventh place), despite the vast difference in absolute spending.
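The statistical point is easier to see with the numbers in hand. Below is a stylized sketch, not the magazine’s actual code, comparing how a value-based score and a rank-based score treat the 1997 spending figures just cited; the scaling and the size of the ranked pool are hypothetical, chosen only to show why the choice of formula matters.

```python
# Stylized comparison of two ways to score per-student spending.
# Figures are the approximate 1997 numbers cited in the text; everything else is illustrative.
spending = {"Caltech": 74_000, "Yale": 45_000, "Harvard": 43_000}

# Value-based scoring (the approach used in the other groupings): each school is
# scored in proportion to its actual spending, so Caltech's far larger budget
# translates into a far larger category score.
top = max(spending.values())
value_scores = {school: round(100 * amount / top, 1) for school, amount in spending.items()}

# Rank-based scoring (the approach applied only to national universities): schools
# are scored by ordinal position, so the gap between 1st and 4th counts about the
# same as the gap between 4th and 7th, and Caltech's spending advantage nearly vanishes.
ranks = {"Caltech": 1, "Yale": 4, "Harvard": 7}
pool_size = 228  # hypothetical number of ranked schools, for illustration only
rank_scores = {school: round(100 * (pool_size - r + 1) / pool_size, 1) for school, r in ranks.items()}

print(value_scores)  # {'Caltech': 100.0, 'Yale': 60.8, 'Harvard': 58.1}
print(rank_scores)   # {'Caltech': 100.0, 'Yale': 98.7, 'Harvard': 97.4}
```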
According to sources close to the magazine, a bitter internal struggle broke out when it became clear, in the late spring of 1999, that Caltech was going to come out on top after the rankings had been changed to count every category the same way. Fallows’ replacement, Stephen Smith, and new Special Projects Editor Peter Cary were both reportedly shocked to see that, under the new formula Graham had recommended, the conventional wisdom of the meritocracy would be turned upside down, and there were discussions about whether the rankings should be revised to change the startling results. (Morse and Cary both deny this.) Eventually, a decision was made to keep the new formula, and U.S. News received a hefty dose of criticism from baffled readers. Morse declined to say how the formula has been changed for the rankings that will be printed on September 4th of this year. But if Caltech’s ranking drops and one of the three Ivies recovers its crown, read the small print carefully. Caltech’s advantage over the second-ranked school last year was an astronomical seven points (more than the difference between #2 and #8). The methodology would have to be monkeyed with substantially to drop Caltech out of the top spot.
Learning
There are good things about the U.S. News rankings: They help high school students without college counselors figure out ballpark quality estimates of the schools they’re considering; and they have standardized the information that universities do make public. As Harrison Rainie, a long-time U.S. News editor who worked briefly as special projects editor after Caplan and before Cary, says, “they helped create a common vocabulary. They made colleges count the number of part-time and full-time students the same way. They got colleges to define graduation rates the same way. …Colleges were using whatever numbers that they could justify under whatever definitions they felt like choosing.”
The rankings should also be given credit for intending to serve a worthy purpose. Over the past decades, colleges have become vastly more expensive and vastly more important to the American workforce. But they have not become commensurately more transparent. Colleges are reluctant to release information–like financial data or independent reports on teacher quality and student satisfaction–that could be useful to potential students and the public. Part of this reticence comes from a genuine belief that higher education functions best when left alone; part of it comes from an effort to use the ivory tower as an excuse to obscure information that might be viewed critically. U.S. News wanted to open some of the windows that colleges close and let students see in.
But to make the U.S. News rankings into something with a salutary effect overall–the kind of rankings that encourage good behavior by universities and truly help applicants–there need to be systemic changes. Specifically, the magazine needs to make a concerted effort to measure actual education. But U.S. News has never been able to change its system for two main reasons. First, with every change it makes, the magazine gets hammered by people who charge it with simply trying to generate interest each year by making schools bounce around. Second, when U.S. News changes its methodology it implicitly admits that its previous systems were inferior; if the ranking methodology is better this year, it must have been worse last year. A major change would throw fifteen years’ work into question.
The second reason was particularly acute when Mel Elfin, who had an almost paternal devotion to the rankings, held the reins. Elfin was Washington bureau chief of Newsweek for almost 20 years; he’s been to China with Nixon; he has pictures of himself sitting with every president from Johnson to Reagan on his wall. But when I asked him whether, looking back, the rankings were his greatest accomplishment, he repeated twice: “This is what’s going to last.” He wasn’t the kind of guy who was going to let the project in which he had invested so much be turned upside down.
In some ways, Morse and Elfin treated critics of the rankings as enemies of the faith. When Reed College refused on principle to submit data in 1995, U.S. News summarily dropped it to the lowest tier; despite having the 18th best academic reputation of all national liberal arts colleges in U.S. News’ reputational survey, Reed was listed right next to Richard Stockton College of New Jersey, which ranked 153rd in that survey. One of the most eloquent critics of the rankings, Stanford President Gerhard Casper, sent Fallows a personal letter in 1996, eventually made public, saying: “I am extremely skeptical that the quality of a university–any more than the quality of a magazine–can be measured statistically. However, even if it can, the producers of the U.S. News rankings remain far from discovering the method.” Elfin wasn’t copied on the letter, and neither the criticism nor its form pleased him. Casper announced his retirement this year, and when I met Elfin for our interview, he almost immediately told me: “Casper’s gone and that’s changed things.” At the most recent meeting of the National Association of College Admissions Counselors, Stanford Associate Admissions Director Jonathan Reider asked Morse a critical question about the rankings. Morse interrupted him with a caustic: “Your president’s just quit.”
Measuring Mr. Chips
When I asked Morse and Cary why they didn’t include more measures of actual education, they gave me four reasons: colleges don’t make the data available, it would be too expensive to gather, much of it simply cannot be quantified, and, as Cary told me, “if we were to tread into it …we’d get into a dozen, scores of questions.” The last two concerns are valid, but also, in a sense, refute the whole enterprise. If the rankings are subjective and leave things out, U.S. News should say so. If, as Cary said to me, the rankings can’t “be everything to everybody,” the magazine should give schools much less precise rankings, or not rank at all: Just publish the data without running it through the algorithm that produces the ordinal rankings. At the least, it should point out right in the middle of the table that it has left out extremely important data.
U.S. News also doesn’t have a terribly strong excuse in the contention that colleges give out little data: every college studies student satisfaction and teacher quality, so the data’s out there. And, as Harrison Rainie pointed out, one of the virtues of U.S. News is that it has been able to convince colleges to standardize data and make it public in other categories.
Expense is the best of the four arguments, and the previously noted NORC report did caution that the authors could not think of a financially reasonable way to gather the necessary data. But Robert Zemsky at the University of Pennsylvania’s Institute for Research on Higher Education has recently completed a survey of students from 80 schools six years after their graduation, compiling data on everything from general employment information, to whether respondents voted in the last election cycle, to complicated scenario-based questions that gauge confidence in certain job tasks and skills. How did Zemsky get his data? In large part because colleges were so eager to find reliable data on graduates that they helped fund the surveys. Another complementary research project comes from the National Survey of Student Engagement (NSSE), an organization working to survey undergraduates to find out which colleges use “good practices.” Students are asked, for example, how often they talk to professors outside of class, how much time they spend doing homework, and whether or not they would attend the same institution again. U.S. News could tap into either data set or it could develop its own.
Virtual Reality
Of course, even with all of Zemsky’s research and NSSE’s survey data included, rankings inevitably fall short: Numerical lists are a fundamentally flawed way to measure the quality of a college, an argument that university presidents, usually when their schools drop in the ratings, have been making for years. As John Katzman, president of the Princeton Review, says, “It’s the equivalent of simply giving every woman a rating of 1-10 and saying we don’t have to date. Just marry the one with the best score.” It’s hard enough to quantify the quality of one person’s education, much less the quality of an entire college. There’s too much complexity, subtlety, and individuality to justify more than a rough score.
This isn’t to say that U.S. News should abandon its system: The good should be the enemy of the pretty bad (which is why U.S. News should improve), but the perfect shouldn’t be the enemy of the good. It is to say that journalists should work toward two goals: turning over the rocks that U.S. News leaves untouched and disabusing people of the notion that these rankings should play such a prominent role in higher education.
The data that U.S. News glides by isn’t the sort that comes easily; it’s buried deep. Do the big-shot professors actually teach? How many hours? Are they good teachers? Are there unknown professors who are better teachers? Or is it the graduate students who teach? What is the intellectual atmosphere like on campus? How frequently do students stay up arguing about Faulkner, aid to Ghana, or whether a wheelchair can be built that goes up and down stairs? What about the campus support and counseling system for students who begin to flail? Some journalists do a good job searching for answers to these questions, and there are guides, like the Princeton Review’s, that take a stab at sorting through them for hundreds of colleges. But the overall trend that U.S. News feeds into has been to treat universities as though they are in the business of conferring degrees–personnel offices for the rest of America–and less as intellectual environments where students really learn.
In fairness to journalists, of course, these are not questions that are easily answered, and there’s good reason for that: Universities often don’t want students to know. They don’t want to make it easy for reporters to look into issues of teacher quality or the intellectual atmosphere on campus because what reporters would find wouldn’t be pretty–a few too many students like Rob, and a few too many professors like mine on Gulliver’s Travels. As the Carnegie Foundation for the Advancement of Teaching wrote in a blistering report on research universities in 1998: some professors “are likely to be badly trained or even untrained teaching assistants who are groping their way toward a teaching technique; some others may be tenured drones who deliver set lectures from yellowed notes, making no effort to engage the bored minds of the students in front of them.”
As for disabusing readers of the notion that rankings should have a central function in educational evaluation, there’s a very good example of the kind of journalism needed, and it comes straight from the magazine itself: U.S. News’ very own 1999 guide to “Outstanding High Schools.” This 40-page report investigated high schools in six major metropolitan areas and evaluated them through a series of quantitative measures based on test scores and dropout rates controlled for family circumstances. Most important, after making its best rough analysis, U.S. News simply listed the schools it considered the most outstanding; it didn’t rank them. Thus, instead of focusing on the horse-race element, readers focused on the list of traits common among outstanding schools–for example, mentoring programs for new teachers and partnerships with parents. The report was built around extensive profiles of schools that succeeded in each category.
In a sense, the U.S. News rankings serve as a test; administrators are teaching to it, and society, including students, puts enormous stock in the results. And, as in all levels of education, there’s no problem with that–if the test measures the right things as well as possible, if people recognize that no test can measure everything, and if there are well-developed methods for describing what the test leaves out.
Unfortunately, U.S. News falls far short of the first goal, and we all fall short of the others.