Meet the competition: Students at Shanghai Jiao Tong University, which produced the first comprehensive global college rankings—part of a trend that is challenging the supremacy of America’s higher education system.
Five years ago, Hashim Yaacob, the vice chancellor of the University of Malaya, was on top of the world. In a recently published international ranking of universities, UM had placed eighty-ninth among 200 top institutions. This was a big deal, not only for the university, but for Malaysia as a whole—for a country that was bent on creating a knowledge economy, it was a nice validation of the progress it had made. Yaacob ordered banners reading “UM a world’s top 100 university” and had them hung around the city.
UM’s moment of glory was fleeting—one year long, to be exact. When the next international ranking came out, UM had plummeted, from eighty-ninth to 169th. In reality, universities don’t change that much from year to year. And indeed, UM’s drop turned out to be caused by a decline in a questionable measure of its reputation, plus the discovery and correction of an error the university itself had made. After the drop, UM was pilloried in the Malaysian press, and widespread calls followed for a royal commission of inquiry into the unfortunate episode. Within a few months, the vice chancellor was effectively fired when he was not reappointed to a new term.
The instrument of Yaacob’s rise and fall was a periodical called the Times Higher Education Supplement, published at the time by Rupert Murdoch’s News Corp (ownership changed hands in 2005). For the past five years the publication has offered a ranking of universities around the world in a more-or-less open effort to duplicate internationally what U.S. News & World Report has done in the American higher education market. The impetus for the rankings was straightforward: “Particularly where research is concerned,” John O’Leary, the creator of the Times Higher rankings and the publication’s former editor, explained in an essay accompanying a recent installment, “Oxford and Cambridge are as likely to compare themselves with Harvard and Princeton as with other UK [institutions].” Universities are operating “at a time of unprecedented international mobility both by students and by academics”; furthermore, “governments all around the world have expressed an ambition to have at least one university among the international elite.”
Although Times Higher Education, as the publication is now called, wasn’t the first effort at producing international rankings, it has become the most controversial; its assessment of the global university pecking order is widely read not only among university administrators and students, but among government officials and politicians keen to assess their place in a world where educational achievement is a proxy for power. And for understandable reasons. Like most other economic sectors, higher education is fast becoming a global enterprise. Students and professors hopscotch from nation to nation more than ever. Western universities set up branch campuses in Asia and the Middle East, catering to huge demand for the best diplomas. In places like South Korea, Saudi Arabia, France, and Germany, a fierce race is in progress to create world-class research universities. Times Higher is now one of the chief de facto arbiters of who’s winning the knowledge industry competition.
In America, however, relatively few people have heard of the Times Higher rankings, even in academia. That’s partly the result of our famous insularity and partly of the dominant place the U.S. News rankings still occupy in American higher education. Mostly, though, it’s due to a sense of invulnerability. American universities remain the unquestioned leaders in research and the top destination for international students. The biggest brand names among them routinely dominate the upper echelons of international rankings like Times Higher. We know we’re great.
Yet in this new world of mobility and competition, challenges to America’s educational primacy are inevitable—and international rankings are the means by which those challenges are most likely to arrive. Indeed, a process is already under way to expand international rankings beyond the metrics of reputation and research—in which U.S. schools do extremely well—to include measures of classroom learning. That could lead to some surprises for top dogs such as the United States, not to mention for other nations whose overall performance educating students and preparing graduates for the workforce may not match their justly admired strengths in other areas.
This shaking up of existing hierarchies—if it occurs—could be both traumatic and useful for the American higher education system. Rankings, for all their shortcomings, have the potential to be a very useful consumer tool in a border-free educational world. Done well, they can expose weaknesses in research, highlight lackluster classroom teaching, and give universities—including sometimes complacent American institutions—incentives to build the research and human capital on which so much innovation and economic growth depend. Global education markets, just like other markets, need information to function efficiently. But it needs to be the right information.
Colleges have never much liked the idea of being ranked. When U.S. News & World Report published its first full-fledged guide to colleges in 1987, a delegation of college presidents and senior administrators met with the magazine’s editors and asked that the ranking enterprise be stopped. Purely numerical indicators were an inappropriate way to measure the varied institutions and missions of American higher education, they argued. During my two-year tenure as editor of U.S. News’s annual undergraduate and graduate rankings issues, part of my job entailed dealing with similarly unhappy administrators, who regularly visited our office to complain about the treatment they received in our rankings. Some made it pretty clear that they thought outsiders—especially journalists—shouldn’t be ranking colleges at all. Others took exception to one or more aspects of the U.S. News methodology.
Such complaints were hardly disinterested—we rarely heard them from universities that fared well in the rankings, which trumpeted the results on their Web sites. But the disgruntled administrators did have a point. The U.S. News rankings depend heavily on a survey of college and university presidents and administrators, who are asked their opinions on the quality of their own and other institutions. Whether these individuals have enough knowledge to accurately judge the quality of other schools or are just circulating the conventional wisdom is a fair question (although if they are behaving ethically they are supposed to simply follow the magazine’s instructions and avoid rating any school they don’t know much about). The rankings also rely on a plethora of more objective measures, such as faculty salaries, average class size, graduation rates, and the SAT scores of incoming freshmen. But while these are defensible stand-ins for excellence (and I defended them), they are mostly measures of inputs—what human and financial resources go into a university rather than what educational results come out. To truly judge educational quality requires also measuring outputs—specifically, how much students actually learn in the classroom, particularly at the undergraduate level. Such data exists, albeit in limited quantities, and U.S. News has tried to get it. But colleges and universities jealously hoard the information, so U.S. News has had to make do with the data it can obtain.
Creating an accurate, useful ranking of schools is a daunting, and perhaps quixotic, task. But arguing against imperfect efforts to try to rank colleges is beside the point. The runaway popularity of the U.S. News rankings demonstrates the enormous appetite that exists for the product: unsurprisingly, people like to know something about the education they’re considering paying tens of thousands of dollars a year for, and in the absence of truly meaningful measures of what exactly a college produces in exchange for your tuition, halfway measures like the existing rankings are about as good as you’re going to get.
The potency of college rankings can be seen in their rapid spread internationally. Since the 1990s, more than forty countries, from Poland to Argentina to Kazakhstan, have developed some sort of ranking of their national universities—often called “league tables,” the same term used in Europe for sports rankings.
The current decade has seen the rise of rankings that compare colleges not just within countries but between them. In 2001, administrators at China’s Shanghai Jiao Tong University tasked a chemical engineering professor named Nian Cai Liu with assessing how their school compared with others around the world. China’s massive economic expansion was in full bloom, the government saw higher education as a potential source of innovation and economic growth, and the Shanghai Jiao Tong officials wanted to know where they stood in the international marketplace. Looking at a number of mostly research-oriented factors—how many frequently cited researchers schools employed, how many articles they published, and how many prizes they won—Liu and his staff spent two years compiling data for more than 2,000 institutions. Then they weighted the measures, converted them into an aggregate score for each institution, and sorted the world’s universities from top to bottom. Without really trying to, Liu had created the first comprehensive global college rankings—Shanghai Jiao Tong’s Academic Ranking of World Universities became an international sensation, much followed in the academic and government worlds.
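The arithmetic behind such an aggregate score is simple enough to sketch. The snippet below is a minimal illustration of a weighted-sum ranking of the kind Liu’s team built; the indicator names, weights, and figures are hypothetical stand-ins, not the actual Shanghai methodology or data.

```python
# Minimal sketch of a weighted-aggregate university ranking.
# All indicator names, weights, and numbers are hypothetical, not the
# actual Shanghai Jiao Tong (ARWU) methodology or data.

# Hypothetical weights for research-oriented indicators (sum to 1.0).
WEIGHTS = {
    "highly_cited_researchers": 0.25,
    "articles_published": 0.40,
    "major_prizes": 0.35,
}

# Raw indicator values for a few made-up institutions.
universities = {
    "University A": {"highly_cited_researchers": 60, "articles_published": 4200, "major_prizes": 9},
    "University B": {"highly_cited_researchers": 35, "articles_published": 5100, "major_prizes": 3},
    "University C": {"highly_cited_researchers": 12, "articles_published": 1800, "major_prizes": 1},
}

def normalize(values):
    """Scale an indicator so the best-performing school scores 100."""
    top = max(values.values())
    return {name: 100 * v / top for name, v in values.items()}

# Normalize each indicator across institutions, then add up weighted scores.
scores = {name: 0.0 for name in universities}
for indicator, weight in WEIGHTS.items():
    raw = {name: data[indicator] for name, data in universities.items()}
    for name, scaled in normalize(raw).items():
        scores[name] += weight * scaled

# Sort the institutions from top to bottom by aggregate score.
for rank, (name, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {name}: {score:.1f}")
```

Everything contentious lives in the choices this sketch glosses over: which indicators to count, how to normalize them, and what weight each one deserves.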
A year after the debut of the Shanghai rankings, Times Higher launched its own, very different global effort to compare universities. Where the Shanghai rankings focused almost exclusively on research, the Times Higher rankings aimed for a more “rounded assessment” that would also appeal to consumers—prospective college students and their parents. In practice, this has meant that Times Higher is even more reliant than U.S. News on subjective “reputational” data from surveys of college administrators, though Times Higher also incorporates such measures as employer surveys, student/faculty ratios, and the percent of international students and faculty at a university—the latter on the grounds that it serves as a market test of an institution’s ability to attract brainpower in an ever more globalized world.
Other attempts to grade the world’s colleges and universities have also blossomed. The “Webometrics Ranking of World Universities,” put together by a division of the Spanish National Research Council, the country’s largest public research body, measures Web-based academic activities. The International Professional Rankings of World Universities, compiled by France’s prestigious Mines ParisTech, are based solely on the number of a university’s alumni serving in top positions in Fortune 500 companies.
These international rankings serve as broad measures of quality for nations intent on improving their international standing. They are also being used in some cases as the equivalent of the Good Housekeeping seal of approval. The Mongolian government, for instance, has weighed a policy that would give study-abroad funding only to students admitted to a university that appears in one of the global rankings. In the Netherlands, an immigration-reform proposal aimed at attracting more skilled migrants would restrict visas to all but graduates of the universities ranked in the two top tiers of global league tables.
Other countries are trying to revamp their university systems in the hope of achieving higher stature in the rankings. “Excellence initiatives in Germany, Russia, China and France are policy responses to rankings,” Ellen Hazelkorn, director of the Higher Education Policy Research Unit at the Dublin Institute of Technology, wrote in the online publication University World News. “The pace of higher education reform is likely to quicken in the belief that more elite, competitive and better institutions are equivalent to being higher ranked.” A recent study of rankings in four countries, conducted by the Institute for Higher Education Policy, found that rankings had a useful impact on how universities make decisions, including more data-based assessment of success, but also some potentially negative effects, like encouraging a focus on elite research universities at the expense of those that tend to serve less-advantaged students. In some cases, universities are playing the game, however grudgingly, with cold hard cash: in Australia, a number of vice chancellors have received salary bonuses predicated on their success in nudging their campuses up in the rankings.
All the existing international rankings have significant failings. Spain’s “Webometrics” effort is creative but necessarily very narrow. Shanghai’s approach is heavily biased toward science-oriented institutions, and gives universities dubious incentives to chase Nobel winners whose landmark work may not be recent enough to add meaningfully to the institution’s intellectual firepower. France’s Professional Rankings just happen to place far more French schools in the top echelon of universities than do other global rankings—a result that led to a memorable tautological headline in University World News: “French Do Well in French World Rankings.” Critics of Times Higher note that its highly volatile rankings depend heavily on an e-mail survey with a minuscule response rate and a marked bias toward institutions in the U.K. and former British Empire.
The authors of the rankings themselves are often up front about their shortcomings. “Any criticisms I’m quite happy to print,” says Ann Mroz, a veteran journalist who edits Times Higher Education. “I would prefer that people came to us and there was some sort of debate about it and see whether maybe we have got a few things wrong. Until we discuss it we’re never going to know.” Mroz says that she herself is uncomfortable with the use of faculty-student ratios in the Times Higher rankings. “It’s so crude,” she says. “Does it tell you how good the teaching is?” She would like to use a better measure, she says—if one can be found.
That’s the crux of the matter: students and governments love rankings, and people will continue to produce them, however problematic they may be, as long as that appetite exists. Valérie Pécresse, France’s minister of higher education and research, once quipped that the problem with rankings was that they existed. But if that’s the problem, it’s an insoluble one—international rankings are quite clearly here to stay. The question is, how do we make them better?
Many organizations, mostly outside the United States, are tackling this problem. The European Union, for instance, just announced that it is developing a new “multi-dimensional global university ranking.” The new assessment, still in the exploratory stage and focused mostly on Europe, aims to move beyond research in the hard sciences to include the humanities and social sciences, as well as teaching quality and “community outreach.” But for now, the best bet in the rankings world may be an initiative that the Organisation for Economic Co-operation and Development has in the works, called the Assessment of Higher Education Learning Outcomes, or AHELO. On an aggregate level, a nation’s success in building its higher education system is typically measured by enrollment levels and graduation rates—in other words, measures of quantity. AHELO is based on the premise that those measures should be accompanied by assessments of quality—how well professors are teaching, and how well students are actually learning. It’s an approach that focuses on the missing link in the rankings explosion: outputs and value-added rather than inputs and reputation.
It’s a difficult nut to crack, since there’s no standardized measure of learning quality, or even much agreement on what that might hypothetically be. The nascent AHELO’s answer to that conundrum relies on four major components. The first measures students’ skills in areas such as analytical reasoning, writing, and applying theory to practice. The second measures subject-specific knowledge. The third looks at the context in which students learn, including their own demographic backgrounds and the characteristics of the universities they attend.
The last part of AHELO is the least developed, and the most important—an attempt to measure the value-added component of higher education. In other words, it would differentiate between schools that are good at attracting A students and schools that are good at transforming B students into A students. While the first three parts of the assessment are currently being tested in a handful of countries, OECD officials say the value-added measure is not ready for prime time, even on an experimental basis—they’re still thinking through possible methodologies, drawing on similar work they are doing at the secondary school level.
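Because the OECD has not settled on a methodology, any concrete example here is necessarily speculative. The sketch below uses the simplest possible notion of value added, the gain between a cohort’s average incoming and graduating assessment scores, with made-up schools and numbers; a real measure would also have to adjust for student demographics and institutional context.

```python
# Illustrative sketch only: a simple "gain score" version of value added.
# This is not AHELO's methodology (which is still being designed); school
# names and scores are hypothetical.

schools = {
    "Selective U":   {"incoming_avg": 95, "graduating_avg": 96},  # attracts A students
    "Transformer U": {"incoming_avg": 75, "graduating_avg": 90},  # turns B students into A students
    "Coasting U":    {"incoming_avg": 88, "graduating_avg": 85},  # students lose ground
}

def value_added(school):
    """Gain between a cohort's average incoming and graduating scores."""
    return school["graduating_avg"] - school["incoming_avg"]

# Rank schools by how much learning they appear to add, not by the raw
# quality of the students they admit.
for name, data in sorted(schools.items(), key=lambda kv: -value_added(kv[1])):
    print(f"{name}: incoming {data['incoming_avg']}, "
          f"graduating {data['graduating_avg']}, value added {value_added(data):+d}")
```

On even this crude measure, the school that admits weaker students and graduates stronger ones outranks the school that merely admits the strongest class.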
This last point should give Americans pause. The OECD’s secondary school work—an international test known as the Programme for International Student Assessment, or PISA—brought some unpleasant news when it debuted in 2001 and showed that U.S. high school students were far behind many of their global counterparts. So far, U.S. colleges have little to fear from the currently available international rankings, which focus heavily on the research and reputation measures at which the long-established top tier of American schools excels. But new rankings that shine a spotlight on student learning as well as research could deliver far less pleasant results, both for American universities and for others around the world that have never put much focus on classroom learning.
The truth is that we don’t know how we’ll stack up—and not everybody wants to find out. Some in the American higher education community have been deeply resistant to the prospect of AHELO. Yes, measuring learning outcomes is important, Terry W. Hartle, a senior official at the American Council on Education, the umbrella lobbying group for U.S. colleges and universities, told Inside Higher Ed. But, he stressed, “If we haven’t been able to figure out how to do this in the United States, it’s impossible for me to imagine a method or standard that would work equally well for Holyoke Community College, MIT and the Sorbonne.” (Disclosure: Last year I consulted for ACE on a short writing assignment.)
There is some truth to what Hartle says. AHELO isn’t all-encompassing—it pays zero attention to research, which is a core activity of top universities—and we wouldn’t want schools and governments to base decisions entirely on it any more than we would want them to base decisions entirely on the Shanghai Jiao Tong or Times Higher rankings; all rankings, after all, are imperfect instruments. But that doesn’t mean we should follow the advice of many in American higher education and try to steer clear of the assessment. Such a move would only preserve U.S. schools’ international reputations in the short term; if the rest of the world cooperates with the OECD assessments, claims of American exceptionalism will look absurd.
Furthermore, if the news AHELO brings about American higher education is worse than expected, we’ll be better off knowing it sooner rather than later. Finding out that America’s K–12 education was lagging behind the rest of the developed world didn’t hurt our primary and secondary schools—it pushed us to make needed reforms. AHELO could similarly be an instrument of much-needed change in the teaching side of American higher education, a useful way to get around the recalcitrance of those educational institutions that resist attempts at bringing some accountability to their multibillion-dollar enterprise. It’s also crucial to remember that the international race to improve higher education isn’t a zero-sum game—it’s good for all of us when other people become smarter and wealthier. We shouldn’t be overly worried about other nations developing world-class universities on par with ours; what we should be worried about is whether we are really producing the great teaching and research that our students, and our society, deserve.