Students in the class of 2022 graduating from Queens College in Flushing, Queens. Photo by Luiz Rampelotto/EuropaNewswire/picture-alliance/dpa/AP Images

Much of higher education has a love/hate relationship with college rankings: institutions love them when they do well and refuse to acknowledge their existence when they drop a spot. But most colleges, and selective institutions in particular, play the rankings game in two key ways. First, they spend considerable time and effort putting together data for U.S. News & World Report to use in its annual rankings. Second, they often have an in-house research staff tasked with figuring out how to move up in the rankings as quickly as possible. Sometimes colleges juke their numbers, as evidenced by recent scandals at Temple University and the University of Southern California, where programs submitted false data for years and are now facing lawsuits from irate students.

Breaking Ranks: How the Rankings Industry Rules Higher Education and What to Do About It, by Colin Diver, Johns Hopkins University Press, 368 pp.

Enter Colin Diver. As president of Reed College in Oregon, he carried on the tradition of his predecessor by refusing to provide data to U.S. News and bearing the consequences of not being highly ranked. After a long and distinguished career in higher education, he has written Breaking Ranks, a book that is in part a treatise against prestige-based college rankings that drive colleges to make bad decisions and in part an account of how he would like to evaluate colleges if given the chance.

In my day job as an education professor and department head, I study higher education accountability while experiencing firsthand the pressures to move up in the U.S. News rankings. But I have also moonlighted as the Washington Monthly’s rankings guy for the past decade, which gives me perspective on how the rankings industry works and how colleges respond to it. That made me eager to read this book, and it generally does not disappoint.

Diver directs most of his ire at U.S. News, even though his title promises a critique of the rankings industry as a whole. I had to chuckle at the Washington Monthly being labeled a cousin to the 800-pound gorilla that is U.S. News. He devotes nearly half of the book to two lines of attack that amount to preaching to the Monthly choir: rankings reinforce the existing prestige-based hierarchy, and they encourage colleges to focus on selectivity instead of inclusivity. These are the reasons the Monthly started publishing college rankings nearly two decades ago, and we do get some credit from Diver for our alternative approach, such as including the net prices faced by working-class students and excluding acceptance rates.

Diver then discusses the challenges of producing a single number that captures a college’s performance. He raises legitimate concerns about the selection of variables, how weights are assigned, and how strongly correlated the selected variables are with one another. He takes the Monthly to task for having “somehow divined that its Pell graduation-gap measure … factored in at 5.56 percent of its overall rating, while a college’s number of first-generation students deserved a measly 0.92 percent.” He also expresses frustration with rankings that seem to change their methodology every year, whether to shake up the results or to prevent colleges from gaming them.

These are all issues that I think about every year, along with the rest of the Monthly team, as we put together our college guide. We take pride in using publicly available data rather than requiring colleges to fill out onerous surveys in order to be included in our rankings: data provided directly by colleges to U.S. News has suffered from accuracy problems in recent years, and we think colleges could put those resources to better use by helping students directly. When we change variables, it is because new measures have become available or old ones are no longer maintained. Our general principle has been to give equal weight to groups of variables that measure the same concept, and we have relied on a panel of experts for feedback on weights and variables. Is any of this perfect? Absolutely not. But we feel we are doing the best we can to be transparent about our decisions and produce a reasonable set of rankings that highlight the public good of higher education.
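
To show what that equal-weighting principle can look like in practice, here is a minimal sketch in Python. The group names, variable names, and weights are hypothetical placeholders, not the Monthly's actual categories or figures; the point is simply that when every concept group carries the same total weight, individual variables in larger groups end up with smaller shares.

```python
# Illustrative sketch only: hypothetical groups, variables, and weights,
# not the Monthly's actual methodology or data.

# Each concept group gets an equal share of the total weight; variables
# within a group split that share evenly.
groups = {
    "social_mobility": ["pell_grad_rate", "net_price", "earnings"],
    "research": ["research_spending", "phd_production"],
    "service": ["rotc_participation", "voting_engagement"],
}

group_weight = 1.0 / len(groups)  # equal weight per concept group

variable_weights = {
    var: group_weight / len(variables)
    for variables in groups.values()
    for var in variables
}

def composite_score(standardized_values: dict[str, float]) -> float:
    """Weighted sum of standardized (e.g., z-scored) metrics for one college."""
    return sum(variable_weights[v] * x for v, x in standardized_values.items())

# A variable in a three-item group ends up with a smaller individual share
# (one-third of its group's weight) than a variable in a two-item group
# (one-half), even though every group counts equally overall.
print({k: round(v, 3) for k, v in variable_weights.items()})
```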

Diver uses the fourth part of Breaking Ranks to share his philosophy for evaluating the quality of individual colleges. He begins by discussing the feasibility of using student learning outcomes to measure academic quality, and here he is much more optimistic than I am. While learning can be measured fairly readily for the technical skills taught in a student’s major, efforts to test general critical thinking and reasoning skills have struggled for decades. There was a great deal of hype around the Collegiate Learning Assessment in the 2000s, culminating in Richard Arum and Josipa Roksa’s book, Academically Adrift, which claimed only modest student learning gains, but the test never caught on broadly or came to be viewed as a good measure of those skills.

The next proposed quality measure is instructional quality, which is even more difficult to capture. Diver discusses the possibility of counting the pedagogical practices instructors use, gathering others’ opinions of their teaching, or even relying on student evaluations. Yet he neglects research showing that all of these measures work better in theory than in practice; students, for instance, often give lower ratings to professors who are women, members of underrepresented minority groups, or teaching in STEM fields. He then floats the idea of using instructional expenditures as a proxy for quality, but my take is that this rewards the wealthiest institutions, which can spend lavishly whether or not that spending generates student learning.

He then speaks favorably of the approach the Monthly rankings take to other potential measures of quality. He likes social mobility metrics such as the graduation rates of Pell Grant recipients (a proxy for students from lower-income families) and the net price paid by students with modest financial means. He also approves of evaluating graduation rates and earnings through a value-added approach that compares actual outcomes to those predicted after adjusting for student and institutional characteristics.
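
For readers unfamiliar with the value-added idea, here is a minimal sketch in Python using NumPy. The college characteristics, column names, and numbers are invented for illustration, and a simple linear model stands in for whatever adjustments a real ranking would make; the point is only that "value added" is the gap between a college's actual outcome and the outcome its characteristics would predict.

```python
# Minimal sketch of a value-added comparison with made-up colleges and
# toy numbers; real rankings use richer models and more characteristics.
import numpy as np

# Rows: colleges. Columns: intercept, percent Pell, average admissions test score.
X = np.array([
    [1.0, 0.45, 1050.0],
    [1.0, 0.20, 1300.0],
    [1.0, 0.60,  980.0],
    [1.0, 0.35, 1150.0],
])
actual_grad_rate = np.array([0.62, 0.85, 0.55, 0.70])

# Fit a simple linear model of graduation rates on student/institutional traits.
coefs, *_ = np.linalg.lstsq(X, actual_grad_rate, rcond=None)
predicted = X @ coefs

# "Value added" here is actual minus predicted: positive values mean a college
# graduates more students than its characteristics alone would suggest.
value_added = actual_grad_rate - predicted
print(np.round(value_added, 3))
```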

The Monthly gets another shout-out for our service metrics, which Diver calls “a quirky choice of variables,” and for our use of the number of graduates who go on to earn PhDs in the research portion of the rankings. It is admittedly a somewhat quirky choice to use items such as ROTC participation and voting engagement, but these metrics capture different aspects of service and have data that is actually available. That gets back to both an advantage and a limitation of our rankings: we use data that is readily available and not submitted directly by colleges.

Finally, Diver concludes by offering recommendations for students and educators on how to approach the wild world of college rankings. He recommends that students focus more on the underlying data than on a college’s position in the rankings, and that they use rankings as a resource to learn more about particular institutions. These are reasonable recommendations, although they assume that students have the time and social capital to access numerous rankings and can choose from among a wide set of colleges. That is great advice for students from upper-middle-class families whose parents went to college, but it is likely overwhelming for first-generation college students, who tend to choose institutions based more on price than on anything else.

He starts his recommendations to educators by urging that college rankings simply be ignored, which is extremely difficult to do when legislators and governing boards pay such close attention to them. Perhaps that could work for a president with a national brand and plenty of political capital, like Michael Crow at Arizona State University. But for a leader at a status-conscious institution? Not a chance. The pressure also trickles down to deans, department heads, and faculty, as rankings are often part of strategic plans.

Diver’s steps toward rankings withdrawal, however, are worth considering. He first advises college leaders not to fill out the U.S. News peer reputation survey, which is frequently gamed and has declining response rates. No argument from me on that one. He then recommends that college leaders ignore rankings that do not fit their values and celebrate the ones that do. This is crucial, in my view, but colleges have to be consistent about it instead of ignoring rankings only in the years when they drop. If the Monthly’s rankings or U.S. News’s fit your institution better, be prepared to explain the bad changes as well as the good ones.

Overall, Breaking Ranks is an easy, breezy read that serves as a useful primer on the pros and cons of college rankings, albeit one heavily focused on U.S. News. The one thing I want to emphasize, and the reason I have stuck with the Monthly rankings for so many years, is that rankings are not going away. It is on us to produce rankings that try to measure what we think is important, and I take that charge seriously. I believe the Monthly rankings do that by focusing on the public good of higher education and shining a light on data points that would otherwise be known only to a small circle of higher education insiders.


Robert Kelchen, a professor of education at the University of Tennessee, Knoxville, is data manager of the Washington Monthly College Guide.