A Note on Methodology: 4-Year Colleges and Universities

To establish the set of colleges included in the rankings, we started with the 1,550 colleges in the 50 states that are listed in the U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS), have a 2018 Carnegie basic classification of doctoral, master’s, or baccalaureate college, are not exclusively graduate institutions, participate in federal financial aid programs, and had not announced an impending closure as of June 15, 2021. We then excluded 28 colleges with fewer than 100 undergraduate students in any year they were open between fall 2017 and fall 2019, and an additional four colleges with fewer than 25 students in the federal graduation rate cohort in 2018 and 2019. 

Next, we excluded the five federal military academies (Air Force, Army, Coast Guard, Merchant Marine, and Navy) because their unique missions make them difficult to evaluate using our methodology. Our rankings are based in part on the percentage of students receiving Pell Grants and the percentage of students enrolled in the Reserve Officers’ Training Corps (ROTC), whereas the service academies provide all students with free tuition (and thus have no Pell Grants or student loans) and commission graduates as officers in the armed services (and thus have no need for ROTC). Finally, we dropped an additional 47 colleges for not having data on at least one of our key social mobility outcomes (percent Pell, graduation rate, net price, or the number of Pell recipients earning bachelor’s degrees). This resulted in a final sample of 1,466 colleges, including public, private nonprofit, and for-profit institutions. 

In the face of changing data availability, we assembled a metrics advisory group of seven higher education experts to advise us on updating our main rankings. The board consisted of Fenaba Addo of the University of North Carolina at Chapel Hill; Beth Akers of the American Enterprise Institute; Michael Itzkowitz of Third Way; Konrad Mugglestone and Eleanor Eckerson Peters of the Institute for Higher Education Policy; and Nicole Smith and Martin Van Der Werf of the Georgetown University Center on Education and the Workforce. Our changes to this year’s rankings reflect these conversations and the advisory group’s focus on equity and value for all students.

Our rankings consist of three equally weighted portions: social mobility, research, and community and national service. This means that top-ranked colleges needed to be excellent across the full breadth of our measures, rather than excelling in just one area. To ensure that each measurement contributed equally to a college’s score within any given category, we standardized each data element so that it had a mean of zero and a standard deviation of one (unless noted). Missing social mobility data (affecting less than 1 percent of all observations) was imputed and noted with “N/A” in the rankings tables. We adjusted for statistical outliers by capping each college’s performance in any single area at five standard deviations from the mean of the data set. All measures (unless noted) use an average of the three most recent years of data to capture a college’s underlying performance rather than statistical noise. 
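The standardization and outlier steps described above can be sketched in a few lines of Python. This is a minimal illustration of the approach, not the rankings’ actual code; the sample graduation rates are invented for the example.

```python
import numpy as np

def standardize(values, cap_sd=5.0):
    """Convert raw values to z-scores (mean zero, standard deviation one),
    capping outliers at +/- cap_sd standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.clip(z, -cap_sd, cap_sd)

# Hypothetical graduation rates for five colleges over three years,
# averaged to smooth out year-to-year statistical noise.
grad_rates = np.array([[60, 62, 61], [80, 79, 81], [45, 50, 47],
                       [90, 88, 92], [70, 71, 69]])
scores = standardize(grad_rates.mean(axis=1))
```

Because every measure is put on the same z-score scale, components with different units (dollars, percentages, counts) can be weighted and summed directly.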

The social mobility portion of the rankings also doubles as our Best Bang for the Buck rankings, with the exception that the main rankings are organized by Carnegie classification while the Best Bang for the Buck rankings are organized by region (predicted rates are still calculated by Carnegie classification). We used a college’s eight-year graduation rate for all students instead of the first-time, full-time graduation rate, which is typically used but presents an incomplete picture of a college’s success. This graduation rate counted for 16.66 percent of the social mobility score. Half of that score was determined by the reported graduation rate, and the other half came from comparing the reported graduation rate to a predicted graduation rate based on the percentage of Pell recipients, the percentage of students receiving student loans, the admit rate, the racial/ethnic and gender makeup of the student body, the number of students (overall and full-time), and whether a college is primarily residential. We estimated this predicted graduation rate in a regression model separately for each classification using average data from the past three years, imputing for missing data when necessary. Colleges with graduation rates higher than the “average” college with similar characteristics score better than colleges that match or, worse, undershoot the mark. A few colleges had predicted graduation rates over 100 percent, which we trimmed back to 100 percent.
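The predicted-graduation-rate comparison above amounts to fitting a regression and scoring each college on its residual. The sketch below uses ordinary least squares with a small invented data set and only three of the predictors named above (percent Pell, admit rate, percent full-time); it illustrates the idea, not the actual model specification.

```python
import numpy as np

# Hypothetical inputs for six colleges: percent Pell, admit rate,
# and percent full-time (a subset of the predictors listed above).
X = np.array([[40, 70, 85], [20, 30, 95], [55, 90, 60],
              [35, 60, 80], [25, 45, 90], [50, 80, 70]], dtype=float)
actual = np.array([55, 85, 40, 60, 78, 48], dtype=float)

# Fit an ordinary least squares model with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, actual, rcond=None)

# Predicted rates, trimmed so no prediction exceeds 100 percent.
predicted = np.clip(A @ coef, None, 100.0)

# A positive residual means the college outperforms the "average"
# college with similar characteristics; a negative one means it lags.
performance = actual - predicted
```

In the real rankings, a separate model is estimated for each Carnegie classification, so colleges are compared only against their institutional peers.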

We used IPEDS data comparing graduation rates of Pell and non-Pell students to develop a Pell graduation gap measure. Colleges with higher Pell than non-Pell graduation rates received a positive score on this measure, which was based on the single year of available data and counted for 16.66 percent of the social mobility score. We also included the number of Pell recipients earning bachelor’s degrees, which is designed to reward colleges that successfully serve large numbers of students from lower-income families. This measure, from IPEDS, counts for 8.33 percent of the social mobility score. 

We also used IPEDS data for the percentage of a college’s students receiving Pell Grants to gauge colleges’ commitment to educating a diverse group of students. We had to drop our previous first-generation student measure because the College Scorecard has not updated the data in several years. Our measure compared actual shares of Pell students to the predicted share after controlling for ACT/SAT scores and the share of families in a state with incomes below $35,000 and between $35,001 and $75,000 per year. The Pell enrollment performance measure counted for 8.33 percent of the social mobility score.

We measured a college’s affordability using data from IPEDS for the average net prices paid by first-time, full-time, in-state students with family incomes below $75,000 per year over the past three years. We focused on these income categories because of our interest in affordability for students from lower- to middle-income families. Net price counted for 16.66 percent of the social mobility score.

We replaced two of our measures of financial success due to changes in the College Scorecard. The first new measure is the share of students earning at least 150 percent of the federal poverty line three years after graduating from college. This is a proxy for whether students are able to support themselves financially after graduation. This metric is worth 16.66 percent of the social mobility score. The other financial success metric is the student loan repayment rate: the percentage of dollars borrowed that remains outstanding five years after leaving college, with lower rates being better. Rates above 100 percent mean that more interest has accumulated on students’ loans than they have been able to repay, while rates below 100 percent reflect students making a dent in the loan principal. We use the raw repayment rate for 8.33 percent of the social mobility score and a regression-adjusted repayment rate (using the same predictors as the graduation rate metric) for another 8.33 percent.
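The arithmetic behind the repayment rate can be shown with two hypothetical borrowers. This is illustrative only, not the College Scorecard’s exact formula, and the dollar amounts are invented.

```python
def outstanding_balance_ratio(amount_borrowed, balance_after_five_years):
    """Share of borrowed dollars still outstanding five years after
    leaving college. Lower is better: below 1.0 (100 percent) means
    the borrower has paid down principal; above 1.0 means accrued
    interest has outpaced payments."""
    return balance_after_five_years / amount_borrowed

# A borrower who paid $20,000 of debt down to $15,000 ...
making_progress = outstanding_balance_ratio(20_000, 15_000)  # 0.75
# ... versus one whose balance grew to $22,000 through accrued interest.
falling_behind = outstanding_balance_ratio(20_000, 22_000)   # above 1.0
```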

The research score for national universities is based on five measurements (from the Center for Measuring University Performance and the National Science Foundation): the total amount of an institution’s research spending; the number of science and engineering PhDs awarded by the university; the number of undergraduate alumni who have gone on to receive a PhD in any subject, relative to the size of the college; the number of faculty receiving prestigious awards, relative to the number of full-time faculty; and the number of faculty in the National Academies, relative to the number of full-time faculty. For national universities, we weighted each of these components equally to determine a college’s final score in the category. For liberal arts colleges, master’s universities, and bachelor’s colleges, which do not have extensive doctoral programs, science and engineering PhDs were excluded and we gave double weight to the number of alumni who go on to get PhDs. Faculty awards and National Academies membership were not included in the research score for these institutions because such data is available for only a relative handful of these colleges.
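The two weighting schemes described above can be sketched as follows. The component names and z-score values are invented for illustration and abbreviate the five measures listed in the paragraph; this is not the rankings’ actual code.

```python
# Standardized component scores for a hypothetical national university.
components = {
    "research_spending": 0.8,
    "sci_eng_phds": 0.5,
    "alumni_phds_per_capita": 1.2,
    "faculty_awards_per_capita": -0.3,
    "national_academies_per_capita": 0.4,
}

# National universities: all five components weighted equally.
national_score = sum(components.values()) / len(components)

# Liberal arts, master's, and bachelor's colleges: drop the doctoral
# and faculty measures, double-weight alumni PhDs, per the scheme above.
weights = {"research_spending": 1, "alumni_phds_per_capita": 2}
liberal_arts_score = (sum(components[k] * w for k, w in weights.items())
                      / sum(weights.values()))
```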

We determined the community and national service score by measuring each college’s performance on a range of indicators. We compiled AmeriCorps and Peace Corps data into a combined metric. We used an indicator for whether a college currently provides at least some matching funds for undergraduate students who have received a Segal AmeriCorps Education Award for completing national service (two points) and a standardized measure of the share of students receiving Segal awards. We divided the number of alumni currently serving in the Peace Corps by total enrollment, using pre-pandemic data due to the program’s suspension. 

We judged military service by collecting data on the size of each college’s Air Force, Army, and Navy ROTC programs and dividing by the number of students. We used the percentage of federal work-study grant money spent on community service projects as a measure of how much colleges prioritize community service; this is based on data provided by the Corporation for National and Community Service. Each of these three measures was standardized using a three-year rolling average.

We added a measure for whether a college received the Carnegie Community Engagement Classification, with listed colleges receiving two points. This classification, which is housed at Albion College, rewards colleges that provide documentation of their institutional mission and broader public engagement.

We measured voting engagement using data from the National Study of Learning, Voting, and Engagement (NSLVE) at Tufts University and the ALL IN Campus Democracy Challenge. Colleges could earn up to six points for fulfilling six criteria. They could receive one point for being currently enrolled in NSLVE and up to two points for making their NSLVE survey data publicly available through ALL IN in 2016 or 2018 (one point for each year). They could receive up to two points for creating an action plan through the ALL IN Campus Democracy Challenge in 2018 or 2020 (one point for each year). A college earned one point for having a student voter registration rate above 85 percent.

Finally, we added a new measure of the percentage of all degrees awarded in health, education, and social work to reward colleges that produce leaders in socially valuable fields that are not always highly paid.
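The six-point voting engagement tally can be sketched as a simple scoring function. The parameter names are our own labels, not NSLVE or ALL IN terminology, and the example college is hypothetical.

```python
def voting_engagement_points(enrolled_nslve, data_public_years,
                             action_plan_years, registration_rate):
    """Tally the up-to-six voting engagement points described above."""
    points = 1 if enrolled_nslve else 0
    # One point each for public NSLVE data in 2016 and 2018.
    points += sum(1 for y in data_public_years if y in (2016, 2018))
    # One point each for an ALL IN action plan in 2018 and 2020.
    points += sum(1 for y in action_plan_years if y in (2018, 2020))
    # One point for a voter registration rate above 85 percent.
    points += 1 if registration_rate > 0.85 else 0
    return points

# Hypothetical college: enrolled, data public both years, one action
# plan, 90 percent registration -> 5 of 6 points.
score = voting_engagement_points(True, [2016, 2018], [2020], 0.90)
```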

We compared our rankings to the U.S. Department of Education’s list of colleges subject to the most severe level of heightened cash monitoring, which indicates that a college is facing significant financial problems or has other serious issues that need to be addressed. Four colleges (Bacone College in Oklahoma, Cheyney University in Pennsylvania, Southwestern Christian University in Oklahoma, and Wiley College in Texas) were on that list as of March 2021. We kept these colleges in our rankings, but denoted them with ^ to draw this concern to readers’ attention. Finally, we checked a random sample of colleges to see if they had any serious issues that had been exposed in recent news coverage. No institution had concerns that rose to the level of us removing them from our rankings.
