To establish the set of colleges included in the rankings, we started with the 1,739 colleges in the fifty states that are listed in the U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS); have a 2015 Carnegie basic classification of research, master’s, baccalaureate, or baccalaureate/associate’s college; are not exclusively graduate colleges; participate in federal financial aid programs; and plan to be open in fall 2018. We then excluded 179 baccalaureate and baccalaureate/associate’s-level colleges that reported awarding at least half of their undergraduate degrees between 2013 and 2015 below the bachelor’s level, twenty-four colleges with fewer than 100 undergraduate students in any year they were open between fall 2014 and fall 2016, and an additional eleven colleges with fewer than twenty-five students in the federal graduation rate cohort in 2016.

Next, we decided to exclude the five federal military academies (Air Force, Army, Coast Guard, Merchant Marine, and Navy) because their unique missions make them difficult to evaluate using our methodology. Our rankings are based in part on the percentage of students receiving Pell Grants and the percentage of students enrolled in the Reserve Officers’ Training Corps (ROTC), whereas the service academies provide all students with free tuition (so their students receive no Pell Grants or student loans) and commission graduates as officers in the armed services (outside the ROTC program). Finally, we dropped an additional thirty-two colleges for lacking data on at least one of our key social mobility outcomes (percent Pell, graduation rate, or net price). This left a final sample of 1,488 colleges, including public, private nonprofit, and for-profit institutions.


Our rankings consist of three equally weighted portions: social mobility, research, and community and national service. This means that top-ranked colleges needed to be excellent across the full breadth of our measures, rather than excelling in just one. To ensure that each measure contributed equally to a college’s score within a given category, we standardized each data element so that each had a mean of zero and a standard deviation of one (unless noted). We also adjusted the data for statistical outliers: no college’s performance in any single area was allowed to exceed five standard deviations from the mean of the data set. All measures use an average of the three most recent years of data (except where noted) to capture a college’s underlying performance rather than statistical noise. Because of rounding, some colleges have the same overall score; we ranked them according to their pre-rounding results.
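For readers who want to see the mechanics, the sketch below illustrates the steps just described: three-year averaging, z-score standardization, and the five-standard-deviation cap. It is our illustration with made-up figures, not the actual rankings code.

```python
import numpy as np

def standardize(values, clip_sd=5.0):
    """Center a measure at mean 0 and SD 1, then cap outliers at clip_sd."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.clip(z, -clip_sd, clip_sd)

# Three-year averaging smooths year-to-year noise before standardization.
grad_rates = np.array([[0.55, 0.58, 0.60],   # college A, three years of data
                       [0.70, 0.69, 0.71],   # college B
                       [0.35, 0.30, 0.34]])  # college C
scores = standardize(grad_rates.mean(axis=1))
print(scores)  # z-scores, none allowed beyond +/- 5 standard deviations
```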


The social mobility portion of the ranking also doubles as our Best Bang for the Buck rankings, with the exception that the main rankings are by Carnegie classification while the Best Bang for the Buck rankings are by region (predicted rates, however, are still calculated by Carnegie classification). For the first time in 2018, we used a college’s eight-year graduation rate for all students instead of the first-time, full-time graduation rate that was the only measure available in the past. This graduation rate, which was available for only one cohort of students, came from IPEDS and counted for 16.66 percent of the social mobility score. Half of that credit was determined by the reported graduation rate; the other half came from comparing the reported rate to a predicted graduation rate based on the percentage of Pell recipients and first-generation students, the percentage of students receiving student loans, the admit rate, the racial/ethnic and gender makeup of the student body, the number of students (overall and full-time), and whether a college is primarily residential. We estimated this predicted graduation rate in a regression model run separately for each classification, using average data from the last three years and imputing missing data when necessary. Colleges with graduation rates higher than those of the “average” college with similar characteristics score better than colleges that match or, worse, undershoot the mark. A few colleges had predicted graduation rates over 100 percent, which we trimmed back to 100 percent.
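The predicted graduation rate step amounts to an ordinary regression with the prediction clipped at 100 percent. The sketch below illustrates the idea with hypothetical data and a pared-down predictor list; it is not the model we actually estimated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical three-year-average predictors per college; the full model
# adds loans, demographics, enrollment, and residential status.
X = np.array([
    # pct_pell, pct_first_gen, admit_rate
    [0.45, 0.30, 0.70],
    [0.20, 0.15, 0.30],
    [0.60, 0.40, 0.90],
    [0.35, 0.25, 0.60],
    [0.50, 0.35, 0.80],
])
actual = np.array([0.55, 0.85, 0.40, 0.60, 0.50])  # eight-year graduation rates

# One model per Carnegie classification; this sketch fits a single group.
model = LinearRegression().fit(X, actual)
predicted = np.clip(model.predict(X), 0.0, 1.0)  # trim predictions to 100%
performance = actual - predicted  # positive = beats the "average" similar college
```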

We used new IPEDS data on Pell Grant recipients’ graduation rates for the first time this year, comparing the graduation rates of Pell and non-Pell students to develop a Pell graduation gap measure. Colleges with higher Pell than non-Pell graduation rates received a positive score on this measure, which was based on the single year of available data and counted for 16.66 percent of the social mobility score. We also used IPEDS data on the percentage of a college’s students receiving Pell Grants and College Scorecard data on the percentage of first-generation students to capture colleges’ commitment to educating a diverse group of students. These percentages counted for 8.33 percent of the social mobility score, with 5.56 percent for percent Pell and 2.77 percent for percent first-generation. We then estimated predicted percentages of Pell recipients and first-generation students based on regressions using admit rates and ACT/SAT scores. The gaps between actual and predicted percentages counted for another 8.33 percent of a college’s score, with 5.56 percent for Pell performance and 2.77 percent for first-generation performance. We measured a college’s affordability using IPEDS data on the average net prices paid over the last three years by first-time, full-time, in-state students with family incomes below $75,000 per year. We focused on these income categories because of our interest in affordability for students from lower- and middle-income families. Net price counted for 16.66 percent of the social mobility score.
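Because the weights are stated as repeating decimals (thirds and sixths of the category), it helps to verify that they add up. The hypothetical ledger below collects every social mobility weight named in this section, including the financial success measures described next:

```python
# Hypothetical ledger of the social mobility weights stated in the text;
# the components sum to 100 percent, up to rounding of the repeating decimals.
weights = {
    "graduation rate (half raw, half vs. predicted)": 16.66,
    "Pell graduation gap":                            16.66,
    "pct Pell (5.56) + pct first-generation (2.77)":   8.33,
    "Pell/first-gen performance vs. predicted":        8.33,
    "net price (family income below $75,000)":        16.66,
    "median earnings vs. predicted":                  16.66,
    "loan repayment rate (raw)":                       8.33,
    "loan repayment rate (regression-adjusted)":       8.33,
}
assert abs(sum(weights.values()) - 100) < 0.1  # 99.96, i.e. 100 before rounding
```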

The first metric of financial success we used compares the median earnings of a college’s former students (graduates and dropouts alike) ten years after initial enrollment to predicted earnings based on the variables used to predict graduation rates, plus two other factors designed to take colleges’ missions and locations into account. We adjusted for a college’s mix of bachelor’s degrees awarded, using STEM, education, business, health, social science, and liberal arts as broad degree categories. We also adjusted for regional living costs using fair market rent data from the U.S. Department of Housing and Urban Development, to account for the fact that $40,000 per year goes much farther in the rural South than in the Washington metropolitan area. This metric is worth 16.66 percent of the social mobility score. The other financial success metric is the student loan repayment rate: the percentage of students who paid down at least $1 in principal within five years of leaving college and entering repayment. We used the raw repayment rate for 8.33 percent of the social mobility score and a regression-adjusted repayment rate (using the same predictors as the graduation rate metric) for another 8.33 percent.
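The sketch below illustrates how degree mix and local rents might enter the earnings regression as extra predictors; the figures and the pared-down predictor list are hypothetical, not our estimated model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical sketch: broad degree shares and local fair market rent join
# the graduation rate predictors (elided here) in the earnings regression,
# so a college in a high-rent region is judged against a correspondingly
# higher earnings baseline.
X = np.array([
    # pct_stem, pct_business, monthly_fair_market_rent
    [0.40, 0.20,   900.0],  # rural South
    [0.10, 0.35, 2_100.0],  # Washington metro area
    [0.25, 0.25, 1_400.0],
    [0.15, 0.10, 1_100.0],
    [0.30, 0.15, 1_600.0],
])
earnings = np.array([41_000.0, 52_000.0, 45_000.0, 38_000.0, 47_000.0])

model = LinearRegression().fit(X, earnings)
residual = earnings - model.predict(X)  # positive = earns more than expected
```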

The research score for national universities is based on five measurements: the total amount of an institution’s research spending (from the Center for Measuring University Performance and the National Science Foundation); the number of science and engineering PhDs awarded by the university; the number of undergraduate alumni who have gone on to receive a PhD in any subject, relative to the size of the college; the number of faculty receiving prestigious awards, relative to the number of full-time faculty; and the number of faculty in the National Academies, relative to the number of full-time faculty. We weighted each of these components equally to determine a national university’s final score in the category. For liberal arts colleges, master’s universities, and baccalaureate colleges, which do not have extensive doctoral programs, we excluded science and engineering PhDs and gave double weight to the number of alumni who go on to get PhDs. Faculty awards and National Academy membership were not included in the research score for these institutions because such data is available for only a relative handful of these colleges.
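The classification-dependent weighting can be summarized in a few lines. The sketch below is illustrative only; the component names and z-scores are our own stand-ins.

```python
# Hypothetical sketch of the classification-dependent research score.
# The z-scores for each component would come from the standardization
# step sketched earlier; the numbers here are illustrative.
def research_score(z: dict, classification: str) -> float:
    if classification == "national university":
        # Five equally weighted components.
        w = {"research_spending": 1, "sci_eng_phds": 1, "alumni_phds": 1,
             "faculty_awards": 1, "national_academies": 1}
    else:
        # No S&E PhD count or faculty measures; alumni PhDs count double.
        w = {"research_spending": 1, "alumni_phds": 2}
    return sum(w[k] * z[k] for k in w) / sum(w.values())

z = {"research_spending": 0.8, "sci_eng_phds": 1.2, "alumni_phds": 0.5,
     "faculty_awards": -0.2, "national_academies": 0.1}
print(research_score(z, "national university"))   # 0.48
print(research_score(z, "liberal arts college"))  # (0.8 + 2*0.5) / 3 = 0.6
```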

We determined the community service score from each college’s performance on five measures. We judged military service by collecting data on the size of each college’s Air Force, Army, and Navy ROTC programs and dividing by the number of students. We similarly measured national service by dividing the number of alumni currently serving in the Peace Corps by total enrollment. As a measure of how much colleges prioritize community service, we used the percentage of federal work-study grant money spent on community service projects, based on data provided by the Corporation for National and Community Service. Each of these three measures was standardized using a three-year rolling average, except for work-study (which used the two most recent years of data available).
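Each of these three measures is a simple per-student (or per-dollar) ratio, as in the illustrative sketch below; the figures are made up.

```python
# Hypothetical per-student normalization for the three rate-based service
# measures; each rate then feeds the standardization sketched earlier.
enrollment = 12_000
rotc_rate = (80 + 120 + 60) / enrollment        # Air Force + Army + Navy ROTC
peace_corps_rate = 25 / enrollment              # alumni currently serving
work_study_service_share = 150_000 / 1_200_000  # service dollars / total work-study dollars
```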

We then added an indicator for whether a college provided at least some matching funds for undergraduate students who had received a Segal AmeriCorps Education Award for having completed national service. Colleges that awarded at least some grants to students regardless of program received two points, colleges that limited grants to specific undergraduate programs received one point, and colleges that did not participate or limited awards to graduate students only received no points.

Finally, we added a new measure of voting engagement to the 2018 rankings using data from the National Study of Learning, Voting, and Engagement (NSLVE) at Tufts University and the ALL IN Campus Democracy Challenge. Colleges could earn one point for each of four criteria: participating in the NSLVE survey, publicly releasing a report on student voting rates in either 2014 or 2016, participating in the ALL IN Campus Democracy Challenge to improve civic engagement, and releasing an action plan through ALL IN.
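Together, these last two components are simple point tallies, as in the hypothetical sketch below.

```python
# Hypothetical tally of the two point-based service components: the Segal
# AmeriCorps match (0-2 points) and voting engagement (0-4 points, one
# per criterion met).
def americorps_match_points(policy: str) -> int:
    return {"all undergraduates": 2,
            "specific undergraduate programs": 1,
            "no participation or graduate students only": 0}[policy]

def voting_engagement_points(nslve_participant: bool,
                             released_voting_report: bool,
                             all_in_participant: bool,
                             released_action_plan: bool) -> int:
    return sum([nslve_participant, released_voting_report,
                all_in_participant, released_action_plan])

total = (americorps_match_points("specific undergraduate programs")
         + voting_engagement_points(True, True, False, False))  # 1 + 2 = 3
```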

We compared our rankings to the U.S. Department of Education’s list of colleges subject to the most severe level of heightened cash monitoring, which indicates that a college is facing significant financial problems or has other serious issues that need to be addressed. Two colleges (Cheyney University in Pennsylvania and Eastern Nazarene College in Massachusetts) were on that list as of March 2018. We kept these colleges in our rankings, but denoted them with ^^ to draw this concern to readers’ attention. Finally, we checked a random sample of colleges for serious issues exposed in recent news coverage; none had concerns that rose to the level of removal from our rankings.
