Our methodology has two primary goals. First, we considered no single category to be more important than any other. Second, the final rankings needed to reflect excellence across the full breadth of our measures, rather than reward an exceptionally high focus on, say, research. Thus, all three main categories were weighted equally when calculating the final score. To ensure that each measurement contributed equally to a school's score within any given category, we standardized each data set so that each had a mean of zero and a standard deviation of one. The data were also adjusted to account for statistical outliers: no school's performance in any single area was allowed to exceed five standard deviations from the mean of the data set. Because of rounding, some schools have the same overall score; we have ranked them according to their pre-rounding results.
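For readers curious about the mechanics, here is a minimal sketch, in Python with invented numbers, of the standardization, five-standard-deviation cap, and equal weighting described above. It is an illustration, not our production code.

```python
import numpy as np

def standardize(values, max_sd=5.0):
    """Rescale a measure to mean 0 and SD 1, capping outliers at +/- max_sd."""
    z = (values - values.mean()) / values.std()
    return np.clip(z, -max_sd, max_sd)

# Invented category scores for four schools (each category is itself an
# equally weighted average of its standardized components).
service  = standardize(np.array([3.1, 0.4, 1.8, 0.9]))
research = standardize(np.array([120.0, 15.0, 60.0, 40.0]))
mobility = standardize(np.array([0.2, 0.5, 0.1, 0.4]))

# The three main categories carry equal weight in the final score.
overall = (service + research + mobility) / 3

# Schools whose rounded scores tie are ordered by the pre-rounding value.
ranking = np.argsort(-overall)
```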

To establish the set of colleges included in the rankings, we started with the 1,727 colleges in the fifty states that are listed in the U.S. Department of Education's Integrated Postsecondary Education Data System with a Carnegie basic classification of research, master's, baccalaureate, or baccalaureate/associate's, that are not exclusively graduate schools, and that participate in federal financial aid programs. We then excluded 134 baccalaureate and baccalaureate/associate's-level colleges that reported that at least half of the undergraduate degrees awarded in 2012 were below the bachelor's degree level, as well as eleven colleges with fewer than 100 undergraduate students in fall 2012. Next, we excluded the five federal military academies (Air Force, Army, Coast Guard, Merchant Marine, and Navy) because their unique missions make them difficult to evaluate with our methodology: our rankings are based in part on the percentage of students receiving Pell Grants and the percentage of students enrolled in the Reserve Officers' Training Corps (ROTC), whereas the service academies provide all students with free tuition (and thus no Pell Grants) and commission graduates as officers in the armed services (and thus outside the ROTC program). Finally, we excluded colleges that had not reported data on the three main measures used in the social mobility section (percent Pell, graduation rate, and net price) at least once in the past three years. The result is a final sample of 1,540 colleges that includes public, private nonprofit, and for-profit institutions.
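A rough sketch of that screening logic might look like the following. The column names and file path are invented stand-ins, not actual IPEDS variable names.

```python
import pandas as pd

# Hypothetical IPEDS-style extract; columns are illustrative only.
colleges = pd.read_csv("ipeds_colleges.csv")

in_scope = (
    colleges["carnegie_basic"].isin(
        ["research", "masters", "baccalaureate", "baccalaureate_associates"])
    & ~colleges["graduate_only"]
    & colleges["federal_aid_participant"]
)

# Baccalaureate-level schools awarding mostly sub-bachelor's degrees in 2012.
mostly_sub_bachelors = (
    colleges["carnegie_basic"].isin(["baccalaureate", "baccalaureate_associates"])
    & (colleges["share_sub_bachelors_degrees_2012"] >= 0.5)
)

sample = colleges[
    in_scope
    & ~mostly_sub_bachelors
    & (colleges["undergrads_fall_2012"] >= 100)
    & ~colleges["federal_service_academy"]
    & colleges["reported_mobility_measures_recently"]
]
```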

The primary change in this year's rankings is the use of the three most recent years of data (each weighted equally) instead of only the most recent year, as we had done in the past. This helps reduce wild swings in the rankings, particularly at smaller colleges, where a few more students graduating or defaulting on their student loans can have substantial implications for the rankings. Using the average of multiple years does hurt the ranking position of colleges that have exhibited rapid improvements in their outcomes, but the truth is that few colleges can move the dial that quickly. The change will reduce the size of year-to-year swings in a college's ranking going forward, which may sell fewer magazines but paints a more accurate picture of performance.
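As a small illustration with invented figures, here is how the equally weighted three-year average damps the swing a single cohort can cause at a small college:

```python
import numpy as np

# A hypothetical small college's graduation rate over the three most recent years.
grad_rate_by_year = np.array([0.52, 0.61, 0.55])

single_year = grad_rate_by_year[-1]     # 0.55, the old single-year approach
three_year  = grad_rate_by_year.mean()  # 0.56, the new three-year average
```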

Each of our three categories (service, research, and social mobility) includes several components. We have determined the community service score by measuring each school's performance in five different areas: the size of each school's Air Force, Army, and Navy ROTC programs, relative to the size of the school; the number of alumni currently serving in the Peace Corps, relative to the size of the school; the percentage of federal work-study grant money spent on community service projects; a combined score based on the number of students participating in community service and total service hours performed, both relative to school size; and a combined score based on the number of full-time staff supporting community service (relative to the total number of staff), the number of academic courses that incorporate service (relative to school size), and whether the institution provides scholarships for community service.

The latter two measures are based on data reported to the Corporation for National and Community Service by colleges and universities in their applications for the President’s Higher Education Community Service Honor Roll (data is available for 2011 and 2012, but not 2013—making this the only set of measures where two years of data were used instead of three). The first is a measure of student participation in community service, and the second is a measure of institutional support for service. Colleges that did not submit applications in a given year had no data and were given zeros on these measures. (Our advice to those schools: If you care about service, believe you do a good job of promoting it, and want the world to know, then fill out the application!)
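Putting the five service measures together, a simplified sketch looks something like this. The figures and variable names are invented, and zeros stand in for schools that never submitted an Honor Roll application.

```python
import numpy as np

def zscore(x):
    """Standardize a measure to mean 0 and standard deviation 1."""
    return (x - x.mean()) / x.std()

# Invented per-school values for the five service measures described above.
rotc_share        = np.array([0.010, 0.002, 0.000, 0.004])
peace_corps_share = np.array([0.003, 0.001, 0.002, 0.000])
workstudy_service = np.array([0.15, 0.07, 0.22, 0.05])
participation     = np.array([1.2, np.nan, 0.8, 0.5])   # Honor Roll measure
inst_support      = np.array([0.9, np.nan, 1.1, 0.4])   # Honor Roll measure

# Schools that never applied for the Honor Roll have no data and get zeros.
participation = np.nan_to_num(participation)
inst_support  = np.nan_to_num(inst_support)

service_score = np.mean(
    [zscore(m) for m in (rotc_share, peace_corps_share, workstudy_service,
                         participation, inst_support)],
    axis=0,
)
```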

The research score for national universities is also based on five measurements: the total amount of an institution’s research spending (from the Center for Measuring University Performance and the National Science Foundation); the number of science and engineering PhDs awarded by the university; the number of undergraduate alumni who have gone on to receive a PhD in any subject, relative to the size of the school; the number of faculty receiving prestigious awards, relative to the number of full-time faculty; and the number of faculty in the National Academies, relative to the number of full-time faculty. For national universities, we weighted each of these components equally to determine a school’s final score in the category. For liberal arts colleges, master’s universities, and baccalaureate colleges, which do not have extensive doctoral programs, science and engineering PhDs were excluded and we gave double weight to the number of alumni who go on to get PhDs. Faculty awards and National Academy membership were not included in the research score for these institutions because such data is available for only a relative handful of these schools.
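The weighting logic can be summarized in a short sketch. The measure names below are our shorthand, and the inputs are assumed to be already standardized.

```python
def research_score(z, classification):
    """Combine standardized research measures using the weights described above.

    `z` is a dict of already standardized values; the keys are illustrative.
    """
    if classification == "national_university":
        return (z["research_spending"]
                + z["sci_eng_phds"]
                + z["alumni_phds_per_student"]
                + z["faculty_awards_per_faculty"]
                + z["national_academy_per_faculty"]) / 5
    # Liberal arts, master's, and baccalaureate colleges: drop science and
    # engineering PhDs, faculty awards, and National Academy membership,
    # and give double weight to alumni PhD production.
    return (z["research_spending"] + 2 * z["alumni_phds_per_student"]) / 3
```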

As some readers have pointed out in previous years, our research score rewards large schools for their size. This is intentional. It is the huge numbers of scientists, engineers, and PhDs that larger universities produce, combined with their enormous amounts of research spending, that will help keep America competitive in an increasingly global economy. But the two measures of university research quality—faculty awards and National Academy members, relative to the number of full-time faculty (from the Center for Measuring University Performance)—are independent of a school’s size.

The social mobility score is more complicated. We have data from the federal Integrated Postsecondary Education Data System survey that tells us the percentage of a school’s students receiving Pell Grants, which is a good measure of a school’s commitment to educating lower-income students. We’d like to know how many of these Pell Grant recipients graduate, but schools aren’t required to report those figures. Still, because lower-income students at any school are less likely to graduate than wealthier ones, the percentage of Pell Grant recipients is a meaningful indicator in and of itself. If a campus has a large percentage of Pell Grant students—that is to say, if its student body is disproportionately poor—it will tend to diminish the school’s overall graduation rate.

We first predicted the percentage of students on Pell Grants based on the average ACT/SAT score and the percentage of students admitted. This indicates which selective universities (since selectivity is highly correlated with ACT/SAT scores and admit rates) are making an effort to enroll low-income students. (Since most schools provide only the twenty-fifth and seventy-fifth percentiles of scores, we took the mean of the two. For schools where a majority of students took the SAT, we converted SAT scores into ACT equivalents.)
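In simplified form, and with invented figures, the prediction works roughly like this: an ordinary least squares sketch of the idea, not our exact model.

```python
import numpy as np

def midpoint(p25, p75):
    """Approximate average test score from the 25th and 75th percentiles."""
    return (p25 + p75) / 2.0

# Invented data: percent Pell regressed on average ACT score and admit rate.
act_avg  = np.array([31.0, 24.5, 21.0, 27.5])
admit    = np.array([0.12, 0.65, 0.85, 0.40])
pct_pell = np.array([0.14, 0.33, 0.45, 0.22])

X = np.column_stack([np.ones_like(act_avg), act_avg, admit])
coef, *_ = np.linalg.lstsq(X, pct_pell, rcond=None)

# Positive values mean a school enrolls more Pell students than predicted.
pell_performance = pct_pell - X @ coef
```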

The predicted graduation rate measure is based on research by Robert Kelchen, assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University and methodologist for this year’s college guide, and Douglas N. Harris, associate professor at Tulane University. In addition to the percentage of Pell recipients and the average ACT/SAT score, the graduation rate prediction formula includes the percentage of students receiving student loans, the admit rate, the racial/ethnic and gender makeup of the student body, the number of students (overall and full-time), and institutional characteristics such as whether a college is primarily residential. We estimated this predicted graduation rate measure in a regression model separately for each classification using average data from the last three years, imputing for missing data when necessary. Schools with graduation rates that are higher than the “average” school with similar stats score better than schools that match or, worse, undershoot the mark. Two colleges, the California Institute of Technology and Harvey Mudd College, had predicted graduation rates of just over 100 percent. We adjusted these predicted graduation rates to 100 percent.
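The cap itself is simple. The sketch below, with invented numbers, shows only that step and the comparison of actual to predicted rates, not the full regression with its many predictors.

```python
import numpy as np

# Predicted rates above 100 percent (the Caltech/Harvey Mudd cases) are
# capped before comparing them to actual graduation rates.
predicted_grad = np.minimum(np.array([0.78, 1.02, 0.64]), 1.0)
actual_grad    = np.array([0.83, 0.98, 0.60])

grad_performance = actual_grad - predicted_grad
```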

We then divided the difference between the actual and predicted graduation rate by the net price of attendance, defined as the average price that first-time, full-time students who receive financial aid pay for college after subtracting need-based financial aid. This cost-adjusted graduation rate measure rewards colleges that do a good job of both graduating students and keeping costs low. The two social mobility formulas (actual vs. predicted percent Pell and cost-adjusted graduation rate performance) were weighted equally.
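In code form, with invented figures, the two social mobility components combine roughly as follows:

```python
import numpy as np

def zscore(x):
    """Standardize a measure to mean 0 and standard deviation 1."""
    return (x - x.mean()) / x.std()

# Graduation-rate performance divided by net price (in dollars here) rewards
# schools that beat expectations while keeping costs down.
actual_grad    = np.array([0.83, 0.98, 0.60])
predicted_grad = np.array([0.78, 1.00, 0.64])
net_price      = np.array([14500.0, 28000.0, 9800.0])

cost_adjusted = (actual_grad - predicted_grad) / net_price

# Actual minus predicted percent Pell (invented values).
pell_performance = np.array([0.05, -0.02, 0.03])

# The two social mobility measures are standardized and weighted equally.
social_mobility = (zscore(pell_performance) + zscore(cost_adjusted)) / 2
```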
