Our methodology has two primary goals. First, we considered no single category to be more important than any other. Second, the final rankings needed to reflect excellence across the full breadth of our measures, rather than reward an exceptionally high focus on, say, research. Thus, all three main categories were weighted equally when calculating the final score. To ensure that each measurement contributed equally to a school’s score within any given category, we standardized each data set so that it had a mean of zero and a standard deviation of one. We also adjusted the data to account for statistical outliers: no school’s performance in any single area was allowed to exceed five standard deviations from the mean of the data set. Because of rounding, some schools have the same overall score; we have ranked them according to their pre-rounding results.
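For readers curious about the mechanics, here is a minimal sketch of that standardization, outlier capping, and equal weighting in Python. The figures and variable names are purely illustrative assumptions, not our actual data or code.

```python
import numpy as np

def standardize(values, cap=5.0):
    """Convert a raw measure to z-scores (mean 0, SD 1), clipping outliers at +/- cap SDs."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.clip(z, -cap, cap)

# Hypothetical category measures for five schools (illustrative numbers only).
service  = standardize([3.1, 4.8, 1.2, 5.5, 2.0])
research = standardize([120.0, 45.0, 300.0, 15.0, 90.0])
mobility = standardize([0.4, -1.1, 2.3, 0.9, -0.2])

# The three main categories carry equal weight in the final score, and ties
# after rounding are broken by the unrounded result.
final_score = (service + research + mobility) / 3
ranking = np.argsort(-final_score)  # school indices, best to worst
```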
Each of our three categories includes several components. We have determined the community service score by measuring each school’s performance in five different areas: the size of each school’s Army and Navy Reserve Officer Training Corps programs, relative to the size of the school; the number of alumni currently serving in the Peace Corps, relative to the size of the school; the percentage of federal work-study grant money spent on community service projects; a combined score based on the number of students participating in community service and total service hours performed, both relative to school size; and a combined score based on the number of full-time staff supporting community service (relative to the total number of staff), the number of academic courses that incorporate service (relative to school size), and whether the institution provides scholarships for community service.
The latter two measures are new to this year’s rankings. The first is a measure of student participation in community service and the second is a measure of institutional support for service. The new measures are based on data reported to the Corporation for National and Community Service by colleges and universities in their applications for the President’s Higher Education Community Service Honor Roll. Colleges that did not submit applications had no data and were given zeros on these measures. Many of the schools that dropped in our service rankings this year fall into this category. (Our advice to those schools is that if you care about service, believe you do a good job of promoting it, and want the world to know, then fill out the application!)
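As a sketch of how the five service components described above might be combined, assuming the equal within-category weighting noted earlier and zeros for schools that never applied for the Honor Roll, here is one way it could look in code. The component names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical standardized component values for five schools; None marks a
# school that did not apply for the Community Service Honor Roll and therefore
# has no data for the two new measures.
rotc          = np.array([ 0.8, -0.3,  1.2, -1.0,  0.1])
peace_corps   = np.array([-0.2,  1.1,  0.4, -0.7,  0.9])
work_study    = np.array([ 0.5,  0.0, -0.6,  1.3, -0.4])
participation = [ 0.7, None,  1.5, None, -0.2]
support       = [ 0.3, None,  0.9, None,  1.1]

def fill_zero(values):
    """Schools without Honor Roll applications get zeros on the new measures."""
    return np.array([v if v is not None else 0.0 for v in values])

components = [rotc, peace_corps, work_study, fill_zero(participation), fill_zero(support)]
service_score = np.mean(components, axis=0)   # equal weight for each component
```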
The research score for national universities is also based on five measurements: the total amount of an institution’s research spending (from the Center for Measuring University Performance and the National Science Foundation); the number of science and engineering PhDs awarded by the university; the number of undergraduate alumni who have gone on to receive a PhD in any subject, relative to the size of the school; the number of faculty receiving prestigious awards, relative to the number of full-time faculty; and the number of faculty in the National Academies, relative to the number of full-time faculty. For national universities, we weighted each of these components equally to determine a school’s final score in the category. For liberal arts colleges, master’s universities, and baccalaureate colleges, which do not have extensive doctoral programs, science and engineering PhDs were excluded and we gave double weight to the number of alumni who go on to get PhDs. Faculty awards and National Academy membership were not included in the research score for these institutions because such data are available for only a relative handful of these schools.
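A rough sketch of how those research components might be weighted by institution type follows; the measure names, the dictionary structure, and the exact handling are illustrative assumptions rather than our actual code.

```python
def research_score(z, school_type):
    """Combine standardized research measures (a dict of z-scores) into a category score."""
    if school_type == "national_university":
        # All five measures, equally weighted.
        weights = {
            "research_spending": 1,
            "science_engineering_phds": 1,
            "alumni_phds": 1,
            "faculty_awards": 1,
            "national_academy_members": 1,
        }
    else:
        # Liberal arts, master's, and baccalaureate institutions: no science and
        # engineering PhD count, no faculty awards or National Academy membership,
        # and double weight on alumni who go on to earn PhDs.
        weights = {
            "research_spending": 1,
            "alumni_phds": 2,
        }
    total = sum(weights.values())
    return sum(z[name] * w for name, w in weights.items()) / total

# Example: a hypothetical liberal arts college with standardized measures.
print(research_score({"research_spending": 0.4, "alumni_phds": 1.2}, "liberal_arts"))
```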
As some readers have pointed out in previous years, our research score rewards large schools for their size. This is intentional. It is the huge numbers of scientists, engineers, and PhDs that larger universities produce, combined with their enormous amounts of research spending, that will help keep America competitive in an increasingly global economy. But the two measures of university research productivity and quality—faculty awards and National Academy members, relative to the number of full-time faculty (from the Center for Measuring University Performance)—are independent of a school’s size. This year’s guide continues to reward large universities for their research productivity, but these two additional measures also recognize smaller institutions that are doing a good job of producing quality research.
The social mobility score is more complicated. We have data that tell us the percentage of a school’s students on Pell Grants, which is a good measure of a school’s commitment to educating lower-income kids. (In prior years we’ve had to rely on several data sources for this information. This year, we have a more authoritative source. The U.S. Department of Education now requires colleges to report the number of Pell Grant recipients as part of the federal Integrated Postsecondary Education Data System survey, providing reliable and comparable data across all institutions. The methodology the Department of Education uses to calculate Pell Grant percentages is slightly different from the methodology we used in prior years, resulting in some changes to colleges’ Pell Grant percentages and thus to our social mobility rankings.) We’d like to know how many of these Pell Grant recipients graduate, but schools aren’t required to track those figures. Still, because lower-income students at any school are less likely to graduate than wealthier ones, the percentage of Pell Grant recipients is a meaningful indicator in and of itself. If a campus has a large percentage of Pell Grant students—that is to say, if its student body is disproportionately poor—it will tend to diminish the school’s overall graduation rate.
We have a formula that predicts the graduation rate of the average school given its percentage of Pell Grant students and its average SAT score. (Since most schools provide only the twenty-fifth and seventy-fifth percentiles of scores, we took the mean of the two. For schools where a majority of students took the ACT, we converted ACT scores into SAT equivalents.) Schools whose graduation rates are higher than that of the “average” school with similar stats score better than schools that merely match or, worse, undershoot the mark. Four schools had comparatively low Pell Grant rates and comparatively high SAT scores, and had predicted graduation rates of over 100 percent; we adjusted these graduation rates to 100 percent. In addition, we used a second metric that predicted the percentage of students on Pell Grants based on SAT scores. This indicates which selective universities (since selectivity is highly correlated with SAT scores) are making the effort to enroll low-income students. Two schools with extremely high SAT scores had predicted Pell percentages below zero; we adjusted these percentages to zero. The two formulas were weighted equally.
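To make the two formulas concrete, here is a rough sketch of how such predicted-versus-actual metrics can be computed, using a simple least-squares fit in Python with NumPy. The data, the linear model, and the variable names are illustrative assumptions; they are not the actual regression behind the rankings.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least-squares fit; returns a function that predicts y from X."""
    A = np.column_stack([np.ones(len(X)), X])              # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda X_new: np.column_stack([np.ones(len(X_new)), X_new]) @ coef

# Illustrative figures for five hypothetical schools (not real data). ACT-dominant
# schools would first have their scores converted to SAT equivalents.
sat_25th  = np.array([1050, 1200, 1380, 980, 1140])
sat_75th  = np.array([1250, 1400, 1540, 1190, 1330])
pell_pct  = np.array([32.0, 18.0, 11.0, 45.0, 27.0])      # percent of students on Pell
grad_rate = np.array([61.0, 78.0, 93.0, 48.0, 70.0])      # graduation rate, percent

sat_mid = (sat_25th + sat_75th) / 2                        # mean of the two percentiles

# Formula 1: actual graduation rate versus the rate predicted from Pell share
# and SAT scores, with predictions capped at 100 percent.
X1 = np.column_stack([pell_pct, sat_mid])
expected_grad = np.clip(fit_linear(X1, grad_rate)(X1), None, 100.0)
grad_performance = grad_rate - expected_grad

# Formula 2: actual Pell share versus the share predicted from SAT scores alone,
# with predictions floored at zero.
X2 = sat_mid.reshape(-1, 1)
expected_pell = np.clip(fit_linear(X2, pell_pct)(X2), 0.0, None)
pell_performance = pell_pct - expected_pell

# The two formulas carry equal weight (each would be standardized, as described
# above, before entering the social mobility score).
social_mobility = (grad_performance + pell_performance) / 2
```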