College Guide – A Note on Methodology – 2006

We settled on two primary goals in our methodology. First, we considered no single category to be more important than any other. Second, the final rankings needed to reflect excellence across the full breadth of our measures, rather than reward an exceptionally high focus on, say, research. All categories were weighted equally when calculating the final score. To ensure that each measurement contributed equally to a school’s score in any given category, we standardized the data sets so that each had a mean of zero and a standard deviation of one. The data were also adjusted to account for statistical outliers: for the purposes of calculating the final score, no school’s performance in any single area was allowed to exceed three standard deviations from the mean of the data set.
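The standardization and outlier adjustment described above can be sketched as follows (an illustrative reconstruction; the function name and the choice of population standard deviation are our assumptions, not the guide's published code):

```python
import statistics

def standardize(values, cap=3.0):
    # Rescale a measure so it has mean 0 and standard deviation 1,
    # then clamp any school sitting more than `cap` standard
    # deviations from the mean.
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population SD; sample SD would be equally plausible
    return [max(-cap, min(cap, (v - mean) / sd)) for v in values]
```

After this step, every measure is on the same scale, so no single data set can dominate a category simply because its raw numbers are larger.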

Each of our three categories includes several components. We determined the Community Service score by measuring each school’s performance in three different areas: the percentage of its students enrolled in the Army and Navy Reserve Officer Training Corps; the percentage of its alumni who are currently serving in the Peace Corps; and the percentage of its federal work-study grants devoted to community service projects. A school’s Research score is also based on three measurements: the total amount of an institution’s research spending, the number of PhDs awarded by the university in the sciences and engineering, and the percentage of undergraduate alumni who have gone on to receive a PhD in any subject (baccalaureate PhDs). For national universities, we weighted each of these components equally to determine a school’s final score in the category.
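In other words, once each component is standardized, a category score is just the plain mean of its component z-scores (a minimal sketch; the function name is ours):

```python
def category_score(component_zs):
    # Within a category, every standardized component counts equally:
    # the category score is the mean of the component z-scores.
    return sum(component_zs) / len(component_zs)
```

The same equal-weight averaging then combines the three category scores into a school's final score.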

For liberal arts colleges, which do not grant doctorates, baccalaureate PhDs were given double weight. Baccalaureate PhDs are a new addition to our formula. Last year, research spending made up 100 percent of the liberal arts colleges’ research score; this year, it makes up only a third. This rewards liberal arts schools for how well they train students for graduate programs, rather than just for how much they spend on research. We feel this is fairer.
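The two weighting schemes for the Research score can be sketched in one function (an illustration under our reading of the text; the function and parameter names are ours):

```python
def research_score(spending_z, bacc_phd_z, phds_awarded_z=None):
    # National universities: all three standardized components count equally.
    if phds_awarded_z is not None:
        return (spending_z + phds_awarded_z + bacc_phd_z) / 3
    # Liberal arts colleges grant no doctorates, so the PhDs-awarded
    # component drops out: baccalaureate PhDs are double-weighted and
    # research spending makes up only a third of the score.
    return (spending_z + 2 * bacc_phd_z) / 3
```

Note how the double weight on baccalaureate PhDs is exactly what reduces research spending's share from 100 percent to one third for liberal arts colleges.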

The Social Mobility score is more complicated. We have data that tell us the percentage of a school’s students on Pell Grants, which is a good measure of a school’s commitment to educating lower-income kids. But, while we’d also like to know how many of these students graduate, schools aren’t required to track those figures. Still, because lower-income students at any school are less likely to graduate than wealthier ones, the percentage of Pell Grant recipients is a meaningful indicator in and of itself. If a campus has a large percentage of Pell Grant students—that is to say, if its student body is disproportionately poor—it will tend to diminish the school’s overall graduation rate. Last year, using data from all of our schools, we constructed a formula (using a technique called regression analysis) that predicted a school’s likely graduation rate given its percentage of students on Pell. Because that formula disproportionately rewarded more academically exclusive schools (whose students were high achievers and inherently more likely to graduate), this year’s formula instead predicts a school’s likely graduation rate given its percentage of Pell students and its average SAT score. (Since most schools only provide the 25th percentile and the 75th percentile of scores, we took the mean of the two.) Schools that outperform their forecasted rate score better than schools that match or, worse, undershoot the mark.

In addition, we added a second metric to our Social Mobility score by running a regression that predicted the percentage of students on Pell Grants based on SAT scores. This indicated which selective universities (since selectivity is highly correlated with SAT scores) are making the effort to enroll low-income students. The two formulas were weighted equally.
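The regression-and-residual idea behind both Social Mobility metrics can be sketched for the single-predictor case — the second metric, predicting Pell percentage from average SAT (itself the mean of the 25th and 75th percentile scores). The graduation-rate model adds a second predictor but works the same way. This is a sketch under our assumptions, not the guide's actual code:

```python
def fit_line(xs, ys):
    # Ordinary least squares for one predictor: returns (intercept, slope).
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope

def performance_vs_prediction(xs, ys):
    # A school scores by how far it lands above (positive) or below
    # (negative) the value the regression predicts for it.
    intercept, slope = fit_line(xs, ys)
    return [y - (intercept + slope * x) for x, y in zip(xs, ys)]
```

A school that enrolls more Pell students than its SAT profile predicts gets a positive residual, and thus a better Social Mobility score.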