Jonathan Rothwell and Siddharth Kulkarni of the Metropolitan Policy Program at Brookings made a big splash today with the release of a set of college “value-added” rankings (link to full study and Inside Higher Ed summary) focused primarily on labor market outcomes. Value-added measures, which adjust for student and institutional characteristics to get a better handle on a college’s contribution to student outcomes, are becoming increasingly common in higher education. (I’ve written about college value-added in the past, which led to me taking the reins as Washington Monthly’s rankings methodologist.) Pretty much all of the major college rankings at this point include at least one value-added component, and this set of rankings actually shares some similarities with Money’s rankings. And the Brookings report does mention correlations with the U.S. News, Money, and Forbes rankings—but not Washington Monthly. (Sigh.)
The Brookings report uses three different outcome measures, which are then adjusted for available student characteristics and institutional characteristics such as the sector of the college and where it is located:
(1) Mid-career salary of alumni: This measures the median salary of full-time workers who hold a degree from a particular college and have at least ten years of experience. The data come from PayScale, which relies on self-reported salaries from a subset of graduates, but the data likely still have value for two reasons. First, the authors do a careful job of trying to decompose any biases in the data, for example by correlating PayScale-reported earnings with data from other sources. Second, even if there is an upward bias in the data, it should be similar across institutions. As I’ve written about before, I trust the order of colleges in PayScale data more than I trust the dollar values, which are likely inflated.
But there are still a few concerns with this measure. Some of them, such as limiting the sample to graduates (excluding dropouts) and dropping students who go on to earn an advanced degree, are fairly well-known. And the focus on salary definitely rewards colleges with large engineering programs, as evidenced by those colleges’ dominance of the value-added list (while art schools look horrible). However, given that ACT and SAT math scores are the only academic preparation measure used, the bias favoring engineering schools may actually be smaller than if verbal/reading scores were also used. I would also have estimated models separately for two-year and four-year colleges instead of putting them in the same model with a dummy variable for sector, but that’s just my preference.
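To make the sector point concrete, here is a minimal sketch in Python (emphatically not the Brookings code) of the difference between a pooled model with a sector dummy and separate models by sector; the file and column names are hypothetical.

```python
# Minimal sketch, not the Brookings methodology: pooled regression with a
# sector dummy vs. separate regressions by sector. File and column names
# are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("college_outcomes.csv")  # hypothetical dataset

# Pooled: sector only shifts the intercept; all slopes are shared.
pooled = smf.ols("mid_career_salary ~ act_sat_math + pct_pell + C(sector)",
                 data=df).fit()

# Separate: every coefficient can differ between two-year and four-year colleges.
by_sector = {
    sector: smf.ols("mid_career_salary ~ act_sat_math + pct_pell",
                    data=group).fit()
    for sector, group in df.groupby("sector")
}

# Value-added is the gap between actual and predicted outcomes.
df["value_added_pooled"] = df["mid_career_salary"] - pooled.fittedvalues
```

The separate-model approach lets the relationship between preparation, student mix, and salary differ entirely by sector rather than forcing a single set of slopes on both.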
(2) Student loan repayment rate: This is the complement of the average three-year student loan cohort default rate over the last three years (so a 10% default rate is framed as a 90% repayment rate). This measure is pretty straightforward, although I do have to question the value-added estimates for colleges with very high repayment rates. Value-added estimates are difficult to conceptualize for colleges with a high probability of success, as there is typically little room for improvement. But here, the highest predicted repayment rate for four-year colleges is 96.8%, while several dozen colleges have actual repayment rates in excess of 96.8%. It appears that linear regressions were used, whereas some type of robust generalized linear model should also have been considered. (In the Washington Monthly rankings, I use simple linear regressions for graduation rate performance, but very few colleges are so close to the ceiling of 100%.)
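As an illustration of the ceiling problem, here is a minimal sketch (again with made-up variable names) of one alternative along the lines alluded to above: a fractional logit keeps predicted repayment rates below 100%, while OLS does not.

```python
# Minimal sketch with hypothetical column names: OLS can predict repayment
# rates above 100%, while a fractional logit (a GLM with a binomial family
# and logit link) keeps predictions inside (0, 1).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("repayment.csv")  # hypothetical; repayment_rate stored as 0-1

ols = smf.ols("repayment_rate ~ act_sat_math + pct_pell", data=df).fit()
frac_logit = smf.glm("repayment_rate ~ act_sat_math + pct_pell",
                     data=df, family=sm.families.Binomial()).fit()

print(ols.fittedvalues.max())         # can exceed 1.0 near the ceiling
print(frac_logit.fittedvalues.max())  # always strictly below 1.0
```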
(3) Occupational earnings potential: This is a pretty nifty measure that uses LinkedIn data to get a handle on which occupations a college’s graduates pursue during their careers. This mix of occupations is then tied to Bureau of Labor Statistics data to estimate the average salary of a college’s graduates, with advanced degree holders also included. The value-added measure attempts to control for student and institutional characteristics, although it doesn’t control for students’ preferences for certain majors when they enter college.
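The basic arithmetic behind this kind of measure is easy to illustrate. Here is a toy example, with invented occupation shares and salaries rather than the report’s figures, of weighting occupation-level average pay by a college’s alumni occupation mix.

```python
# Toy example with invented numbers, not the report's data: an expected
# salary built from a college's occupation mix and occupation-level averages.
occupation_share = {            # share of a college's alumni in each occupation
    "software developer": 0.30,
    "registered nurse": 0.25,
    "teacher": 0.45,
}
avg_salary_by_occupation = {    # hypothetical occupation-level averages
    "software developer": 110_000,
    "registered nurse": 80_000,
    "teacher": 60_000,
}

expected_salary = sum(share * avg_salary_by_occupation[occ]
                      for occ, share in occupation_share.items())
print(f"${expected_salary:,.0f}")  # $80,000 for this occupation mix
```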
I’m excited by the potential to use LinkedIn data (warts and all) to look at students’ eventual outcomes. However, it should be noted that LinkedIn is used more heavily in some fields, both ones you might expect (business and engineering) and ones you might not (communication and cultural studies). The authors adjust for these differences in representation and are very transparent about it in the appendix. The appendix is definitely on the technical side, but I welcome their transparency.
They also report five different quality measures which are not included in the value-added estimate: ‘curriculum value’ (the value of the degrees offered by the college), the value of skills alumni list on LinkedIn, the percentage of graduates deemed STEM-ready, completion rates within 200% of normal time (8 years for a 4-year college, or 4 years for a 2-year college), and average institutional grant aid. These measures are not input-adjusted, but generally reflect what people think of as quality. However, average institutional grant aid is a lousy measure to include as it rewards colleges with a high-tuition, high-aid model over colleges with a low-tuition, low-aid model—even if students pay the exact same price.
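A made-up example shows why grant aid alone is misleading: two colleges can charge students exactly the same net price while reporting very different amounts of institutional aid.

```python
# Made-up numbers: both colleges leave students with the same net price,
# but the high-tuition, high-aid college reports far more grant aid.
colleges = {
    "High-tuition, high-aid": {"sticker_price": 50_000, "avg_grant": 30_000},
    "Low-tuition, low-aid": {"sticker_price": 20_000, "avg_grant": 0},
}
for name, c in colleges.items():
    net_price = c["sticker_price"] - c["avg_grant"]
    print(f"{name}: grant aid ${c['avg_grant']:,}, net price ${net_price:,}")
# Both net prices are $20,000, yet only the first college looks generous
# on an average-institutional-grant-aid measure.
```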
In conclusion, the Brookings report tells readers some things we already know (engineering programs are where to go to make money), but provides a good—albeit partial—look at outcomes across an unusually broad swath of American higher education. I would advise readers to focus on comparing colleges with similar missions and goals, given the importance of occupation in determining earnings. I would also be more hesitant to use the metrics for very small colleges, where all of these measures can be influenced by a relatively small number of people. But the transparency of the methodology and use of new data sources make these value-added rankings a valuable contribution to the public discourse.
[Cross-posted at Kelchen on Education]