It seems that the “Center for College Affordability and Productivity” has issued a ranking of “worst college teaching” using data obtained from…RateMyProfessors.com. Based on a similar analysis of think tank punditry, this is the worst “study” ever from the worst think tank anywhere.
People, I understand the urge to rank colleges. As centers of education or places to work, some universities are better than others, and good rankings can help us make important choices.
And while no ranking system is perfect, some systems at least try to collect and analyze meaningful data. The ubiquitous U.S. News & World Report rankings are based on a combination of quantitative measures and a survey of college officials–not perfect, but not based on nothing. Or, if one wants to focus on college-as-an-investment, there are rankings of schools based on return on investment (spoiler alert: science & engineering nerds make more money).
And then there is the “Center for College Affordability and Productivity.” To be fair, the Center’s rankings do bring in some conventional measures of college quality: retention rates, the aforementioned ROI rankings, graduation rates, student indebtedness. The hokiness, however, comes in the form of “listings of Alumni in Who’s Who in America” (10% weight) and “Student Evaluations from Ratemyprofessors.com” (17.5% weight). The RMP data, in turn, provide the grist for news articles on the supposed quality of university teaching…lazy data for lazy reporters.
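To see how much those components matter, here is a minimal sketch of how a weighted composite of this sort gets computed. The 17.5% and 10% weights are from the CCAP breakdown above; everything else (the lumped-together remainder and all of the scores) is a hypothetical placeholder of my own, not CCAP’s actual data or code.

```python
# A minimal sketch of a weighted composite ranking, in the spirit of the
# CCAP methodology. Only the Who's Who (10%) and RateMyProfessors (17.5%)
# weights come from their published breakdown; the remaining component and
# all of the scores below are invented placeholders.

weights = {
    "ratemyprofessors": 0.175,  # student ratings scraped from RMP
    "whos_who":         0.100,  # alumni listed in Who's Who in America
    "other_measures":   0.725,  # retention, ROI, graduation, debt, etc. (lumped together here)
}

# Hypothetical normalized scores (0-1) for an imaginary school.
scores = {
    "ratemyprofessors": 0.80,
    "whos_who":         0.40,
    "other_measures":   0.65,
}

composite = sum(weights[k] * scores[k] for k in weights)
print(f"Composite score: {composite:.3f}")
```

The arithmetic is trivial, which is the point: 27.5% of the final number rides on RMP clicks and Who’s Who listings.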
Why does this sound wrong? The term you are looking for is construct validity. I will put it in bold caps so you can see it from afar: CONSTRUCT VALIDITY. Construct validity is “the degree to which an instrument measures the characteristic being investigated; the extent to which the conceptual definitions match the operational definitions,” and it is exactly what the CCAP study lacks. Let’s consider the two most questionable components:
Rating RateMyProfessors
•Quality CCAP wishes to measure: Faculty teaching skill
•Data Source: RateMyProfessors.com (RMP)
•Data Generating Process: students (or, really, anyone) log in to RMP, rate a professor on several dimensions, and make comments. Note that, since this is from the perspective of (presumably lazy) college students, “easy” professors get higher scores. There is also a system for denoting “hot” professors with chili peppers.
•Quality actually measured: for a given professor, the ratings of a small, non-random sample of students-as-consumers. That is, students who really like or really dislike a professor express how much they like or dislike that professor…but not how much they learned (or could have learned); see the toy simulation after this list. For a given school, the rankings generated through this process probably say more about the student body than about the university faculty, so the list of “best teaching schools” is dominated by military academies, religiously affiliated schools, and Southern schools. Perhaps–and I am just floating this as a “what if”–what this really measures is the extent to which students respect and defer to authority figures. Gosh, I sure hope the CCAP has considered that they may have incorporated “student submission to authority” into their rankings.
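To make the selection problem concrete, here is a toy simulation (all numbers invented, no actual RMP data) of what happens when only the students with strong feelings bother to post a rating:

```python
# Toy simulation of self-selection in voluntary ratings (all numbers invented).
# Every enrolled student forms an opinion of a professor on a 1-5 scale, but
# only students with strong feelings (very high or very low) bother to rate,
# which is roughly the data-generating process described above.
import random

random.seed(42)

# Hypothetical "true" opinions for 500 students of one professor, centered near 3.5.
true_opinions = [min(5, max(1, random.gauss(3.5, 1.0))) for _ in range(500)]

# Only students who love (>= 4.5) or loathe (<= 1.5) the professor post a rating.
posted = [x for x in true_opinions if x >= 4.5 or x <= 1.5]

avg_all = sum(true_opinions) / len(true_opinions)
avg_posted = sum(posted) / len(posted)

print(f"All students:   n={len(true_opinions)}, mean={avg_all:.2f}")
print(f"Posted ratings: n={len(posted)}, mean={avg_posted:.2f}")
```

The average of the posted ratings tells you who showed up to rate, not how well anyone taught, and it shifts with the composition of the student body.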
How do we know that this is a poor measure of teaching quality? The director of CCAP says so, albeit indirectly. In this column, Richard Vedder decries the lax standards of college education:
Students do less yet are rewarded more. Several recent surveys have concluded that undergraduates study less frequently than their parents did (fewer than 30 hours a week on all academic chores, including class attendance, paper writing, etc.), but get higher grades: above a “B” average for all students, compared with a “C+” to “B-” average 50 years ago.
Why, then, would Vedder cite a website that promotes this culture of declining standards? Here’s a hypothesis: chili pepper blindness. That’s right, it’s a certified fact: Vedder is a hottie. And Vedder, seeing the plain truth of this rating, may have assumed it extended to RMP’s broader assessment of teaching ability. That’s my best guess as to why anyone would use RMP ratings as if they were real data.
What’s What with Who’s Who?
•Quality CCAP wishes to measure: Alumni career success
•Data Source: Marquis’s Who’s Who in America
•Data Generating Process: Marquis selects the entrants in this most prestigious publication by…??? The website does not say. However, it does have an interface where one can provide data on one’s own awesomeness. Now, I am not saying that Who’s Who just publishes the names of people whose desire to see their names in print is all-consuming (that’s what blogs are for!), but, based on their stated methodology, I can’t rule it out either. And, certainly, to the extent that Who’s Who contacts people for inclusion, those individuals must self-select into the volume by providing their biographical information.
•Quality actually measured: signal of quality? Not sure. Self-selection into self-promotion vehicle? Yes.
In order to put this awesomely bad study in perspective, I did a similar study ranking think tanks. Specifically, I checked my Twitter feed to see which think tanks were hated by people I was following. Having done so, I noted this tweet:
Did CBS really just rank best and worst profs based on RateMyProfessors data? RT @Prof_BearB: http://www.cbsnews.com/8301-505145_162-57570111/u.s-colleges-with-the-best-professors/ …
Having finished my survey, I tabulated my list of worst think tanks in the world:
WORST THINK TANKS IN THE WORLD
1) Center for College Affordability and Productivity
After that they are all tied…my Twitter followees don’t talk about think tanks very much.
[Cross-posted at Mischiefs of Faction]