Is there a better way to hire and evaluate professors? For decades university academics have argued that their research matters, but no one’s really been keeping track of their influence systematically.
Maybe we should do that. So argue John Yoo and a colleague in a recent paper. They say that:
This study seeks to improve on earlier efforts by producing a more relevant and accurate citation-based ranking system. It produces a measure that explains 81 percent of the variation in the U.S. News academic peer rankings, implicitly revealing how schools could boost those rankings, and lists the most cited professors based on this new ranking methodology, both overall, amongst younger scholars, and in 20 areas of legal scholarship. This allows for the top school in each area of law to be calculated, which could be useful to aspiring JD students who desire to know the best school in the area(s) of law they are most interested in. Finally, this study proposes an alternative faculty ranking system focusing on the percentage of a law school faculty that are “All-Stars” (ranked in the top 10 in citations per year in an area of law). This alternative ranking system improves upon some of the weaknesses of previous faculty quality ranking methodologies and argues that citation-based studies do measure something important – relevance of scholarship.
Yoo presents this as a sort of Moneyball measure of academic influence. Rather than working from “impressionistic opinions about who is smart and productive,” we’d know, essentially, who the best professors were by measuring how much other people cite them. This could, potentially, be useful for development purposes:
Suppose you were a university president, dean, or simply a rich donor. Could you deploy resources to attract undervalued professors and build a faculty that would punch above its salary? In other words, if Harvard is the Yankees, could you build a faculty like the Oakland A’s?
The problem with this idea is that just saying “like Moneyball” ignores the reality of actual Moneyball, which is based on a very specific formula: teams win when their players score runs. Just score runs and you’ll be fine. Citations are something rather different. The number of times a professor is cited by other professors isn’t directly correlated with that professor’s effectiveness as a teacher or as a member of an academic community.
Furthermore, Yoo seems to believe that a university could use his measure to build itself up on the cheap, since his measure explains “81 percent of the variation in the U.S. News academic peer rankings.”
Um, the academic peer rankings constitute 25 percent of the total U.S. News ranking. It’s true that’s more than any other factor, but it’s hard for any college to know how effective Yoo’s Moneyball strategy would prove. Yoo’s measure explains 81 percent of the variation in the academic peer rankings, but he says nothing about how much of the total ranking his measure explains. He’d have to do an analysis of the variance in his measure with respect to the overall ranking in order to determine how meaningful his new measure really is.
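To see why explaining variance in one component says little about the total, here’s a toy simulation. All the weights and correlations below are hypothetical, not taken from Yoo’s paper: it assumes the peer score carries a 25 percent weight in the composite, that the other factors are independent of it, and that a citation measure has an R² of 0.81 against the peer score alone.

```python
import random

random.seed(0)
n = 500

# Hypothetical components: a peer-assessment score and "everything else."
peer = [random.gauss(0, 1) for _ in range(n)]
other = [random.gauss(0, 1) for _ in range(n)]

# Composite ranking score: peer assessment weighted at 25 percent.
total = [0.25 * p + 0.75 * o for p, o in zip(peer, other)]

# A citation measure correlated 0.9 with the peer score,
# so its R^2 against the peer score alone is about 0.81.
measure = [0.9 * p + 0.19 ** 0.5 * random.gauss(0, 1) for p in peer]

def r_squared(x, y):
    """Squared Pearson correlation between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

print("R^2 vs peer score alone:", round(r_squared(measure, peer), 2))
print("R^2 vs composite ranking:", round(r_squared(measure, total), 2))
```

Under these toy assumptions the measure’s R² against the composite ranking comes out far below 0.81. In reality the components are correlated, so the gap would be smaller, but the point stands: without running the regression against the overall ranking, the 81 percent figure doesn’t tell a dean much.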
He’s implying that a school can come up from behind using his new formula, but because his measure tracks what is ultimately a pretty minor part of the total U.S. News ranking, it could take a damn long time for a focus on his strategy to pay off.
Yoo, by the way, was Deputy Assistant Attorney General of the United States from 2001 to 2003 in the Department of Justice’s Office of Legal Counsel (OLC). He wrote the Bush Administration’s infamous Torture Memos, which argued that acts of torture were legally permissible as part of the president’s “War on Terror” because those detained by the Bush administration were not entitled to prisoner-of-war legal rights under the Geneva Conventions, since the War on Terror wasn’t, you know, a real war.
It’s odd that Yoo is now so interested in measuring the “influence” of professors using this particular method. Yoo is cited by other law professors all the time, but usually not in a very positive way.