Lies, Damn Lies, and….

Via Kieran Healy, here’s something way off the beaten path: a new paper by Alan Gerber and Neil Malhotra titled “Can political science literatures be believed? A study of publication bias in the APSR and the AJPS.” It is, at first glance, just what it says it is: a study of publication bias, the tendency of academic journals to publish studies that find positive results but not to publish studies that fail to find results. The reason this is a problem is that it makes positive results look more positive than they really are. If two researchers do a study, and one finds a significant result (say, tall people earn more money than short people) while the other finds nothing, seeing both studies will make you skeptical of the first paper’s result. But if the only paper you see is the first one, you’ll probably think there’s something to it.

The chart on the right shows G&M’s basic result. In statistics jargon, a significant result is anything with a “z-score” higher than 1.96, and if journals accepted articles based solely on the quality of the work, with no regard to z-scores, you’d expect the z-score of studies to resemble a bell curve. But that’s not what Gerber and Malhotra found. Above a z-score of 1.96, the results fit the bell curve pretty well, but below a z-score of 1.96 there are far fewer studies than you’d expect. Apparently, studies that fail to show significant results have a hard time getting published.
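(For readers who want to see where that 1.96 comes from: it’s the z-score that corresponds to a two-tailed p-value of 0.05, the conventional cutoff for “statistical significance.” A minimal sketch using only the Python standard library, where the function name `two_tailed_p` is my own label for the calculation:)

```python
from math import erf, sqrt

def two_tailed_p(z):
    """Two-tailed p-value for a standard-normal z-statistic.

    Uses the standard-normal CDF, Phi(z) = 0.5 * (1 + erf(z / sqrt(2))),
    so the two-tailed p-value is 2 * (1 - Phi(|z|)).
    """
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# z = 1.96 is "significant" precisely because it sits at p = 0.05
print(round(two_tailed_p(1.96), 3))
```

Anything with |z| above 1.96 clears the p &lt; 0.05 bar; anything below falls short, which is why that exact value is where the histogram breaks.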

So far, this is unsurprising. Publication bias is a well-known and widely studied effect, and it would be surprising if G&M hadn’t found evidence of it. But take a closer look at the graph. In particular, take a look at the two bars directly adjacent to the magic number of 1.96. That’s kind of funny, isn’t it? They should be roughly the same height, but they aren’t even close. There are a lot of studies that just barely show significant results, and hardly any that fall just barely short of significance. There’s a pretty obvious conclusion here, and it has nothing to do with publication bias: data is being massaged on a wide scale. A lot of researchers who almost find significant results are fiddling with the data to get themselves just over the line into significance.
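You can see how telling that above/below comparison is with a toy simulation. This is an illustrative sketch of the mechanism, not G&M’s actual data or method: the distribution, the 0.2-wide “caliper” windows, and the 70% tweak-success rate are all assumptions I’ve made up for the example.

```python
import random

random.seed(0)

CRIT = 1.96   # conventional significance threshold
WINDOW = 0.2  # width of the "just above" / "just below" comparison bins

# Draw z-statistics for many hypothetical studies. Without any tampering,
# the counts just above and just below 1.96 should be roughly equal.
zs = [abs(random.gauss(1.0, 1.0)) for _ in range(10_000)]

def tweak(z, success=0.7):
    """Model p-hacking: a barely-insignificant result gets 'tweaked'
    (respecified, resampled) over the threshold 70% of the time."""
    if CRIT - WINDOW < z < CRIT and random.random() < success:
        return CRIT + random.uniform(0.0, WINDOW)
    return z

tweaked = [tweak(z) for z in zs]

just_below = sum(CRIT - WINDOW < z < CRIT for z in tweaked)
just_above = sum(CRIT < z < CRIT + WINDOW for z in tweaked)
print(just_below, just_above)
```

Run it and the bin just above 1.96 dwarfs the bin just below, exactly the asymmetry in G&M’s histogram; honest publication bias alone (dropping insignificant studies entirely) can’t produce that spike, because it removes studies rather than moving them across the line.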

And looky here. In properly sober language, that’s exactly what G&M say:

It is sometimes suggested that insignificant findings end up in “file drawers,” but we observe many results with z-statistics between zero and the critical value. There is, however, no way to know how many studies are “missing.” If scholars “tweak” regression specifications and samples to move barely insignificant results above conventional thresholds, then there may be many z-statistics below the critical value, but an inordinate number barely above the critical value and not very many barely below it. We see that pattern in the data.

We see that pattern in the data. Message to political science professors: you are being watched. And if you report results just barely above the significance level, we want to see your work.

Class dismissed.