Last month, I learned about a publication that has been quickly gaining popularity, the Journal of Negative Results in BioMedicine (JNRBM). Published, presumably, by a gang of dour curmudgeons who hate everything, JNRBM openly welcomes the data that other journals won’t touch because it doesn’t fit the unspoken rule that all articles must end on a cheery note of promise. (“This could lead to new therapies!” boast most journal articles, relying on the word “could” to keep their platitudes accurate and the exclamation point to boost excitement, stand for “factorial,” or make a clicking sound, depending on your field.)

You might imagine that JNRBM is a place where losers gather to celebrate their failures, kind of like Best Buy or Division III football. But JNRBM meets two important needs in science reporting: the need to combat the positive spin known as publication bias and the need to make other scientists feel better about themselves.

(Unfortunately, if you don’t work in biomedicine, you’re still screwed. The Journal of Negative Results in Zoology, for example, is just called “not seeing animals.” And the Journal of Negative Results in Homeopathy is the entire field of homeopathy.)

When it comes time to put our science into words, why do we pretend that the negative results never happened? Why do we have so much trouble accepting that sometimes our hypotheses are disproved? But most importantly, where was this freaking journal when I was in grad school? You can get published even when the experiment fails—it’s the easiest way to pad your CV since the invention of 1.25-inch margins.

From this post by Adam Ruben. It gets better from there. Via @garyking.

[Cross-posted at The Monkey Cage]

John Sides

John Sides is an associate professor of political science at George Washington University.