My graduate school mentor was the editor of a leading journal in the field. I once asked him what kind of reviews he most hated to receive from his journal’s reviewers. He said, “It’s not the openly abusive ones, though I hate those. What I really can’t stand is when reviewers tell me what a submitted paper is not rather than what it is.” Until I began editing scientific journals myself, I did not fully appreciate the wisdom in my mentor’s remark.
Let me describe the kind of comments some reviewers generate on submitted articles:
This study uses two years of ethnographic study of 100 drug dealers in Oakland, California to provide an up close understanding of the motives, behaviors, risks and income of individuals in the drug trade. Sadly, the study tells us nothing about national trends in drug markets, which would require a large quantitative survey of dealers in major metropolitan areas nationwide.
But for a different paper, the same reviewer would write:
This study used a large quantitative survey of drug dealers in major metropolitan areas to describe national trends in drug markets. Unfortunately, this approach tells us nothing about the motives, behaviors, risks and income of individuals in the drug trade, which would require extended ethnographic research in a single drug market.
And for yet another paper, the same reviewer would write:
This study used a large quantitative survey of drug dealers in major metropolitan areas to describe national U.S. trends in drug markets coupled with extended ethnographic research in a single drug market to describe the motives, behaviors, risks and income of individuals in the drug trade. Unfortunately, the study leaves us completely in the dark about cross-cultural comparisons of drug markets, which would require a multi-national study.
When I see these kinds of reviews of what submitted manuscripts are not, I wonder if my editorial colleagues in cardiology have to deal with reviewers who say that research articles about the heart sadly tell us nothing about the kneecap. The premise of these reviewers is that authors do not have the right to determine the purpose of their own study. Rather, each piece of research should be judged based on the nearly infinite number of goals it might have pursued, but did not.
The logical and practical impossibilities of this stance become obvious when you consider that, like most editors, I am typically looking at multiple reviews of submitted work. If every reviewer is entitled to an individual, binding opinion about what the purpose of the study should have been, authors could only be published if all reviewers independently had the same fancy about what the study should have done (irrespective of what it was intended to do, natch). That’s so unlikely that I would end up rejecting all submissions. I imagine the letters I would write back to authors: “Sorry, but Reviewer 1 felt your study of major depressive disorder did not add to his understanding of the Peloponnesian War, Reviewer 2 was disappointed that your work does not even evaluate Rod Carew’s claim of being the best baseball player of his generation, and Reviewer 3 concluded that the method you chose had no possibility of resolving the long-running debate on the authenticity of the Shroud of Turin. P.S. Reviewer 3 also added that he hated “Schindler’s List” because there wasn’t a single belly laugh in the whole movie.”
It is a worthy role for reviewers to point out that a study did not achieve its intended purpose. It is also valuable for reviewers to ask for a strong rationale for why the intended purpose of the study is important. But for reviewers to tell an editor that a paper is no good because it didn’t achieve goals that it didn’t pursue is less than useless.
[Cross-posted at The Reality-Based Community]