From time to time I come across journal articles whose quality is so poor that you wonder how they passed peer review. Two that I have mentioned here are by Cook and by Lewandowsky, and I will say no more about them. The Climategate emails showed how some researchers combined to put pressure on journals not to publish articles with which they disagreed, and John Ioannidis has written a wonderful paper about why most published findings in medical research are false.

In my undergraduate years I had a happy couple of years editing our student newspaper, but despite that extensive editorial experience I managed to escape ever being the editor of a learned journal, though I sat on a number of editorial boards and acted as a referee myself on many occasions. As I got older I discovered that my judgments were more severe than suited the editors, something that happened to me, to a lesser degree, as a PhD examiner as well. The kindest explanation is that I was increasingly out of touch with what was happening in the fields.

The point of this introduction is that ‘peer review’ is a chancy business. While we still have academic journals somebody has to decide what gets published in them. A hundred years ago the editor would do that on his own initiative. Today we use the opinion of those knowledgeable in the fields, mostly because the span of human knowledge is now so great that no individual could know more than a tiny fraction of the work being done even in sub-fields of the old disciplines, like physics or chemistry.

So we have to have something like peer review, but the knowledge that a paper has passed that test, to say it again, may not tell us very much. A nice demonstration of this truth was published a few years ago in the field of psychology, the domain where the Cook and Lewandowsky papers were agreed to be worthy of publication. Douglas Peters and Stephen Ceci pointed out in their abstract that ‘A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.’

A paywall prevented my reading the whole paper, but the abstract has the guts of it. Peters and Ceci took a dozen already published research articles ‘by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

‘With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.”’

What a hoot! I’d love to know what happened when the three alert editors/reviewers pointed out that the paper had already been published, and in their journals! Presumably Peters and Ceci owned up, and awarded the journal a gold star, as they may also have done in the case of the one article that was deemed worthy of publication. And what can one say about the others that slipped through undetected and were then rejected, in many cases for serious methodological flaws? Oh dear!

My preference, as I have explained in the past when these instances come up, is for a system of online open review. You have a paper you think is important, so you publish a draft that you judge ready for real publication, and invite comments and objections. I used to do that on occasion by sending the manuscript to a few people whose judgment I valued, but that was time-consuming. Online publishing is not. Such a system is beginning to appear, and websites like Judith Curry’s Climate etc do provide what amount to seminars in which a published paper is dissected and defended by people who know what they are talking about.

In the meantime we will continue to have people telling us that X’s article must be taken seriously because it has passed peer review, and that if we want to be taken seriously ourselves we should write an article and submit it for judgment that way. Built into this argument is the assumption that published papers display truth of some kind. They don’t, at least in my view. They are all contributions to the solution of a puzzle. We may never get to the solution, or the puzzle may become unimportant because of a discovery somewhere else.

What we should always have is a serious and respectful debate or discussion. But we are all humans, and ego and status can and do get in the way. And that’s why Drs Peters and Ceci deserve our gratitude, for puncturing a balloon that is too big and too unstable to be of much good. It will rise again.
