Monday, November 03, 2008

How the hell did that get published?????? [Updated]

[DISCLAIMER: This post was prompted, in part, by recent posts over at Isis's temple and DrugMonkey's cage. One should not take the following as a comment on the paper under discussion there - I have not read it and it's way outside my area of expertise, so I'm not making any judgments about it whatsoever.]



All too often I find myself reading a peer-reviewed paper and wondering how on earth it managed to get by the reviewers and editor, and end up being published. In a reputable journal. I know many of my colleagues have the same experience.

I'm not necessarily talking about disagreeing with the interpretation of data. Rather, it's a matter of poorly executed, incorrect, or missing experiments; lack of suitable controls; incorrect statistical analyses; egregious lack of (or incorrect) citation of other published work in the area; and so on. In other words, truly bad papers.

How do these get published? I don't know, but I would like to offer a couple of suggestions based on my experiences on the editorial board of a mid-level journal. First up, let me state that I am a strong believer in the peer review process. I am also well aware that it has its flaws. Secondly, a little background...

Here's how things are supposed to work at the Journal of Doodlewidgets. When a manuscript is submitted, the authors must supply the names of four potential reviewers plus suggest a member of the editorial board as someone suitable for handling the review process. The Editor then assigns the manuscript to an editorial board member (EBM), who may or may not be the one suggested by the authors. The EBM is then supposed to read the manuscript and decide if it's good enough to send out for review or whether it should be rejected without full review. Those manuscripts deemed good enough are then sent out for review. When the reviews are returned to the journal, the EBM makes a final decision (accept, reject, major or minor revisions) and the authors are informed. Many journals in my field have similar processes for handling submissions.

Here's where I see the system breaking down too often: at the EBM level on two counts.

1) As noted above, at the Journal of Doodlewidgets the EBM is supposed to read the manuscript and decide if it's good enough to send out for review. In other words, the EBM is supposed to perform a preliminary review of the manuscript. I suspect (know) this doesn't happen in many cases. There are those who join editorial boards just for the extra line on their CVs, and who can't be bothered to apply the required effort. It's not clear how to handle these people. The obvious answer is to boot them, but that's not so easy if the offending party is a Big Cheese. Journals like to have Big Cheeses on their boards for the cachet, and don't want the negative publicity that might occur if they boot one...

2) The bigger problem lies in the choice of reviewers. It's all too easy to send the manuscript out to two (or more) of the suggested reviewers. Problem is, those people are likely good friends of the authors. We all play that game. We suggest people we know who we think will review our manuscripts fairly. Or, in the case of a (hopefully) small minority of authors, automatically favorably. In some cases, authors suggest ex-co-authors or collaborators. The corresponding author of a manuscript I handled recently listed as a suggested reviewer someone with whom they had a co-authored manuscript in press! Needless to say, that's just not on.

So, it's possible that a given manuscript receives reviews that are more positive than they should be, and the EBM (who hasn't bothered reading the manuscript) simply accepts them at face value. Or the EBM is a good friend of one of the authors and overrides a more negative review. Or perhaps the manuscript is somewhat outside the EBM's area of expertise, in which case they assume the reviews from the suggested reviewers are legit. In the end, a substandard manuscript can end up being accepted...

Note that this is the opposite of the usual reviewing issue (reviewers being too harsh) often discussed in the blogosphere. And I'm ignoring the issues of lazy reviewers (a good EBM should pick up on those) and unqualified reviewers (which is the EBM's fault for using them).


It's not clear how to deal with this. If all EBMs were conscientious it wouldn't be much of an issue. But how do you ensure that the editorial board is stocked with only the good? I suspect Editors have some idea of who's dead wood, but since there are no good metrics for measuring EBM performance...

EBMs could ignore the list of suggested reviewers, but then what's the point of making authors go through that process? And most authors are probably making legitimate suggestions. You could simply scrap the idea of having a list of suggestions, but then it becomes a real crapshoot if a lazy or somewhat unqualified EBM is handling your manuscript.

I don't know the best way to fix this, or even if it's as big an issue as I perceive it to be, but I do have three suggestions. The first is for EBMs to use at most one of the suggested reviewers. Yes, that means making some EBMs work harder, but it's not that much effort. I know, because this is what I do. My second suggestion is to require authors to submit a list of people who should be excluded from reviewing due to various conflicts of interest. By this I mean the kind of thing the NSF requires from people applying for a grant: a list of all co-authors and collaborators over the last four years, plus postdoc and PhD mentors. Okay, so that's a bunch of work for the authors, but you do want that manuscript reviewed fairly, right? And you do want to see fewer crap manuscripts accepted, right? If you keep a running list, it's not that much work. Alternatively, suggesting inappropriate reviewers could be made grounds for immediate rejection, but that can be a difficult call for even the most conscientious EBM.

And finally, the EBM's name should be included in the paper. That alone should improve matters.


And thus endeth yet another long post.

2 comments:

JollyRgr said...

It makes sense to me

Goose said...

Ah, peer review... waiting on the results of one of those as we speak...