GMP has taken down her blog, The Academic Jungle. In her final post she asserted that academic/science blogging (as she sees it) has little or no significance. That the disappearance of such blogging won't make any difference.*
Perhaps that's true for her, but not for me.
I am continually learning a lot from other blogs. Reading about the trials, tribulations and triumphs of tenure-track faculty as chronicled at Prof-like Substance, Professor In Training, Blue Labcoats, ChemBiLOLogy and The Prodigal Academic has no doubt had a positive effect upon my interactions with junior faculty at my institution. I've learned much about women in science from some of the same blogs, plus Isis and Zuska. Drugmonkey and Comrade PhysioProf have dispensed invaluable advice on obtaining funding, particularly from the NIH. And Janet Stemwedel's Adventures in Ethics and Science blog is always food for thought.
In addition, the interactions I have had with people in the comments sections of this and numerous other blogs have in general been a blast.
My own blog has allowed me to share frustrations and triumphs, dispense advice unasked, and generally blather and pontificate. Maybe not useful for others, but certainly cathartic for me.
Blogging and reading academic/science blogs have significance for me.
* Hopefully I'm not getting her comments wrong or out of context. Since her blog no longer exists I'm having to go by memory - I read the post last night.
Monday, July 19, 2010
Data mining talks
As a molecular biophysicist I often hear talks (and see posters) given by bioinformaticists.* I am struck by how these are almost uniformly abysmal. I'm not necessarily referring to the data, but rather the presentation as a whole. This has reached the point where I don't think I can bring myself to sit through another bioinformatics talk (or poster presentation) for at least the next three months.
Why has the quality of the now dozens of such talks I've suffered through been so low?
In the majority of cases I posit it's a combination, in varying degrees, of a lack of imagination and a disconnection from the underlying biology. Too many of these presenters regale their audiences with interminable laundry lists of how property X is over-represented in sequences of class A and under-represented in sequences of class B. Ummmm... So what? Why should I care? Often such presenters either don't know the relevant biology or are too lazy to spend the time connecting their data with it. As an example, I recently sat through a talk where the speaker made a big deal about the prevalence of glutamine-rich sequences in proteins involved in transcription. Not once did he refer to the fairly substantial body of experimental data on these very same sequences. In fact, when asked, he couldn't offer up any explanation for this observation.** Major fail.
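To make concrete the sort of number-crunching these talks lean on, here's a minimal sketch of an over/under-representation test, in Python with invented counts (SciPy's Fisher exact test stands in for whatever enrichment statistic a given speaker actually uses). The point is that the statistics are the easy part; the missing ingredient is everything that should come after the p-value.

```python
# Illustrative sketch only: the counts below are made up, and Fisher's exact
# test is just a stand-in for whatever enrichment statistic a talk might use.
from scipy.stats import fisher_exact

# Hypothetical counts of glutamine-rich proteins among transcription-related
# proteins (class A) versus all other proteins (class B).
q_rich_A, other_A = 120, 880      # class A: 1,000 proteins
q_rich_B, other_B = 300, 8700     # class B: 9,000 proteins

table = [[q_rich_A, other_A],
         [q_rich_B, other_B]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
# An odds ratio > 1 with a small p-value says "over-represented in class A".
# It says nothing about why, which is where the biology should come in.
```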
I can't explain why this happens. Obviously it shouldn't. Perhaps it's a function of the relatively immature nature of bioinformatics as a field. It's still at a stage where method development trumps method application. Application of the intelligent kind.
I remember when macromolecular crystallography talks suffered from similar issues. They would often be these long detailed descriptions of the structure(s) just solved by the crystallographer. No connection to the biology, just the details of the structure. Listen, I don't give a rat's arse that there's a type VIIb turn between helices 7 and 8. What I want to know is what the structure tells us about the biology. Nowadays most crystallographers do make the connections. One can't get a grant for simply solving structures any more.***
I've heard through the grapevine that getting a grant to do bioinformatics has become increasingly difficult. More so than would be expected from the downturn in science funding. Perhaps we'll see the field forced to mature more rapidly and presentations improve.
* By "bioinformatics" I mean the data-mining thing. A colleague once defined it thusly: "Bioinformatics is the mining of biological databases for profit (not necessarily of the monetary kind)." This is distinct from computational biology which, at least at the molecular level, tends to employ an energy function of sorts.
** Glutamine-rich regions can be involved in DNA binding - the glutamine side chain is quite good at making hydrogen bonds with nucleic acids.
*** Not when I'm reviewing the grant. :-)
Friday, June 11, 2010
If you're going to say no, at least do it quickly
I'm on the editorial board of a journal in my field. I am often assigned manuscripts as managing editor. This means finding reviewers. Of late I've noticed a disturbing trend (ANECDOTE ALERT!!!). People I ask are taking an unreasonably long time to decide whether or not they will review a manuscript. Days. A week even. If this were just one or two people you could explain it away easily enough. They're traveling, for example. But it's not one or two. It's approaching 30-40%. Given that I'm managing two to four manuscripts at any given moment, that's a lot. And when they eventually get back to me (those that do...), they invariably say no, they can't review the manuscript.
Why are you taking so long? Read the abstract (which we send in the email), think about what else you have to get done in the next couple of weeks, and decide whether or not this is a review you want to do. Then get back to me by reply email. Not a hard process. The longer you take to decline the invitation, the longer the whole process takes. Is that what you want to happen with your manuscripts? Didn't think so. So if you're going to decline, get off your rear end and say no quickly.
Labels: academics, editorial, publishing, research, science
Monday, April 05, 2010
Can one reviewer make a difference?
I am occasionally asked to review manuscripts for a journal in my field that has a storied past, but over the last decade has fallen out of favor. Its impact factor (for what little that's worth) is now down below 2.5. This is not a journal I have ever published in, but I feel compelled to accept an invitation to review every now and then for two reasons. One is that I know a number of people on the editorial board and would like to stay on good terms with them. The other, lesser reason is more altruistic and likely misguided. In the back of my mind I have this little voice saying that maybe this journal could be restored to its former prominence if submitted manuscripts were reviewed more rigorously. Judging from the dross that is regularly published, I would say rigorous review is not a common occurrence at this journal.
I've been quite content to review two or three papers a year for this journal for the above reasons. Until recently. Much to my surprise I was asked to review a revised manuscript that I had rejected. Rejected just two weeks before the revised manuscript was submitted. Rejected for what I believed were fundamental flaws of the kind that make pretty much all of the work described in the manuscript worthless. Turns out neither the other reviewer, nor the editor (someone I don't know) spotted the same flaws or agreed with me. I can deal with that. Perhaps I was wrong - it can happen. So I read the revised manuscript and the authors' cover letter. Turns out the authors didn't think my perceived flaws were a problem. Again, I can deal with that. But here's the thing. The flaws I had identified were related to very, very basic solution thermodynamics. The kind you are taught in general chemistry. Or even high school. Okay, I could be wrong, so I consulted a physical chemistry textbook. Nope, I was right according to that. I asked a colleague*, who interrupted me before I could get half way through my explanation to tell me that the solution thermodynamics couldn't be right. For the same reasons I had identified.
So the authors are misguided, wrong, deluded idiots. Fine, it happens. Apparently the other reviewer either didn't read the manuscript carefully or is in the same category as the authors. Not so fine, but again, it happens. The editor? Now that's where I have a real issue. It would have taken him no more than 15 minutes to have read my review and checked the original submission to find I was right, leading to rejection of the manuscript. Instead he apparently looked at the two reviewer scores, one reject and one (very) minor revisions, and split the difference. Lazy bugger.
So I ask you, what's the point? Apparently my efforts to be more rigorous as a single reviewer are wasted in this case. The reality is a journal cannot raise its standards via a "grassroots" effort on the part of the reviewers. I know that and have done for some time.
The moral here is I have to stop listening to those voices in my head.
Oh, and I rejected the manuscript again. And sent the editor in question a short lesson in basic solution thermodynamics. Perhaps not very tactful, but what do I care? I won't be reviewing for him again.
* Staying within the limits of reviewer/author confidentiality of course.
Labels: academics, delusional, editorial, professor, publishing, science
Tuesday, December 01, 2009
You lost WHAT?!?!?!?!?
I was waiting for the lads over at Drugmonkey to tackle this since CPP would no doubt do a better job. But since they haven't as yet, here goes...
In the November 26 issue of Science there's yet another retraction. This time it's from the group of Peter Schultz. For those who aren't in the know, Schultz has made a name for himself developing ways to trick the translational machinery into inserting non-natural residues into protein chains. The retracted paper (Science 303, 371 (2004)) dealt with the insertion of residues with an attached sugar, the idea being this could be used to study glycosylated proteins in a more controlled manner.
For those without access to Science, here's the retraction in full:
Retraction
We wish to retract our Report (1) in which we report that β–N-acetylglucosamine-serine can be biosynthetically incorporated at a defined site in myoglobin in Escherichia coli. Regrettably, through no fault of the authors, the lab notebooks are no longer available to replicate the original experimental conditions, and we are unable to introduce this amino acid into myoglobin with the information and reagents currently in hand. We note that reagents and conditions for the incorporation of more than 50 amino acids described in other published work from the Schultz lab are available upon request.
Zhiwen Zhang,1 Jeff Gildersleeve,2 Yu-Ying Yang,3 Ran Xu,4 Joseph A. Loo,5 Sean Uryu,6 Chi-Huey Wong,7 Peter G. Schultz7,*
* To whom correspondence should be addressed. E-mail: schultz@scripps.edu
1 The University of Texas at Austin, Division of Medicinal Chemistry, College of Pharmacy, Austin, TX 78712, USA.
2 Chemical Biology Section, National Cancer Institute, Frederick, MD 21702, USA.
3 Rockefeller University, New York, NY 10065, USA.
4 6330 Buffalo Speedway, Houston, TX 77005, USA.
5 Department of Chemistry and Biochemistry, University of California, Los Angeles, CA 90095–1569, USA.
6 University of California, San Diego, CA 92121, USA.
7 The Scripps Research Institute, La Jolla, CA 92037, USA.
Reference
1. Z. Zhang et al., Science 303, 371 (2004).
Let's break this down...
Regrettably, through no fault of the authors, the lab notebooks are no longer available to replicate the original experimental conditions...
Say what?!?!? You LOST the lab notebooks???? And it's not the fault of any of the authors???? Okay, I can imagine a number of circumstances where this could happen. A fire for example. But if it's something like that why not give the details???? I'm all for the assumption of innocence and all that, but come on, this smells worse than a bucket of shrimp in the sun.
...and we are unable to introduce this amino acid into myoglobin with the information and reagents currently in hand.
They can't reproduce their own experiments. Now call me old school, but I always go by that tried and true rule that the Materials and Methods section of a paper should contain enough detail that the experiments can be reproduced by someone else. Someone not in the lab that did the work. And the retracted article does have two pages of supplementary material, most of which is the Materials and Methods... But the lab (and presumably the authors) that originally did the work still can't reproduce it even with the combination of the lab's collective knowledge and memory plus the published Materials and Methods. Smell that bucket of shrimp yet?
We note that reagents and conditions for the incorporation of more than 50 amino acids described in other published work from the Schultz lab are available upon request.
We lose our lab notebooks and can't reproduce our own experiments, but everyone should still trust us...
I need to open a window or two.
Labels: academics, delusional, editorial, publishing, research, science, science fiction
Tuesday, March 17, 2009
Gratuitous self citation
I'm currently reviewing a manuscript where 38 of the 65 cited papers were written by the senior author. And ~30 of those are at best tangentially related to the manuscript under review... I have never come across such an egregious case of gratuitous self-citation. Have you?
Somehow I don't think this manuscript will see the light of day. At least not in its current form.
Labels: academics, editorial, publishing, science, science fiction
Monday, November 03, 2008
How the hell did that get published?????? [Updated]
[DISCLAIMER: This post was prompted, in part, by recent posts over at Isis's temple and DrugMonkey's cage. One should not take the following as a comment on the paper under discussion there - I have not read it and it's way outside my area of expertise, so I'm not making any judgments about it whatsoever.]
All too often I find myself reading a peer-reviewed paper and wondering how on earth it managed to get by the reviewers and editor, and end up being published. In a reputable journal. I know many of my colleagues have the same experience.
I'm not necessarily talking about disagreeing with the interpretation of data. Rather, it's a matter of poorly executed, incorrect or missing experiments, lack of suitable controls, incorrect statistical analyses, egregious lack of, or incorrect, citation of other published work in the area, etc. In other words, truly bad papers.
How do these get published? I don't know, but I would like to offer a couple of suggestions based on my experiences on the editorial board of a mid-level journal. First up, let me state that I am a strong believer in the peer review process. I am also well aware that it has its flaws. Secondly, a little background...
Here's how things are supposed to work at the Journal of Doodlewidgets. When a manuscript is submitted, the authors must supply the names of four potential reviewers plus suggest a member of the editorial board as someone suitable for handling the review process. The Editor then assigns the manuscript to an editorial board member (EBM), who may or may not be the one suggested by the authors. The EBM is then supposed to read the manuscript and decide if it's good enough to send out for review or whether it should be rejected without full review. Those manuscripts deemed good enough are then sent out for review. When the reviews are returned to the journal, the EBM makes a final decision (accept, reject, major or minor revisions) and the authors are informed. Many journals in my field have similar processes for handling submissions.
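Just to be concrete, here's a toy sketch of that workflow in Python. The names, data structures and the journal itself are invented for illustration; this is not any journal's actual system.

```python
# Toy model of the submission flow described above; purely illustrative.
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    REJECT_WITHOUT_REVIEW = "reject without full review"
    ACCEPT = "accept"
    MINOR_REVISIONS = "minor revisions"
    MAJOR_REVISIONS = "major revisions"
    REJECT = "reject"

@dataclass
class Submission:
    title: str
    suggested_reviewers: list[str]       # authors must supply four names
    suggested_ebm: str                   # plus one editorial board member
    assigned_ebm: str | None = None
    reviews: list[str] = field(default_factory=list)

def triage(ms: Submission, assigned_ebm: str, passes_preliminary_read: bool):
    # The Editor assigns an EBM, who may or may not be the one suggested.
    ms.assigned_ebm = assigned_ebm
    # The EBM is supposed to read the manuscript before anything else.
    if not passes_preliminary_read:
        return Decision.REJECT_WITHOUT_REVIEW
    # Otherwise the EBM chooses reviewers, collects the reviews, and only then
    # makes the final call (accept / reject / major or minor revisions).
    return None  # decision deferred until the reviews are in
```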
Here's where I see the system breaking down too often: at the EBM level on two counts.
1) As noted above, at the Journal of Doodlewidgets the EBM is supposed to read the manuscript and decide if it's good enough to send out for review. In other words, the EBM is supposed to perform a preliminary review of the manuscript. I suspect (know) this doesn't happen in many cases. There are those who join editorial boards just for the extra line on their CVs, and who can't be bothered with applying the required effort. It's not clear how to handle these people. The obvious answer is to boot them, but that's not so easy if the offending party is a Big Cheese. Journals like to have Big Cheeses on their boards for the cachet. And don't want the negative publicity that might occur if they boot a Big Cheese...
2) The bigger problem lies in the choice of reviewers. It's all too easy to send the manuscript out to two (or more) of the suggested reviewers. Problem is, those people are likely good friends of the authors. We all play that game. We suggest people we know who we think will review our manuscripts fairly. Or, in the case of a (hopefully) minority of authors, automatically favorably. In some cases, authors suggest ex-co-authors or collaborators. The corresponding author of a manuscript I handled recently listed as a suggested reviewer someone they were co-authors with on a manuscript in press! Needless to say that's just not on.
So, it's possible that a given manuscript receives reviews that are more positive than they should be, and the EBM (who hasn't bothered reading the manuscript) simply accepts them at face value. Or the EBM is a good friend of one of the authors and overrides a more negative review. Or, perhaps the manuscript is somewhat outside the EBM's area of expertise, in which case they assume the reviews from the suggested reviewers are legit. In the end, a substandard manuscript can end up being accepted...
Note that this is the opposite of the usual reviewing issue (reviewers being too harsh) often discussed on the blogosphere. And I'm ignoring the issues of lazy reviewers - a good EBM should pick up on those - or unqualified reviewers - which is the EBM's fault for using them.
It's not clear how to deal with this. If all EBMs were conscientious it wouldn't be much of an issue. But how do you ensure that the editorial board is stocked with only the good? I suspect Editors have some idea of who's dead wood, but since there are no good metrics for measuring EBM performance...
EBMs could ignore the list of suggested reviewers, but then what's the point of making authors go through that process? And most authors are probably making legitimate suggestions. You could simply scrap the idea of having a list of suggestions, but then it becomes a real crapshoot if you have a lazy or somewhat unqualified EBM handling your manuscript.
I don't know the best way to fix this, or even if it's as big an issue as I perceive it to be, but I do have three suggestions. The first is for EBMs to use at most one of the suggested reviewers. Yes, that means making some EBMs work harder, but it's not that much effort. I know because this is what I do. My second suggestion is to require authors to submit a list of people who should be excluded from reviewing due to various conflicts of interest (a sketch of how such a screen might look follows below). By this I mean the kind of thing the NSF requires from people applying for a grant: a list of all co-authors and collaborators over the last four years, plus postdoc and PhD mentors. Okay, so that's a bunch of work for the authors, but you do want that manuscript reviewed fairly, right? And you do want to see fewer crap manuscripts accepted, right? If you keep a running list, it's not that much work. Alternatively, suggesting inappropriate reviewers could be made grounds for immediate rejection, but that can be a difficult call for even the most conscientious EBM.
And finally, the EBM's name should be included in the paper. That alone should improve matters.
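For what it's worth, here is a minimal sketch in Python of the kind of conflict-of-interest screen I have in mind, combined with the "use at most one suggested reviewer" rule. The function and all of the names are hypothetical.

```python
# Hypothetical sketch of an NSF-style conflict screen for suggested reviewers.

def screen_suggested_reviewers(suggested: list[str],
                               recent_coauthors: set[str],
                               mentors: set[str],
                               max_from_list: int = 1) -> list[str]:
    """Drop conflicted names, then cap how many suggested reviewers are used."""
    conflicted = recent_coauthors | mentors
    usable = [name for name in suggested if name not in conflicted]
    return usable[:max_from_list]

# Example: two of the four suggestions are recent collaborators, so at most
# one of the remaining names would actually be invited to review.
picked = screen_suggested_reviewers(
    suggested=["A. Smith", "B. Jones", "C. Lee", "D. Patel"],
    recent_coauthors={"A. Smith", "C. Lee"},
    mentors={"E. Mentor"},
)
print(picked)  # ['B. Jones']
```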
And thus endeth yet another long post.