
6 Aug 2010

The Scientist has an attack of CNS disease

The Scientist this week tells us that
"Peer review isn’t perfect [who knew?]— meet 5 high-impact papers that should have ended up in bigger journals."
Wait, what? These high-impact papers got those citations despite ending up in "second tier" journals, so I doubt the authors have been crying into their beer about this "injustice". This is an example of CNS Disease, a term coined by Harold Varmus to characterise the obsession with Cell, Nature and Science. Not all high-impact papers must be published in one of these journals, and not all papers published in these journals will be high impact. Biomedical publishing is not just a game in which editors sort articles by predicted future impact - at least, I hope it's not.

Authors choose their publication venue for all sorts of reasons, and it's hard to predict which new work will set the world on fire. Take BLAST - it was a "quick and dirty" algorithm that gave similar results to the Smith and Waterman algorithm, only much faster, and the gain in speed came at a loss of accuracy. Only use by scientists in practice could decide whether this was a good approach. Focussing on the umpteen thousand citations to BLAST is missing the point: the important thing about BLAST is the millions or billions of hours of computer time saved by using it. As Joe, the other denizen of Journalology Towers, said recently:
"Lord protect us from the idea that an academic publication might have any value beyond its ability to accumulate citations."

5 Jul 2010

Elsevier experiments with peer review

Well I never. I've been advocating the adoption of open peer review and community peer review for a while now; I didn't expect one of the pioneers of community peer review to be Elsevier, but they've surprised me.

On 21 June, they announced a three-month trial of what they are calling PeerChoice on Chemical Physics Letters, which allows potential reviewers to volunteer to review papers. As Ida Sim points out, this doesn't open up peer review in the sense of making it more transparent, but it should help speed up peer review and it might avoid the bias caused by editors selecting from a limited pool of the same 'usual suspect' reviewers.

The devil is in the details: who gets to be in the pool of potential reviewers; how will you motivate reviewers to volunteer, when getting reviewers to agree when directly inviting them can be hard enough; will volunteers be vetted for suitability for that article; is this alongside or instead of editorial selection? These questions aside, let's hope it's a success.

---
Edit: There are some answers on the hidden-away page about PeerChoice - PeerChoice is supplementary to editor-invited reviewers. Registered reviewers will see titles and abstracts and be allowed to download the manuscript if they agree to provide a "timely review." There doesn't appear to be a vetting/vetoing system, but the editor still makes the decision. The trial is on nanostructures and materials; the results might not be applicable outside that very narrow field, as scholars in different fields react in very different ways to variations in the peer review process.

19 May 2010

"Predatory" open access publishers

The Charleston Adviser has published an interesting analysis of some of the recent open access 'upstarts', titled "“Predatory” Open-Access Scholarly Publishers". They include some that I've noted before such as Bentham Open and Scientific Journals International.

As I would have expected, Libertas Academica and its sister publisher Dove Press do better than the others included in this review, but they are still far from passing with flying colours. The reviewer, Jeffrey Beall of Auraria Library, University of Colorado Denver, places a very clear "author beware" sign on:

  • Academic Journals
  • ANSINetwork
  • Bentham Open
  • Insight Knowledge
  • Knowledgia Review
  • Science Publications
  • Scientific Journals International
Beall's summary is worth repeating:
"These publishers are predatory because their mission is not to promote, preserve, and make available scholarship; instead, their mission is to exploit the author-pays, Open-Access model for their own profit.
They work by spamming scholarly e-mail lists, with calls for papers and invitations to serve on nominal editorial boards. If you subscribe to any professional e-mail lists, you likely have received some of these solicitations. Also, these publishers typically provide little or no peer-review. In fact, in most cases, their peer review process is a façade.
None of these publishers mentions digital preservation. Indeed, any of these publishers could disappear at a moment’s notice, resulting in the loss of its content."
I'd not touch any of them with a bargepole.

12 May 2010

Medical Hypotheses' editor is sacked

So Bruce Charlton's editorship at Medical Hypotheses comes to an end, and I must raise a small cheer. Schadenfreude is an ugly thing, but this journal was a boon to fringe 'scientists' everywhere, giving them the apparent legitimacy of publishing in a 'proper journal' (owned by Elsevier, indexed in PubMed) without the pesky hurdle of peer review. It was no surprise that it favoured kooks, having been set up by David Horrobin, a pusher of evening primrose oil.

The final straw was allowing AIDS denialists a platform, and the subsequent outcry from scientists and Charlton's inability to see what he did wrong led Elsevier to pull the plug. Charlton thinks that as an editor he has a perfect right to publish whatever papers he wishes, but unaccountable editorial control is no way to run a journal. Poor editorial decisions should have consequences, and the lack of any peer review or other quality control on Medical Hypotheses (the only criterion being that a paper was 'interesting') always doomed it to be derided by serious scientists and medics.

Will the new (and improved?) Medical Hypotheses see any more gems like too much sex causing RSI, kissing evolving to spread germs, cancer being caused by stopping smoking, masturbation being good for relieving a bunged up nose, or the origin of belly button fluff?

18 Mar 2010

Peer review in the dock

This is a guest post by Joe Dunckley
Academic publishing, and peer review in particular, was headline news in February -- from stem cell researchers claiming that their work was being sabotaged by reviewers with conflicts of interest, to mainstream news noticing the absurdity of the impact factor situation. BBC Radio 4 must have decided that now was a good time to air an unedited repeat of 2008's documentary Peer Review in the Dock. So now certainly seems like a good time to post an unedited repeat of my comments from the time.

--

A few thoughts on Peer Review In The Dock (this evening, Radio 4).

  1. Nobody has ever questioned whether peer review is really needed: wrong. A lot of people have questioned this, and many experiments have been tried. The most prominent recent example is probably PLoS ONE (no reference to this in the programme). They very rapidly discovered that, yes, a minimum standard of peer review is required when running a journal. But perhaps moving to a non-review model is like communism: you need to have world revolution for it to have any chance of working; going it alone will just lead to your own collapse.
  2. Peer-reviewers aren't trained: somewhat misleading. Reviewers, at least in the publishing model that I am familiar with, are actively publishing research scientists of at least medium seniority. Most will, while pursuing their doctorates, have participated in "journal clubs" (where the grad students get together to shred a published paper), and many will also have co-reviewed manuscripts alongside their supervisors (not strictly allowed, but very widespread). What all students certainly are trained to do, even at undergraduate level, is not to take the truth of published work for granted, and to watch for potential flaws. To teach science is to teach scepticism. Which brings me on to the next point...
  3. Reviewers aren't all that great at spotting errors: so what? Academics and publishers know this. The system is designed this way. Review is supposed to be a basic filter for sanity and competence; it is only journalists who hear "peer-reviewed" and think it is the definitive stamp of authenticity. Like democracy and trial-by-jury, it is not used because it works, but because it fails less disastrously than the alternatives. (Incidentally, their example of introducing deliberate errors to a paper and seeing who notices them is not entirely fair: most papers are not only reviewed by the journal's reviewers, but by the authors' colleagues before they submit the manuscript, and by editors before review.)
  4. The last part of the programme was devoted to publication bias. Publication bias is a big problem. But it has little, if anything, to do with peer-review, and everything to do with publisher policies and author dishonesty. The only conceivable connection it has with peer-review is that some people still mistakenly believe that negative results aren't worth publishing at all -- something that journals like BMC Research Notes and PLoS ONE, and initiatives like trial registration are explicitly tackling.
The programme explored what is an interesting issue in academic publishing at the moment (there are more interesting issues, of course), but, I think, from the wrong perspective. While it discussed many very real problems with the system, these problems are all well known and acknowledged; for decades people have explored solutions, and there are many interesting current developments. The makers of the programme seemed mostly unaware of these.

This is, of course, the limitation of having a half-hour national radio programme about a topic like academic publishing.

12 Aug 2007

A new way to find reviewers - the ouija board

Authors of manuscripts submitted to our journals can suggest potential peer reviewers.
A recent submitting author took advantage of this to suggest...

His former supervisor

Wait, it gets better....

His dead former supervisor

Wait, wait, it gets even better.....

His dead former supervisor, indicated with (deceased)

Guys, who last had the ouija board?

17 Jul 2007

13 ways to get your manuscript rejected

I've seen a couple of good guides to getting your work published in a peer reviewed journal. But how to ensure that you get it rejected?

1. Don't write in clear English. Hell, forget clear English, don't even write in English. Editors who insist on good English are probably just pining for the days of the Empire. The more incomprehensible the better. Ignore simple grammatical rules like the use of articles, and don't run a spell check. Spell check is for losers. Certainly don't get it copyedited - good lord, that'd just be throwing good money after bad.


2. Never cite prior work. Be like this correspondent to a physics journal*, who gaily admits that "The only time I access previous articles is when the referee forces me to". Oh joy.

3. Try and try again. So your work has been rejected several times over? Play the lottery of peer review, and eventually you'll slip it past the reviewers! Reviewers love it when they see an article for the fourth time, with none of their advice acted on. No, really, they do**. ***

4. Argue. Argue. Argue. The reviewers hate you; you hate the reviewers. Don't be diplomatic: let loose the vitriol. The editor won't mind, they'll obviously take your side. After all, who the hell do the reviewers think they are? Oh, you mean the editor picked them because they think that they're experts in the field? Then the editor's an idiot too!

5. Do you know who I am?! Editors are always delighted when an author points out their eminent qualifications in a rebuttal, while ignoring all the scientific substance behind the reasons for rejection**.

6. Use Word Art to brighten up your article**. It shows your playful side.

7. Go completely off the wall. Five dimensional alien brains?** Bring it on.

A typical day in the editorial office.
Image credit Shira Golding on Flickr, Creative Commons Attribution Non-Commercial 2.0
8. Ethics committee? What ethics committee? Oh, yeah, right, we've got an, er, ethics committee. What do you mean, it can't just be me, my dog, and my next door neighbour?!** You mean we actually had to ask the patients before we experimented on them!?!**

9. You're a hero. Patients adore you as their saviour and the scientific community are all paid lap-dogs of big pharma. You know what results you want, so what's a little data misrepresentation between friends?**

10. ID. The reviewers and editors won't mind if you slip just a little bit of Creationist terminology into the scientific peer-reviewed literature...**

11. Photoshop rules!!! Pesky band in the way? Just photoshop it! Transformation failed? Just photoshop it!**

12. Copy. Has someone else said it better than you ever could? Copy! Copy! Has someone else done the experiments better than you ever could? Definitely copy!

13. Don't support your conclusions. Who needs to spend hours preparing supporting data? Loser! It just takes a few quick keystrokes to write "Data not shown".

Be sure to also check out Horacio Plotkin's sage advice.

* Thanks to the Blog of the "Editor's Bookshelf" for helping me to find that letter again.
** Any resemblance of this blog post to real events or persons is, um, entirely coincidental.
*** Stop messing about and submit it to Biology Direct!

28 Jun 2007

Open peer review & community peer review

There has been a lot of discussion about 'open peer review' lately - this letter to Nature is just the latest example. With all these opinions and hypotheses about peer review flying around, I think that it is useful to make some distinctions between the different types of 'open' review, so here goes.

Traditional peer review. Anonymous reports received pre-publication. Many journals consider letters to the editor, but relatively few are published, especially in print journals. All the BioMed Central journals accept signed comments from readers.

Open peer review. Named, pre-publication review, which is how the BMC-series medical journals work, and the BMJ too. The difference is that the reviews are available for readers to see in the BMC-series medical journals, but the BMJ never made this move. Comments can also be posted by readers: the BMJ's Rapid Responses should be envied by any journal. It is controversial as some reviewers don't wish to be named, and it can make finding peer reviewers harder, but to anyone who doubts that open peer review works I can point out that the BMJ has published hundreds of peer reviewed articles since it introduced open peer review, and the medical journals in the BMC series have published thousands of peer reviewed articles since they launched in 2000. Open peer review can work.

Open and permissive peer review. This is Biology Direct's approach. Articles are published if they receive reviews solicited by the author from at least 3 members of the reviewing board (aside from pseudoscience, which the editors will veto), with the comments included at the end of the article, unless the author withdraws the manuscript. More here, and I discussed their approach in a previous post. Comments can be posted by readers, as with the other BioMed Central journals.

Community peer review. The idea of community peer review is to avoid peer review being the domain of a biased subset of the scientific community, and it has a powerful philosophy that "given enough eyeballs, all bugs are shallow". It can be either anonymous or named, and still happens before formal publication, but the difference is that reviewers volunteer rather than being selected by the editors. The manuscript is public while under review, but explicitly is not 'published' at that point. This was how Nature's experiment worked (or didn't work), but it was alongside the usual anonymous editorially selected reviews, and the comments don't seem to have been treated as 'proper' reviews by the editors.

Atmospheric Chemistry and Physics uses a similar approach, apparently with much more success than Nature. The editors refuse articles that don't meet minimal scientific standards, then post the remaining articles for 8 weeks of Interactive Public Discussion (named or anonymous), then publish the final version. There doesn't appear to be any mention of rejecting articles after the initial public posting, so this permissive peer review resembles a community version of Biology Direct.

The Journal of Interactive Media in Education uses named reports, and invites review from the community. The two-step process involves private, named review by invited reviewers, followed by publication of a preliminary version that is reviewed further by the community before final, formal publication.

Permissive peer review, post-publication commentary.
This is PLoS ONE's approach. They have minimal peer review, with the expectation that the scientific community will then comment on and annotate the articles. I was already a bit skeptical of the merits of minimal peer review, as are others, and now a Nature news story, among others, has attacked the publication of a study on HIV and circumcision in PLoS ONE, arguing that peer review failed in this case. Sending out an unbalanced press release written by the author seems to have compounded the problem, and wasn't very responsible. A lengthy response has been posted to the article, showing that post-publication review can work, but plenty of journals have the option to post comments, and the horse has already bolted.

No peer review, post-publication commentary.
This is how Philica works, and now Nature Precedings, part pre-print, part repository for preliminary work. I don't think that Philica is working; Nature Precedings will probably fare better. An essential difference is that while Philica is clogged with pseudoscience, Nature Precedings explicitly won't post pseudoscience, and it has the Nature brand name to help it gather interest and comments. I found an optimistically titled Web 2.0 Peer Reviewed Science Journal, which has a website but no articles. "This page that you are reading now is a review site, and I (Philip Dorrell) am the intended reviewer. If you, as an author of a scientific paper, are interested in having me review your paper, all you have to do is publish your paper as a web page, and then send an email". Hmm... sorry Philip, but peer review involves more than just your opinion on articles. Web 2.0 requires users and content.

BioMed Central is open access, PLoS is open access, the BMJ is open access, Nature Precedings is open access, and they are all experimenting with peer review. Matthew Falagas has commented in Open Medicine (the open access journal that arose out of the editorial dispute at the CMAJ), after spotting this pattern of a link between experimenting with peer review and open access. I think it is worth stating that despite this trend, open access and open peer review don't necessarily go together. The biology journals in the BMC-series still have anonymous review, as do the PLoS journals. The problem of access to an article is at a tangent to the problem of reviewing it - but, of course, community peer review can't work if not enough people have access to the article.

I think that if there is doubt in the integrity of peer review (and there is more and more doubt), this increases the imperative for exposing pre-publication review processes. Journals can't just be paternalistic or secretive about peer review, and readers shouldn't take it on trust that an article labelled as 'peer reviewed' has been rigorously critiqued by experts in the field. PLoS ONE is encouraging its reviewers to make their reviews public on the published article, which is a great step. Requiring reviewers to opt-out would be even stronger, but PLoS Medicine recently backed away from this policy.

If journals really want community peer review to work, we cannot just sit back and wait for comments to come in. Pre-publication peer review takes a massive effort on the part of editors to find qualified reviewers, and the chances of enough qualified reviewers stumbling across an article and feeling obliged to leave comments to make post-publication review viable and vibrant are low. Ways to solicit comments are essential, using email alerts for example. In a definite step in the right direction, PLoS ONE is organising virtual 'journal clubs'. Remember that anyone who has had a face-to-face journal club at their institute about a BioMed Central article, or a BMJ article, or a PLoS article, can and should post the results of the discussion as a comment on the article.

I think that open peer review and community peer review are the future of assessing scientific articles. It doesn't stop there - I've not even mentioned wikis!

28 Apr 2007

What to do about late peer reviewers?

Editors and authors are left in the lurch when reviewers are late in returning their reports or even fail entirely to return a report. Although reviewers are usually volunteers, they have made a promise to the journal and their scientific colleagues, and the failure to return a report can greatly lengthen and complicate the review process.

Marc Hauser and Ernst Fehr, writing in PLoS Biology, have an idea of how to remedy this. "Reviewers that turn in their reviews late are punished, whereas those that arrive on time are rewarded". They suggest that "for every day since receipt of the manuscript for review plus the number of days past the deadline, the reviewer's next personal submission to the journal will be held in editorial limbo for twice as long before it is sent for review" and "for every manuscript that a reviewer refuses to review, we add on a one-week delay to reviewing their own next submission".
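Reading their proposal literally, the sanction is easy to compute. Here is a minimal sketch of the arithmetic as I understand it from the quotes above; the function name and structure are my own illustration, not anything from the paper:

```python
def submission_delay_days(days_held, days_late, refusals):
    """Sketch of the Hauser & Fehr sanction, as quoted:
    the reviewer's next submission is held for twice the sum of
    (days since receipt of the manuscript + days past the deadline),
    plus one week per review invitation refused."""
    return 2 * (days_held + days_late) + 7 * refusals

# A reviewer who held a manuscript for 30 days, returned it 9 days
# late, and refused one other invitation would wait:
# 2 * (30 + 9) + 7 * 1 = 85 days
print(submission_delay_days(30, 9, 1))
```

Even under these generous assumptions, a conscientious reviewer who returns a report on time still incurs a delay (twice the days the manuscript was held), which illustrates how punitive the scheme would be in practice.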

I hate when reviewers are late, and it would be immensely satisfying to take revenge on them by snarling up their own submitted manuscripts, but I'm not sure that this is a workable system. It is true that journals can track the timeliness and helpfulness of reviewers - we do this at BioMed Central, so technical feasibility is not my objection.

Publishing is not a game; the aim is to get research checked and, if sound, published as quickly as possible. Deliberately adding in delays and checks both adds costs and impedes science. A publisher that started imposing such sanctions might lose those reviewers as both reviewers and authors, and introduce even more antagonism into what can already be a fraught process. This system would hit senior and well-known researchers the hardest, as they are most often asked to review and they simply can't agree to review every article sent to them. Reasons for declining would need to be distinguished - is someone who suggests a qualified and keen colleague deserving of sanction? What if they were inappropriate, or didn't even receive the email as it landed in their spam filter? What if they had a genuine reason that they were unable to return the report, such as the several reviewers in New Orleans at the time of Hurricane Katrina who were somewhat understandably late with their reports, or those suddenly falling ill or with a family emergency? What if they needed more time to complete a thorough reanalysis, as one of our editorial board members did recently?

What can we do instead? We already have a reviewer discount, such that reviewers who return their reports on time to a journal within the BMC series are entitled to a 20% discount on the article processing charge the next time (within a year) that they are the submitting author of a manuscript submitted to a journal in the series (i.e. BMC Bioinformatics, BMC Cancer etc.).

Editors can be ruthless. If an agreed reviewer is late, we may well make a decision without them if we already have reports in from other reviewers - if a reviewer doesn't want their time in reading the manuscript wasted and their opinions ignored, they should get the report in on time.


Fostering a good relationship with reviewers and authors helps. Authors who receive timely reviews will feel inclined to review quickly themselves. Authors who don't may well refuse to review until they have received a decision, the flip side of Hauser and Fehr's proposal. Equally, authors and reviewers need to remember that editors are human too. We sometimes receive a level of bad-tempered abuse from authors that, if we dished it back out, would get us fired. A way to remind reviewers that we're human is to phone them - email can give the false impression that journals are run by robots, although we do find that email reminders can be very effective. A look at the statistics for the times that reviews are returned shows that most are returned within a day of us sending a reminder email letting the reviewer know that their report is due within three days.

If we respect each other and agree with the aim of efficiently and effectively assessing scientific research, we're all better off. I'm not sure that penalties are the best way to achieve this.

10 Mar 2007

Tags track growth in open access (and a dig at Philica)

An interesting observation: Tags Indicate That Open Access Is Flourishing. Comparing the growth in Connotea tags for "Internet" to the growth in tags for "Open access", the growth in tags for "Open access" is significantly higher.

On the topic of tags, Matt Cockerill on the BMC Blog discusses the tags used to tag BioMed Central articles on CiteULike. BioMed Central is working with CiteULike and is keen to capture the power of "Web 2.0".

So, Philica. The above observation by James Till on Philica seems much more suitable for a blog than an academic journal. Philica has yet to prove itself to be a serious academic journal -
much of the content seems to be trivial or pseudoscience, e.g. this, or this, or this.

Philica brings the much touted idea of community peer review to fruition, leaving peer review entirely to the readers. It has what I believe to be two fatal flaws. Firstly, it has absolutely no editorial selectivity and the requirement that authors be based at an academic institution has proven to be no barrier to junk being submitted. Not many researchers have the time to spend critiquing palpable nonsense. Secondly, it has no mechanism to solicit reviews from experts - only those who stumble across Philica will read the articles, and they probably won't feel at all obliged or inclined to review. Peer review survives because editors deliberately select those they believe are best placed to comment, and sometimes hound them for a review - remove a selection process, and it would collapse.

I'm not the only one to be skeptical about Philica. It's a great idea (I sketched out a similar idea last year and bored my colleagues with it at the pub), but it's not working.

27 Jan 2007

A case study in open peer review

Last year, BMC Anesthesiology published an article by Andrew Vickers and colleagues on the use of acupuncture for the pain caused by thoracotomy; it was a pilot study to examine whether a randomized controlled trial (RCT) was feasible. Thoracotomy is surgery to the chest to allow access to the lungs. Andrew Vickers is on the editorial boards for several of our journals, a respected statistician and trial methodology expert with an interest in testing complementary medicines. It was unlikely when he submitted the work that there would be any fatal flaws. However, we don't allow submissions from our editorial boards to escape peer review - and I've seen many manuscripts from editorial board members fail to pass muster.

Who would be suitable to review the manuscript? It's about acupuncture, so acupuncturists, right? Well, partly.

If we were to ask only acupuncturists to review, there are two potential drawbacks. Acupuncturists believe in the efficacy of acupuncture, otherwise they probably wouldn't be acupuncturists. If there were flaws in the study they might be inclined to give it the benefit of the doubt, whereas someone without a vested interest in the intervention under study might raise objections. The other issue is that they might be unfamiliar with pain relief in this particular setting. On the other hand, were we to only approach those who had never used acupuncture and who were otherwise experts in pain relief we would face two potential biases. For one, they might be skeptical that acupuncture works at all and thus be too picky, raising unreasonable objections to block publication. Another consideration is that they might themselves have a vested interest in the drugs used to relieve post-surgical pain, perhaps having received speaking fees or consulted for pharmaceutical companies.

To ensure rigorous and fair review, we needed someone who was familiar with acupuncture for pain relief, preferably with an additional experience of either randomized controlled trials, systematic reviews, or anesthesia for surgical interventions. Although it is not itself an RCT, the purpose of the trial was as a pilot to see if an RCT was feasible, and those who have conducted systematic reviews will have a good knowledge of critical appraisal. Secondly, we needed someone familiar with post-thoracotomy pain relief other than acupuncture, preferably with a knowledge of randomized controlled trials. Lastly, we needed someone with a familiarity with randomized controlled trials and anesthesiology for post-surgical pain, were either of the other two not themselves familiar with it.

BMC Anesthesiology is open access, but it is unusual among journals in another way. It has open peer review, as do all the medical journals in the BMC-series. By this we mean that the reviewers consent to their names being made known to the authors, and to their reports being made public if the manuscript is accepted for publication. Open peer review allows me to sweep back the curtain and reveal the peer review process, like a magician flouting the secrecy of the Magic Circle.

Among our reviewers were two complementary and alternative medicine experts, Betsy Singh and Hugh MacPherson. Betsy Singh has published on the use of several complementary medicines, including a systematic review of acupuncture for pain relief. Hugh MacPherson has published on ways to ensure the safety and accurate reporting of acupuncture trials, and is familiar with randomized controlled trials. We also had two reviewers who are researchers of pain relief, Deniz Karakaya and Jorge Dagnino. Deniz Karakaya is a thoracic surgeon who has published on the use of conventional anesthetics in various procedures, including for post-thoracotomy pain. Jorge Dagnino has also published on the use of anesthetics for post-surgical pain, including thoracotomy. We had the full house.

What criticisms or points did they raise? Dr Singh had no complaints, and it isn't surprising to see a reviewer of a manuscript by Dr Vickers say this. Dr MacPherson was well-disposed to the study, but raised several points where the authors could better report details of their work, or better justify a statement. Dr Karakaya had only two relatively minor criticisms, asking for more detail on their procedures. Dr Dagnino raised the most objections, requiring many more methodological details, and questioning some of the conclusions. It is interesting to see that the two reviewers who asked the authors to make the most corrections were an acupuncturist and a traditional anesthesiologist. Both types of researchers applied their skills of critical appraisal to help the authors improve their work. Upon re-review, Drs Karakaya and Dagnino had some remaining questions, and the editorial staff determined that the authors' response to this second round of review was satisfactory and we proceeded with publication.


Although the study is limited in its scope and conclusions, inevitable for a pilot, uncontrolled study in only 36 patients, and although many journals could easily have dismissed it as not interesting enough to publish, we thought it necessary and valuable to have it properly assessed, and we obtained the advice of four reviewers who together were qualified to judge all the main aspects of the work. Judging soundness (technical validity) isn't easy - it is more difficult than measuring the level of interest. It isn't that uninteresting though: more than 3,000 readers have accessed the article from our website in the past 10 months, and more will have read it on PubMed Central - a level of interest for which many blog writers would willingly give up a kidney.

26 Jan 2007

No longer talking to the ether

One week into blogging, and Peter Suber at Open Access News has picked up my reply to the 10 problems with peer review. I'm not just talking to the ether now.

24 Jan 2007

Response to '10 Problems with the Peer-Review Publishing Process'

Kevin Dewalt's blog post of 19 January includes 10 criticisms of peer review. I've posted a comment on his blog with my response to each of the points, but I'll copy them here as well.

Kevin's original points are italicised, and I've made a couple of additional comments since I replied on his blog that are indicated by square brackets. I hope I've corrected some misconceptions about peer review.

---
1.
Unstated real or perceived conflicts of interest. Reviewers and authors can have relationships with entities that have an ulterior motive in getting material published.

True, but many journals, such as mine, require authors and reviewers to declare their competing interests - in our medical journals, these interests are published with the article. Editors are used to watching out for this.
---
2.
Peer-review process advances slower than scientific progress.

Yes, but peer review doesn't stop someone first posting their article on their own website, discussing their work at conferences, or posting it to a pre-print server like arXiv. Anyway, scientific progress isn't as rapid as people believe, and without the checks and balances that peer review provides, all sorts of rubbish would be published, and scientists would have to follow even more blind alleys than they already do after reading profoundly flawed research. Peer review adds rigour to the process of communicating scientific research. Less haste, more speed is an apt maxim here.
---
3.
The current process does not provide authors and reviewers with basic collaborative web tools.

That's nothing to do with peer review, just the delays in the Web 2.0 revolution getting to publishers. PLoS ONE (published by Public Library of Science, another OA publisher) does now offer reviewers and authors interactive tools to annotate articles. Many journals, like mine and the
BMJ, allow any reader to comment on a published article.
---
4.
Authors lose copyright privileges when publishing yet are often forced to publish to continue career advancement.

Traditional journals insist on copyright transfer. Many open access journals, including those published by BioMed Central and PLoS, allow the authors to retain copyright. The article is published under a Creative Commons Attribution License.
---
5.
Peer-review networks tend to form around cliques. Those “outside the club” of a particular discipline - where often the best ideas surface - cannot get published because new ideas are rejected by the current establishment. As a result great ideas are often lost.

I don't believe this complaint is really valid. The complaints I've read were by top scientists who couldn't get their idea published in Nature, Cell or Science. Well, just publish it elsewhere. There are plenty of journals that aren't as picky, and if authors had a little more self-awareness they'd recognise that they sometimes aim too high. Besides, many journals don't use established lists of reviewers, but go straight to those publishing related work and ask them. So, yes, you usually have to be a published scientist to review, but then it is called *peer* review, isn't it? I doubt that "the best ideas" surface outside academic research; the lone researcher is more likely to be a kook than a genius. There are some geniuses out there, but they are the ones you read about in the news - there's a teensy bit of selection bias going on...
---
6.
Precedence is often established by those with the best personal contacts and not those who first introduce new theories.

I don't see the basis for this argument. Precedence does go to those who first raised a theory, so long as scientists are aware of it [this is the idea of 'priority']. Those who publish in languages other than English are at a disadvantage, admittedly, but some journals allow republication of work in English that was previously published elsewhere in another language, so that gives authors the possibility to widen their audience. Peer reviewers go out of their way to alert authors to work that first demonstrated something, and I have also insisted that authors cite certain studies. Scientists are very attuned to giving due credit for the origin of ideas or techniques.
---
7.
There is no medium for wider, instant dissemination. Doctors or researchers who prepare a presentation or speech cannot “publish it” to a wider audience.

Yes, they can. ArXiv and other pre-print servers allow the publication of non-reviewed work (see e.g.
Public Knowledge Project). Theses and dissertations can be published electronically (e.g. NDLTD, MIT on DSpace). This Portuguese university repository, for example, allows the publication of reports, presentations, etc. If a university doesn't have a repository for this kind of material, then it should! Staff and students can take the lead, rather than waiting for journals to do it for them - journals are traditionally for peer-reviewed research, so why would we expect them to post presentations? That said, Nature has recently launched Nature Protocols, so publishers are making some effort to include material outside their usual range.

8.
Participating in the review process has little benefit for the reviewer. Performing reviews can take an enormous amount of time and the written reviews are not themselves “published”.

Reviewing takes between 2 and 6 hours, according to a survey I read [an average of 3 hours]. I've seen reviews done in 10 minutes...
Here are a few reasons for participating in peer review:
- Allows a researcher control over what is published in their field - they are the "gatekeepers of science".
- Allows researchers to ensure that what is published accurately reflects and acknowledges their field.
- Can help a scientist get promoted and get grants, as journals often list the names of their reviewers annually.
- In the case of the medical journals in the
BMC-series, published by BioMed Central, we *do* publish the reports, along with the name of the reviewer.
- Reviewers are actually paid by a small minority of journals [the
BMJ pays £25], and more commonly can get other perks such as discounts on reading or publishing in the journal.
- Reviewers get the opportunity to read their competitors' work months before it will be published, and unscrupulous reviewers can deliberately block publication.
- It's interesting! They're scientists, they enjoy critiquing science!

9.
Reviews and reviewers are not “reviewed”. An author who receives a biased review or one based on poor critical thinking has no recourse to publicly respond or invite others to comment.

Not true. Editors assess the reviewer reports and qualifications. Authors who receive what they perceive to be a biased review can appeal to the editor, and request a further opinion. If they are badly treated and the journal is a member of the Committee on Publication Ethics (such as BioMed Central,
BMJ, Lancet) then authors can even take a case to that body [currently only editors can submit a case, but often will in cases of a dispute]. Some journals (BMJ, Lancet) have an ombudsman.

10.
Journals can be prohibitively expensive for some in the developing world.

Yes - this is one of the reasons why open access is a good idea! The research is free to read for anyone with Internet access. Traditional pay-to-view journals are also members of a scheme called HINARI, a WHO project that allows some people in developing countries to read the research for free (but it does have limitations, as they need to be connected to an institution).

22 Jan 2007

Does peer review work?

There are now a reasonable number of studies from journals such as the BMJ and JAMA on the factors affecting peer review. For example, we know from a piece of work done by my colleagues that while author-suggested reviewers appear to return reports of equal quality to editor-suggested reviewers, they are significantly kinder to the authors in their recommendations on publication.

One of those authors, Pritt Tamber, regularly makes clear his belief that peer review doesn't work, most recently arguing in a
BMJ Rapid Response that "Much of the research conducted at the BMJ [...] showed that there is little or no objective value to the process, yet journals and their editors persist with—and advocate—peer review; their only defence is that "there's nothing better," even though few have tried to find an alternative".

As Pritt notes, one alternative is the system used by Biology Direct, published by BioMed Central. The idea is that authors obtain reviews from three members of the reviewing board. If the author cannot find three members of the board who agree to review (or to solicit an external review themselves), the manuscript is considered rejected. If they can get three reports, then the manuscript will be published, no matter what the reviewers say. The twist is that the comments of the reviewers are included at the end of the manuscript, as an integral part of it, and signed by the reviewers. The authors can make revisions if they wish, or even withdraw the manuscript, but equally they can ignore the comments and publish despite them - in the knowledge that readers will be able to see the reviewers' dissent. Other alternatives include the community peer review being tried by Philica, PLoS ONE and Nature (Nature's experiment appears to have been unsuccessful, but that is no reason to write off the idea). More journals, publishers and researchers need to go out on a limb to explore new and better ways to assess and critique scientific research.

Before we go too far with condemning peer review, it is worth remembering that without an evidence base, we won't be able to work out where peer review works, where it doesn't and why, and how to improve it.

Much of the research done into the effects of peer review has been, in my opinion at least, quite superficial. Reading it has really only told me what I knew already from working as an editor.

My wish-list for studies of peer review is:

  1. Creating a metric of "on-topicness" that editors can use to assess how relevant a reviewer's expertise is to a piece of research or an aspect of that work. This could be by simple similarity analyses, comparing their PubMed abstracts to the abstract of the submitted manuscript, or by more complicated semantic analyses.
  2. Comparing manuscripts that were accepted with those rejected to examine the predictive factors. Some such studies have been done, but the analyses always strike me as simplistic. The sample size needs to be greater, and the journals chosen need to be less highly selective - is it really that interesting to see the factors that influence publication in journals like Nature, The Lancet or NEJM? What I really want to see are the factors that affect whether a study is ever published in a reputable journal.
  3. A side-by-side comparison of published articles with the original submitted version (before any peer review in any journal). This could be done by a paid panel who would be able to spend the time to do an in-depth analysis; an alternative would be to invite journal clubs at universities worldwide to analyse manuscripts in this way (a sort of SETI@home for journalology). Did peer review noticeably improve the work?
  4. Examine the fate of articles rejected by journals. Several studies of this nature have been conducted, but they mainly focus on the journal an article is eventually published in and that journal's Impact Factor. Why not examine whether any changes were made after rejection? What about whether the rejected work is cited and read? Would a panel and journal clubs agree that the work is now sound, even if it might be uninteresting?
  5. Compare the ability of different editors to assess a manuscript and select appropriate reviewers under time pressure, pitted against some of the new semi-automated tools available, like etBLAST. This would be like a peer review Olympiad.
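The "on-topicness" metric in item 1 could be prototyped very simply: represent each abstract as a bag of words and take the cosine similarity between their term-frequency vectors. Here is a minimal sketch in Python - the function names and example abstracts are my own inventions for illustration; a real system would draw on PubMed abstracts and likely use TF-IDF weighting or the more complicated semantic analyses mentioned above:

```python
import math
from collections import Counter

def tokenize(text):
    # Lower-case and keep purely alphabetic tokens
    return [t for t in text.lower().split() if t.isalpha()]

def cosine_similarity(text_a, text_b):
    """Cosine similarity between the term-frequency vectors of two texts."""
    va, vb = Counter(tokenize(text_a)), Counter(tokenize(text_b))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Invented example abstracts, purely for illustration
submission = "acupuncture for postoperative pain after thoracotomy"
reviewer_a = "randomised trial of acupuncture for chronic pain relief"
reviewer_b = "mass spectrometry of membrane protein complexes"

print(cosine_similarity(submission, reviewer_a))  # higher: shared vocabulary
print(cosine_similarity(submission, reviewer_b))  # lower: little overlap
```

Crude as it is, even a score like this would let an editor rank candidate reviewers by relevance rather than relying on memory and habit.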
It is tough to design and conduct good studies to examine peer review, but editors need to make the effort, else skeptics like Pritt will have a point. Now, just as soon as I have some spare time...

21 Jan 2007

Why reviewers decline, and paying for peer review

"Reviewers are more likely to accept to review a manuscript when it is relevant to their area of interest. Lack of time is the principal factor in the decision to decline". This nugget comes from a study by Sara Schroter of the BMJ (Tite L, Schroter S: Why do peer reviewers decline to review? A survey. J Epidemiol Community Health 2007, 61(1):9-12).

Well, blow me down with a feather - it had never occurred to me that reviewers might decline if they were off-topic or busy...

What is more interesting is that their respondents doubt that paying reviewers would make them more likely to review when their time was constrained, and they suggest various other non-financial perks for reviewers, such as being publicly acknowledged, or joining the editorial board.

I'm not sure that I agree that payment would fail to act as an incentive, but I do have doubts about journals moving to making payments. Those journals that do pay reviewers, as the Lancet sometimes does, find it easier to get people to agree than those that don't. This is confounded by the prestige of the Lancet, but the extra cash can't harm their chances. What the respondents appear to have forgotten is that every reviewer is regularly asked to review by several journals. If one routinely offers financial compensation and the others don't, the paying journal will be the more attractive choice. Once enough journals began to pay reviewers, those that didn't would notice their declining success rate and feel the need to switch to paying (this is classic game theory). Payment would no longer help journals obtain reviewers more easily than their competitors, but no journal could opt out for fear of losing reviewers.

Another issue with paying reviewers is that quite often reports are returned late, and may be of low quality. Payment could be tied to the report being delivered on time, but if reviewers were used to receiving payment, the incentive to then return the report once already late and without payment would be diminished. Payment could be tied to review quality, but using the Review Quality Instrument on every report would be laborious, and from speaking to someone who has used this rating tool it appears to be less than perfect. Currently editors send invitations to some reviewers who reply that they are off-topic or not qualified to review. Would the promise of payment fog the memory of some as to whether they were a suitable reviewer?

The payment of reviewers is also connected to a promise to fast-track. The
Journal of Medical Internet Research offers authors the option of paying a fast-track fee (currently $350), part of which is used to pay reviewers to return reports rapidly. My concern about such a promise of speed is that it conflicts with an editor's job of ensuring a high-quality review process. Although the standard number of reviewers is two, editors often invite more than two reviewers at a time, so more may agree to review. Normally an editor will be glad of the extra advice (authors may be less keen). If each peer reviewer needs to be paid, or a very rapid decision needs to be made, editors will be less inclined to keep the extra reviewers. And if there is a need to seek further advice to resolve a particular issue, an editor might simply reject the manuscript rather than pay for, and wait for, another report.

The House of Commons Science and Technology Select Committee when reporting on open access in 2004, stated their belief that “the introduction of modest incentives for peer reviewers is an imaginative way of rewarding the contribution of peer reviewers to scientific endeavour. By carrying out reviews, researchers add value to the services provided by publishers. Whilst it would be inappropriate to pay reviewers personally, some recognition, made to their department, of the value of their contribution would be welcomed, particularly in view of the fact that many researchers are paid from public funds". I agree that, if it comes to it, a payment to the reviewer's institution would be preferable to direct payments to individual reviewers.

18 Jan 2007

Peer review lite at PLoS ONE?

PLoS ONE, the 'Open Access 2.0' journal trumpeted by the Public Library of Science, launched late last year. Editors and reviewers often make arbitrary decisions about importance, in a chase for the Impact Factor. The idea of removing the need for journals to select the most 'important' science, and instead concentrating on publishing solid, sound science, is a good one. This is a philosophy already followed by the BMC-series journals, which I'm involved with, although we do frequently reject articles on the basis that they present no advance in the field. To assess the soundness of a manuscript you usually need at least two experts to judge the topic, methods and statistics, and when the journal was announced I was genuinely puzzled as to how PLoS ONE would run its peer review any differently from other journals. There has been debate in the blogosphere by some who were under the impression that PLoS ONE was doing away with peer review entirely. An example I have come across gives me cause to worry that, rather than focussing on conducting solid peer review, the system PLoS ONE uses will indeed sometimes scrimp on it.

One of the really interesting features of PLoS ONE is the annotation and discussion system. It isn't the first journal to allow readers to post comments, but it is the first to allow them to attach comments to a certain part of the published article, like a post-it note. There is a list of the Most Annotated articles, and on the day I looked one of these was
A Large Specific Deterrent Effect of Arrest for Patronizing a Prostitute.

Alongside a comment discussing the use of the term "prostitute" is the Academic Editor's viewpoint. PLoS ONE may be experimental, but it does not have open peer review (i.e. named, rather than anonymous, reviewing, with the reports published) as standard practice, although it does name the Academic Editor for each article. The editor commented that "Although this manuscript was quite far from my own field of expertise, I accepted to act as academic referee for this manuscript because I felt that it was important that this type of manuscript should be published in a open access mode, and that the possibility for further discussions offered by this new journal would be very positive. Although I am reasonably confident that the scientific content and the statistics performed have been conducted appropriately, this does not mean to say that I condone all that this manuscript contains".

This brought me up short. I handle the peer review of articles on which I am not an expert, but I never make a decision to publish based only on my assessment of the manuscript. This is what peer review is for. The Academic Editor was Etienne Joly, an immunogeneticist who is also a strong supporter of open access - he is also an editorial board member for Biology Direct, another experiment in peer review published by BioMed Central.

I respect Dr Joly, but there is something worrying about an article being accepted after only being assessed by someone who is not a peer of the authors. I'm not sure that an immunologist can assess the conduct and reporting of public health/social science research. This isn't peer review, it is editorial selection. Indeed, PLoS ONE states that:
"AEs [Academic Editors] can employ a variety of methods to reach a decision in which they are confident:

Based on their own knowledge and experience;
Through discussion with other members of the editorial board;
Through the solicitation of formal reports from independent external referees".

Chris Surridge, the Managing Editor of PLoS ONE, has said that "When papers are submitted they get assigned to one of these editors based on the content of the paper and the editor’s specific areas of expertise". Now, whilst the board of Academic Editors is impressive, it is still only 200 people. As PLoS ONE has ambitiously opened itself to submissions from across all of science, not just biology and medicine, not every submission can find an Academic Editor who is an expert. If an Academic Editor is pressed for time (as most academics are), might they not take the easy route and attempt to assess a manuscript they are not qualified to judge themselves, rather than embarking on the process of selecting external peer reviewers?

Etienne Joly went on to say in his viewpoint that "
I have little doubt that this subject will lead to active debates. But this is exactly what PlosOne is about: Open Acess, and open discussions". PLoS ONE appears to be genuinely aiming to replace the pre-publication review process with "community-based open peer review", while at the same time not quite admitting this publicly, arguing
that "the pre-publication assessment of papers is definitely ‘peer-review’". What concerns me most about the discussion of peer review surrounding the launch of PLoS ONE is the perception that 'only' assessing the technical quality of a manuscript is somehow easy. It's not. If you don't have the fallback option of claiming that something is "out of scope", "not of interest to our readers", or "more suited to a more specialized journal", then the job of assessing manuscripts actually gets harder, not easier.

Editorial selection is the process already used by the Elsevier journal Medical Hypotheses, which states boldly that "
The editor sees his role as a 'chooser', not a 'changer': choosing to publish what are judged to be the best papers from those submitted". It has been said of the journal that it "exists to let people publish their craziest ideas". I would not imagine that this is a reputation that PLoS ONE hopes to emulate.
---

The most annotated article on PLoS ONE (aside from the testing 'Sandbox') now has 10 comments. Wonderful! However, they are all from an author of the article, linking out to external resources such as PubChem. Likewise, 5 of the 6 comments on the next most annotated article are links or notes added by the author. An annotation on another article is a note about the correct orientation of a figure. Is that not the sort of thing that is integral to the manuscript and should have been corrected in the production process?