22 Jan 2007

Does peer review work?

There are now a reasonable number of studies, from journals such as the BMJ and JAMA, of the factors affecting peer review. For example, we know from a piece of work done by my colleagues that while author-suggested reviewers appear to return reports of equal quality to editor-suggested reviewers, they are significantly kinder to the authors in their recommendations on publication.

One of those authors, Pritt Tamber, regularly makes clear his belief that peer review doesn't work, most recently arguing in a BMJ Rapid Response that "Much of the research conducted at the BMJ [...] showed that there is little or no objective value to the process, yet journals and their editors persist with—and advocate—peer review; their only defence is that 'there's nothing better,' even though few have tried to find an alternative".

As Pritt notes, one alternative is the system used by Biology Direct, published by BioMed Central. The idea is that authors obtain reviews from three members of the reviewing board. If the author cannot find three members of the board who agree to review (or who will themselves solicit an external review), the manuscript is considered rejected. If they can get three reports, the manuscript will be published, no matter what the reviewers say. The twist is that the reviewers' comments are included at the end of the manuscript, as an integral part of it, and are signed. The author can revise the manuscript if they wish, or even withdraw it, but equally they can ignore the comments and publish regardless, knowing that readers will be able to see the reviewers' dissent. Other alternatives include the community peer review being tried by Philica, PLoS ONE and Nature (Nature's experiment appears to have been unsuccessful, but that is no reason to write off the idea). More journals, publishers and researchers need to go out on a limb to explore new and better ways to assess and critique scientific research.

Before we go too far in condemning peer review, it is worth remembering that without an evidence base we won't be able to work out where peer review works, where it doesn't and why, and how to improve it.

Much of the research into the effects of peer review has been, in my opinion at least, quite superficial. Reading it has really only told me what I already knew from working as an editor.

My wish-list for studies of peer review is:

  1. Creating a metric of "on-topicness" that editors can use to assess how relevant a reviewer's expertise is to a piece of research, or to an aspect of that work. This could be done by simple similarity analyses, comparing the abstracts of a reviewer's PubMed-indexed papers to the abstract of the submitted manuscript, or by more complicated semantic analyses (a minimal sketch of the simple approach follows this list).
  2. Comparing manuscripts that were accepted with those that were rejected, to examine the predictive factors. Some such studies have been done, but the analyses always strike me as simplistic. The sample sizes need to be greater, and the journals chosen should be less highly selective - is it really that interesting to see the factors that influence publication in journals like Nature, The Lancet or NEJM? What I really want to see are the factors that affect whether a study is ever published in a reputable journal.
  3. A side-by-side comparison of published articles with the originally submitted versions (before any peer review at any journal). This could be done by a paid panel, who would be able to spend the time on an in-depth analysis; an alternative would be to invite journal clubs at universities worldwide to analyse manuscripts in this way (a sort-of Seti@Home for journalology). Did peer review noticeably improve the work?
  4. Examining the fate of articles rejected by journals. Several studies of this nature have been conducted, but they mainly focus on the journal in which a rejected article is eventually published and that journal's Impact Factor. Why not examine whether any changes have been made since rejection? What about whether the rejected work is cited and read? Do a panel and journal clubs agree that the work is now sound, even if it might be uninteresting?
  5. Comparing the ability of different editors to assess a manuscript and select appropriate reviewers under time pressure, pitted against some of the new semi-automated tools available, such as etBLAST. This would be like a peer review Olympiad.
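
To illustrate the simple end of the spectrum mentioned in item 1, here is a minimal sketch of an "on-topicness" score computed as the cosine similarity between bag-of-words profiles of a reviewer's abstracts and a submitted abstract. The function names and the toy abstracts are my own inventions for illustration; a real system would use proper TF-IDF weighting or a semantic model, and would pull abstracts from PubMed rather than hard-coding them.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lower-case and split on runs of letters; a real system would also
    # remove stop words and stem or lemmatize.
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(counter_a, counter_b):
    # Cosine of the angle between two bag-of-words frequency vectors.
    shared = set(counter_a) & set(counter_b)
    dot = sum(counter_a[w] * counter_b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in counter_a.values()))
    norm_b = math.sqrt(sum(v * v for v in counter_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def on_topicness(reviewer_abstracts, manuscript_abstract):
    # Pool all of the reviewer's abstracts into one word-frequency profile
    # and compare it with the abstract of the submitted manuscript.
    reviewer_profile = Counter()
    for abstract in reviewer_abstracts:
        reviewer_profile.update(tokenize(abstract))
    manuscript_profile = Counter(tokenize(manuscript_abstract))
    return cosine_similarity(reviewer_profile, manuscript_profile)

# Toy example with invented abstracts.
reviewer = [
    "Randomised controlled trial of blinded peer review in general medical journals.",
    "Observational study of reviewer recommendations and editorial decisions.",
]
manuscript = "Effect of author-suggested reviewers on recommendations for publication."
print(round(on_topicness(reviewer, manuscript), 3))
```

A score near 1 would indicate heavy vocabulary overlap between the reviewer's past work and the submission; even a crude measure like this could be calibrated against editors' own judgements of relevance.
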
It is tough to design and conduct good studies to examine peer review, but editors need to make the effort, else skeptics like Pritt will have a point. Now, just as soon as I have some spare time...
