Peer review in the dock
This is a guest post by Joe Dunckley
Academic publishing, and peer review in particular, was headline news in February -- from stem cell researchers claiming that their work was being sabotaged by reviewers with conflicts of interest, to mainstream news noticing the absurdity of the impact factor situation. BBC Radio 4 must have decided that now was a good time to air an unedited repeat of 2008's documentary Peer Review in the Dock. So now certainly seems like a good time to post an unedited repeat of my comments from the time.
--
A few thoughts on Peer Review in the Dock (this evening, Radio 4).
- Nobody has ever questioned whether peer review is really needed: wrong. A lot of people have questioned this, and many experiments have been tried. The most prominent recent example is probably PLoS ONE (no reference to this in the programme). They very rapidly discovered that, yes, a minimum standard of peer review is required when running a journal. But perhaps moving to a non-review model is like communism: you need to have world revolution for it to have any chance of working; going it alone will just lead to your own collapse.
- Peer-reviewers aren't trained: somewhat misleading. Reviewers, at least in the publishing model that I am familiar with, are actively publishing research scientists of at least medium seniority. Most will, while pursuing their doctorates, have participated in "journal clubs" (where the grad students get together to shred a published paper), and many will also have co-reviewed manuscripts alongside their supervisors (not strictly allowed, but very widespread). What all students certainly are trained to do, even at undergraduate level, is not to take the truth of published work for granted and to watch for potential flaws. To teach science is to teach scepticism. Which brings me on to the next point...
- Reviewers aren't all that great at spotting errors: so what? Academics and publishers know this. The system is designed this way. Review is supposed to be a basic filter for sanity and competence; it is only journalists who hear "peer-reviewed" and think it is the definitive stamp of authenticity. Like democracy and trial-by-jury, it is not used because it works, but because it fails less disastrously than the alternatives. (Incidentally, their example of introducing deliberate errors to a paper and seeing who notices them is not entirely fair: most papers are reviewed not only by the journal's reviewers, but also by the authors' colleagues before they submit the manuscript, and by editors before review.)
- The last part of the programme was devoted to publication bias. Publication bias is a big problem. But it has little, if anything, to do with peer review, and everything to do with publisher policies and author dishonesty. The only conceivable connection it has with peer review is that some people still mistakenly believe that negative results aren't worth publishing at all -- something that journals like BMC Research Notes and PLoS ONE, and initiatives like trial registration, are explicitly tackling.
This is, of course, the limitation of having a half-hour national radio programme about a topic like academic publishing.