28 Apr 2007

What to do about late peer reviewers?

Editors and authors are left in the lurch when reviewers are late in returning their reports or even fail entirely to return a report. Although reviewers are usually volunteers, they have made a promise to the journal and their scientific colleagues, and the failure to return a report can greatly lengthen and complicate the review process.

Marc Hauser and Ernst Fehr, writing in PLoS Biology, have an idea of how to remedy this: "Reviewers that turn in their reviews late are punished, whereas those that arrive on time are rewarded". They suggest that "for every day since receipt of the manuscript for review plus the number of days past the deadline, the reviewer's next personal submission to the journal will be held in editorial limbo for twice as long before it is sent for review" and "for every manuscript that a reviewer refuses to review, we add on a one-week delay to reviewing their own next submission".
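As a back-of-the-envelope sketch of their scheme (my own illustration in Python; the function name and the numbers are mine, not the authors'):

    def review_delay_penalty(days_held, days_late, refusals):
        # My reading of Hauser and Fehr's proposal: hold the reviewer's own
        # next submission for twice the total of (days the manuscript was
        # held + days past the deadline), plus one week per refused request.
        return 2 * (days_held + days_late) + 7 * refusals

    # A reviewer who held a manuscript for 30 days, 10 of them past the
    # deadline, and refused one other request would wait an extra 87 days.
    print(review_delay_penalty(days_held=30, days_late=10, refusals=1))  # 87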

I hate it when reviewers are late, and it would be immensely satisfying to take revenge on them by snarling up their own submitted manuscripts, but I'm not sure that this is a workable system. It is true that journals can track the timeliness and helpfulness of reviewers - we do this at BioMed Central - so technical feasibility is not my objection.

Publishing is not a game; the aim is to get research checked and, if sound, published as quickly as possible. Deliberately adding delays and checks both adds costs and impedes science. A publisher that started imposing such sanctions might lose those reviewers as both reviewers and authors, and would introduce even more antagonism into what can already be a fraught process. The system would also hit senior and well-known researchers hardest, as they are asked to review most often and simply can't agree to review every article sent to them. Reasons for declining would need to be distinguished - does someone who suggests a qualified and keen colleague deserve sanction? What if the request was inappropriate, or never arrived because it landed in their spam filter? What if the reviewer had a genuine reason for being unable to return the report, like the several reviewers in New Orleans at the time of Hurricane Katrina who were quite understandably late, or those suddenly falling ill or dealing with a family emergency? What if they needed more time to complete a thorough reanalysis, as one of our editorial board members did recently?

What can we do instead? We already have a reviewer discount: reviewers who return their reports on time to a journal in the BMC series (i.e. BMC Bioinformatics, BMC Cancer etc.) are entitled to a 20% discount on the article processing charge the next time (within a year) that they are the submitting author of a manuscript in the series.

Editors can be ruthless. If an agreed reviewer is late, we may well make a decision without them if we already have reports in from other reviewers - a reviewer who doesn't want the time they spent reading the manuscript wasted and their opinions ignored should get the report in on time.

Fostering a good relationship with reviewers and authors helps. Authors who receive timely reviews will feel inclined to review quickly themselves. Authors who don't may well refuse to review until they have received a decision - the flip side of Hauser and Fehr's proposal. Equally, authors and reviewers need to remember that editors are human too. We sometimes receive a level of bad-tempered abuse from authors that, if we dished it back out, would get us fired. One way to remind reviewers that we're human is to phone them - email can give the false impression that journals are run by robots - although we do find that email reminders can be very effective. A look at the statistics on when reviews are returned shows that most arrive within a day of our sending a reminder email letting the reviewer know that their report is due within three days.

If we respect each other and agree with the aim of efficiently and effectively assessing scientific research, we're all better off. I'm not sure that penalties are the best way to achieve this.

Journalology roundup #6

Plagiarism is not fair play. "I beg to differ with the view ... that non-English-speaking researchers' plagiarism of scientific text should be dealt with leniently". There has been quite a lot of discussion on the World Association of Medical Editors list about plagiarism - I agree that it cannot be tolerated.

Valid Consent for Genomic Epidemiology in Developing Countries. "we discuss the practical challenges of defining and obtaining valid consent for genomic epidemiological research in developing countries".

Mistakes and misconduct in the research literature: retractions just the tip of the iceberg. "I recently wrote a systematic review of studies (published between 1972 and 2005) of growth in children taking stimulant medication for attention deficit hyperactivity disorder (ADHD), and I was astounded by the poor quality of much of the research".

Controversial fertility paper retracted. After determining that the article was "duplicated", Fertility and Sterility bars the corresponding author, but not the other co-authors, for three years.

Open access and the reuse of images

Pedro on Public Rambling has written about the reuse of scientific images and notes that the Creative Commons license used by both BioMed Central and PLoS allows him and other bloggers to freely post images from our journals without the need to laboriously fill out forms or the worry of facing legal action: “From a user point of view this is absolutely liberating. I can not only read these manuscripts but I can use their pictures to comment on them and I can even think of creatively combining their content with other works”. All that is needed is an acknowledgement of the source of the figures (this wasn't given when an article was published in Cytometry Part A, hence this erratum).

Pedro's comments were well placed. Shelley Batts on Retrospectacle reports that she had a tangle with lawyers "over the 'fair use' of a figure ... In short, I was threatened with legal action if I didn't take it down immediately. I used a panel of a figure, and a chart, from over 10+ figures in the paper. I cited and reported everything straightforwardly. I would think they'd be happy to get the press. But alas, no".

John Hawks has pointed out that, unfortunately, Wiley (the publisher in question) might be within their rights to argue that 'fair use' does not extend to posts on a blog hosted on a commercial platform that carries advertising. Shelley initially redrew the figures after being contacted by Wiley, but a check of the Wiley permissions FAQs confirms that "If you redraw a figure, you have created an adapted version of the original figure. You are still required to credit the original source. If the figure or figures you are redrawing exceed the limits of 'fair use,' you must request permission from the original source. Redrawing is not a way to by-pass copyright protection" (my emphasis).

Although Wiley backed down after the blogosphere exploded over this issue - there is a good summary on A Blog Around The Clock - this confused picture of permissions and rights only bolsters the argument that traditional closed access publishing damages the dissemination and discussion of science.

Peter Suber has noted before that open access solves not just the 'serials crisis' on journal pricing, but also the 'permissions crisis'. Although Wiley allows self-archiving, the instinct and practice of traditional publishers is to limit the reuse of work published by them unless they give permission and receive payment. If you believe that scientific work should be communicated and debated without barriers, publish in an open access journal.

23 Apr 2007

Journalology roundup #5

Science retracts major Arabidopsis paper. "Scientist acknowledges omitting data, but denies any impropriety". Another paper and probably a career bites the dust.

The strike rate index: a new index for journal quality based on journal size and the h-index of citations. "The SRI explains more than four times the variation in citation counts compared to the impact factor". I've yet to read this properly, but it looks interesting. At some point we need to settle on some stable and useful alternatives to the Impact Factor. As an aside, I love Biomedical Digital Libraries! I would say that, as it's a BioMed Central journal, but it's carving out a great niche, publishing work of real interest to librarians, editors and others interested in scientometrics.
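Since the SRI builds on the h-index, here's a quick reminder of how that ingredient is computed - a minimal sketch in Python, with made-up citation counts for illustration:

    def h_index(citations):
        # Largest h such that h articles have at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Made-up citation counts for a small journal's articles.
    print(h_index([24, 18, 12, 9, 7, 7, 3, 1, 0]))  # 6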

NEJM punishes reviewer for breaking embargo. "The New England Journal of Medicine has banned Martin Leon, a cardiologist at the Cardiovascular Research Foundation, from reviewing studies and contributing editorials or reviews for five years, as a punishment for telling colleagues at an American College of Cardiology symposium that a trial comparing medication to stents for the treatment of clogged coronaries 'was rigged to fail - and it did'. The data was to be presented two days later, and published in NEJM soon after". That seems a bit harsh! There are worse crimes than breaking an embargo. If journals published work online when it was ready, rather than hoarding it for months to fill paper issues, there would be less temptation to break embargoes. Once an article is through peer review, make it public without delay!

A bumper crop from the Cochrane Collaboration!

Editorial peer review for improving the quality of reports of biomedical studies. "Little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research".

Time to publication for results of clinical trials. "Trials with positive results are published sooner than other trials".

Grey literature in meta-analyses of randomized trials of health care interventions. "Published trials tend to be larger and show an overall greater treatment effect than grey [unpublished] trials".

Full publication of results initially presented in abstracts. "Only 63% of results from abstracts describing randomized or controlled clinical trials are published in full. 'Positive' results were more frequently published than not 'positive' results. The consequence of this is that systematic reviews will tend to over-estimate treatment effects".

Technical editing of research reports in biomedical journals. "Most journals try to improve articles before publication by editing them to make them fit a 'house-style', and by other processes such as proof-reading... There is some evidence that the overall 'package' of technical editing raises the quality of articles (suggested by 'before-and-after' studies). However, there has been little rigorous research to show which processes can improve accuracy or readability the most, or if any have harmful effects". I'm a big fan of Liz Wager's work; she's one of the few out there really trying to test what editors and journals do.

20 Apr 2007

Archivangelism - has the means become the end?

Stevan Harnad has always been insistent on the need for immediate, free access to academic research, and he sees self-archiving as the means to this end.

Now that he recognises that self-archiving may only be compatible with some publishers if there is a delay in access, Stevan (who is normally uncompromising, e.g. "OA itself is non-negotiable") seems to have accepted this fudge, which is not immediate free access: "Access to the immediate deposit can then either be set as Open Access immediately, or (in case of a publisher embargo), as Closed Access, provisionally". This is the "Immediate-Deposit & Optional-Access" (IDOS) policy. Fancy name or not, as Jan Velterop has noted, it's not open access.

Stevan has been adamantly against the crystal-ball-gazing that predicts a loss in subscriptions resulting from self-archiving, but his own crystal ball predicts that following a universal adoption of 'IDOS' repositories, "Embargoes will disappear very soon thereafter" [my emphasis].

Stevan has criticised advocates of open access journals who claim that open access journals will reduce costs, as he insists that it is access that is paramount and not costs, which is a fair point. Yet he now criticises open access journals and hybrid journals for the extra costs he says they impose, for example criticising CERN's decision to put up funds to pay for article processing charges as "diverting scarce research funds from research to paying publishers". It seems that if open access journals might save costs, that's a side issue; if they might cost more, we should take heed.

While there might be issues with double-payment in hybrid journals, that can be corrected by adjusting subscription costs - it isn't an insurmountable issue. Further, open access journals don't have this problem, but Stevan's criticisms of Springer Open Choice don't often allow for this distinction. How open access can be paid for is open to debate - the debate has usually focussed on the article processing charge, but as Peter Suber has noted not all open access journals charge author fees. Costs can also be met using advertising (though hardly an uncontroversial way for a medical journal to recover costs), grants from societies (this is how the Beilstein Journal of Organic Chemistry is funded), charitable and philanthropic donations, or even by cutting costs. An interesting aspect of article processing charges is that they can result in price competition on an article-by-article basis.

Rather than being a system with no barriers to access (the definition of open access as I understand it), under self-archiving each author (having signed over copyright to the publisher) needs to deposit their articles in their local repository (if one exists); each reader then needs to realise that the repository exists, find the article, possibly contact the author for a copy, and wait for the author to respond and send it. Sadly, this seems laborious, incomplete and prone to failure. I had thought that the launch of Google Scholar opened up the possibility of self-archiving really being viable, but its practice of linking to all copies of an article that could be found on the web lasted only a few short months, and links to free versions are now intermittent - possibly (probably), Google were sat on by publishers.

Libraries will apparently continue to subscribe to journals that their users can access at no cost, despite evidence that libraries are acutely attuned to cost and an existing trend of university action against high journal prices. Publishers will apparently be happy with a business model that depends on their customers paying for a product that can be obtained for free. This business model is actually seen in shareware, though shareware is much rarer now than 10 to 20 years ago, and the music industry certainly isn't keen on it. If self-archiving doesn't cause a collapse in subscriptions to closed access journals, I'd suggest that it will implicitly have failed to achieve its goal. Surely the aim is to provide immediate free access to peer reviewed academic research - if libraries and readers are unaware that they could get what they are still paying for at no cost, might that not imply that self-archiving doesn't provide universal access? Might there be those who aren't paying, don't realise the research is accessible, and therefore never read the article? We need to end this farcical situation of researchers not reading articles, and although I like the idea of self-archiving I don't believe that it offers a complete enough solution, or is sustainable.

Stevan's insistence on self-archiving has even extended to criticising central deposit in PubMed Central, arguing that articles must only be deposited in institutional repositories. What is BioMed Central to do - stop depositing in PubMed Central? It's hardly as though we block our authors from depositing in their own local repositories.

Stevan lays claim to being the true voice of open access. In responding to an article by Ben Goldacre, he argued that "It is not "two [Gold] OA publishing organisations" that have led the fight for OA, but one (Green and Gold) organisation -- the same one that first coined the term OA in 2002: the Budapest Open Access Initiative (BOAI)". Actually, it's not true that BOAI coined 'open access'.

Stevan at times appears to be entirely opposed to the idea of open access journals (despite apparently supporting the idea a decade ago), for example raising criticisms against BioMed Central as a publisher from the outset.

I can't agree with Stevan's insistence upon local institutional self-archiving to the apparent exclusion of other approaches to open access. It appears to me that with archivangelism the means has become the end.

Arms trade and publishing - strange bedfellows

Although the pen is mightier than the sword, involvement in publishing hasn't kept Reed Elsevier out of the defence industry.

The conflict between being involved in advances that aid the treatment of patients on the one hand, and arranging the sale of lethal weapons on the other, has garnered increasing criticism.

A letter organised by the Campaign Against Arms Trade pitted the Lancet against its own publisher, and the BMJ has waded in with a call for a boycott of the Lancet. This focus on the Lancet appears to be aimed at forcing a wedge between journal and publisher - imagine the fall-out if the Lancet left Elsevier.

A petition on Idiolect mentions that "DSEi's 2005 official invitees included buyers' delegations from 7 countries on the UK Foreign Office's list of the 20 most serious human rights abusing regimes, countries like Colombia, China and Indonesia... Reed Elsevier arms fairs have featured cluster bombs, depleted uranium munitions and torture equipment". Nice.

Another petition organised by Nick Gill has some real teeth as it calls for a boycott of Elsevier journals.

Richard Smith and some others have even been engaging in shareholder action at the AGM.

I won't sign either petition as I obviously have an ulterior motive, but I'm certainly not posting this simply to stick one in Elsevier's eye for their opposition to open access. Others might want to consider signing. The defence industry might be legal (though not always), but then, as has been pointed out, so was slavery.

Political correctness gone mad!!!

'The term "blinding"—commonly used in clinical trials—is particularly inappropriate in the ophthalmological setting, not least because an outcome measure of a particular trial could indeed be blindness'. The authors suggest "masking" as an alternative. Is this political correctness gone mad?

Hardly. Political correctness just means being sensitive to people and their wishes. In another bout of political correctness, we recently changed the scope of BMC Geriatrics to mention 'older' rather than 'elderly' people, as this is the preferred term. Equally, we always try to refer to patients as people, rather than letting them be defined by their condition. 'People with schizophrenia' is better than 'schizophrenics'.

That said, I think I'll continue to use the term "blinding" in a non-ophthalmological setting, but doctors running trials might want to think about "masking" being the term of choice.

Journalology roundup #4

Korean wolf cloning study pulled. "Journal removes paper describing the first cloning of gray wolves from its website after the authors acknowledge mistakes in the manuscript". Editors will need to put cloning articles through the wringer from now on.

NEJM letter retracted over authorship. "Co-authors of a letter about antipsychotic medications may have played no role in its content; first author claims misunderstanding". Misunderstanding? Hmmm. He mentions a 'Mathew Hotopf', apparently a UK psychiatrist who he claims did discuss the letter, but who is not the same Matthew Hotopf he ended up listing as an author (an editorial board member of BMC Psychiatry). Except that there is no trace of another 'Mathew Hotopf' on the web.

Publishers reveal increase in digital advertising. "Further evidence of advertisers' growing enthusiasm for the net in the latest membership survey from the UK Association of Online Publishers. The AOP census 2007, carried out among some of the largest newspaper and magazine groups, found that digital ads now contribute an average 12% of their revenues. And all of them believe this is set to rise substantially in the coming year".

Problems with use of composite end points in trials. "The use of composite end points in cardiovascular trials is frequently complicated by large gradients in importance to patients and in magnitude of the effect of treatment across component end points. Higher event rates and larger treatment effects associated with less important components may result in misleading impressions of the impact of treatment". Translation: lumping together different outcomes, such as heart attacks and death, in a trial can confuse the picture of the effect of treatments.

Blame the drug companies… and yourself. "So here's an interesting question. Lots of us wander around quite happily with a "dolphins good, drug companies bad" morality in our heads; and this is entirely reasonable, they are quite bad. But how easy is it to show that drug companies kludge their results, and to explain what they've done to a lay audience?". I think Ben's possibly being a bit kind when he labels the deliberate use of a low dose of a competitor's drug 'trivial', but I agree that a solution to problems relating to financial interests would be an increase in drug and other treatment development and clinical trials funded by government or other non-company organisations.

The Perils and Pitfalls of Independent Research. "Ben Goldacre over at badscience is talking about research and big pharmaceutical companies. He says that while it is true we all firmly believe that pharmaceutical companies are inherently evil corporations (and dolphins are truly wondrous creatures) the research they produce is no more flawed than any other research. He goes on to say that there aren't very many independent RCTs (i.e. not funded by Big Pharma) and that he does not believe this is Big Pharma's fault. I believe I can shed some light into why that may be".

Science retracts major Arabidopsis paper. "Scientist acknowledges omitting data, but denies any impropriety". It seems that rarely a month goes by without a major error being uncovered in Science. One explanation may be that 'major breakthrough' work is more vulnerable to the temptation to massage or fabricate data, but also that articles in Science receive more scrutiny from readers than the average article, simply by virtue of where they are published. A less kind interpretation would be that Science focusses more on importance than integrity, but after being burnt by the Korean cloning scandal that is less likely.

The latest issue of Research Information is online.

'OA creates new opportunities'. "The open-access publishing model enables new types of journals that could not be published under the traditional subscription model, believes Matthew Cockerill, publisher of BioMed Central".

Study shows subscription price variations. "Journal prices have risen by very different amounts over the past seven years".

Oxford Journals opens Chinese office.

Bentham announces OA growth strategy. They say that imitation is the sincerest form of flattery; suffice it to say that at BioMed Central right now we've never been more flattered. We're positively blushing.

Encouraging innovation. "Egypt-based Hindawi Publishing has just converted its last two subscription journals to the open-access model. The company's president and co-founder Ahmed Hindawi tells us why".

Semantic enrichment boosts information retrieval. "RSC Publishing's Project Prospect is enhancing the chemical information available from the publisher's journal articles".

Fostering open-access in the research community. "Elisabetta Poltronieri of Italy's Istituto Superiore di Sanità reports back from an international seminar on open access held recently at the research institute".

14 Apr 2007

Journalology roundup #3

The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality. "[The] inability to predict performance makes it imperative that all but the smallest journals implement routine review ratings systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish)".

Fraud in our laboratories? "We must face the question of whether most research carries with it a whiff of corruption. It is clear that only a low barrier needs to be crossed to end up on the wrong side of scientific ethical standards. How often do we ponder about raw data in which everything fits with a given hypothesis except for one part of a figure? The following discussions could go in different directions. Was the figure mislabelled? Were the samples mixed up? Maybe one sample in a triplicate was distorting the results? Should the experiment be repeated until it provides unambiguous and reproducible results or should this one outlier just be excluded from a paper?"

Fraud: causes and culprits as perceived by science and the media. "The logic of science and the media's logic of news selection work together to portray publicly the problem of scientific misconduct as the fault of individuals. Neither side can be blamed for operating as they do; however, the way that science and the media deal with the issue of scientific fraud detracts from the underlying problem: the institutionally induced deviant behaviour of many scientists".

Push for open access to research. Internet law professor Michael Geist takes a look at a fundamental shift in the way research journals become available to the public.

Advertisement inappropriate. "The NCH Healthcare advertisement found in the July issue should not have run, as written, in Canadian Family Physician (CFP). The advertisement contravenes the Canadian Medical Association Code of Ethics, as it promises financial reward for referring patients".

New details in Korean plagiarism case. A Korean scientist who co-authored a paper allegedly stolen from another scientist has turned the tables on the journal editor who spoke out on the paper in question, accusing him of defamation and threatening him with legal action.

6 Apr 2007

Journalology roundup #2

The Importance of Negative Results. "There is a difference between negative findings that result from poor research design and negative findings from good research. In poor research, failure to find effects can stem from an insufficient number of observations, leading to lack of adequate statistical power".
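To make the power point concrete, here's a rough sketch in Python using the normal approximation for a two-sample comparison of means (my own illustration; the function and the numbers are not from the article):

    from scipy.stats import norm

    def approx_power(effect_size, n_per_group, alpha=0.05):
        # Approximate power of a two-sided two-sample test for a
        # standardised effect size (mean difference / SD).
        z_crit = norm.ppf(1 - alpha / 2)
        z_effect = effect_size * (n_per_group / 2) ** 0.5
        return norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)

    # With a real but modest effect (d = 0.3), 20 subjects per group will
    # 'find nothing' about 84% of the time; ~175 per group gives 80% power.
    print(f"{approx_power(0.3, 20):.2f}")   # 0.16
    print(f"{approx_power(0.3, 175):.2f}")  # 0.80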

Ethics approval requirement for CJEM research publications: a step forward for Canadian emergency medicine. A good précis of the reasons why journals require ethics approval for research.

What makes an expert? A very interesting report by Brian Deer in the BMJ on the controversy surrounding the research and publications of Mark Geier, a US autism and vaccines researcher, including a recent retraction prompted by the Neurodiversity blog.

Epidemiology and Reporting Characteristics of Systematic Reviews. "These results substantiate the view that readers should not accept systematic reviews uncritically".

Many Reviews Are Systematic but Some Are More Transparent and Completely Reported than Others. An editorial in PLoS Medicine.

Statistical Reviewers Improve Reporting in Biomedical Articles: A Randomized Trial. "This prospective randomized study shows the positive effect of adding a statistical reviewer to the field-expert peers in improving manuscript quality. We did not find a statistically significant positive effect by suggesting reviewers use reporting guidelines". This study used a Manuscript Quality Assessment Instrument developed by Goodman and colleagues. If this can objectively measure manuscript quality, shouldn't editors and peer reviewers routinely use it during peer review, rather than after it?

Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994-2005) and their Editors' views. "The vulnerability of the IF to editorial manipulation and Editors' dissatisfaction with it as the sole measure of journal quality lend weight to the need for complementary measures".
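For readers who haven't seen it spelled out, the two-year IF is just a ratio, which is part of what makes it open to manipulation: the numerator counts citations to everything, while the denominator counts only 'citable items'. A minimal sketch in Python (the numbers are invented for illustration):

    def impact_factor(citations, citable_items):
        # Two-year impact factor for year Y: citations received in Y to
        # items published in Y-1 and Y-2, divided by the number of
        # 'citable items' (articles and reviews) published in Y-1 and Y-2.
        return citations / citable_items

    # Invented numbers: reclassify 100 items as 'non-citable' front matter
    # and the IF jumps, with no change in how often the journal is cited.
    print(impact_factor(3000, 600))  # 5.0
    print(impact_factor(3000, 500))  # 6.0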

Peer review and the Term Breech Trial. One of the original peer reviewers of a trial published in the Lancet in 2000 speaks out about the peer review of the study: "We have watched the subsequent scientific debate with concern. We are worried about the sanctity in which peer review is held and used to defend this research; investigators and editors usually do their best and in this case we were supportive of their findings. Peer review is, however, subject to all the pitfalls of any judgment process. In retrospect, fast track in particular might only be appropriate with unanimous support from review. Further consideration of the points might have reduced the subsequent controversy. Medical journals are becoming more transparent because this is thought to protect scientific integrity. Is it time to make peer review more transparent?". Sounds like a vote in favour of open peer review.