30 Jan 2007

Tools to search the literature, and PubReMiner plugin

Recently I came across PubMed PubReMiner, created by Jan Koster. I've been very struck by this tool, which I think is pretty much the best way to search PubMed.

I have previously tried a number of different tools (see my list of Tools to search the literature in the sidebar), and Google Scholar by far outstrips a standard PubMed search thanks to the PageRank algorithm, which pulls the most prestigious work to the top of the results. PageRank doesn't just count citations; rather, it weights each citation by how often the referring article has itself been cited. A citation from a source that is itself heavily cited counts for more than one from a source that nobody has ever cited.
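For the curious, here is a minimal sketch of the idea behind PageRank, run over a made-up citation graph. The damping factor and fixed number of iterations are the standard textbook choices, not anything specific to Google Scholar, and real implementations handle papers with no outgoing citations more carefully.

```python
# Minimal, illustrative PageRank over an invented citation graph.
# Each key cites the papers in its list of values.
citations = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(graph, damping=0.85, iterations=50):
    papers = list(graph)
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(papers) for p in papers}
        for citing, cited in graph.items():
            share = rank[citing] / len(cited)  # spread this paper's rank over its citations
            for target in cited:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for paper, score in sorted(pagerank(citations).items(), key=lambda x: -x[1]):
    print(paper, round(score, 3))
```

Paper C ends up on top, being cited by three of the four papers, while D, which nobody cites, ends up at the bottom.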

I've tried out Kfinder, which takes an abstract or other text as the input, and suggests keywords based on the frequency of occurrence of improbable words. You select keywords, and it returns researchers who match that search in Medline at least twice. Kfinder is quite slow and limited to Medline, but it is intuitive, and a good start to selecting keywords if you haven't had much practice.
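Just to illustrate the 'improbable words' idea, here is a toy sketch: score each word in a piece of text by how much more often it appears than a background frequency would predict, and suggest the most over-represented words as keywords. The background frequencies below are invented for the example, and this is certainly not Kfinder's actual word list or scoring.

```python
# Toy keyword suggestion: flag words that are improbably frequent in the input
# compared to an (invented) background frequency table.
import re
from collections import Counter

background = {  # made-up background word frequencies
    "the": 0.06, "of": 0.03, "in": 0.02, "study": 0.003,
    "gene": 0.002, "analysis": 0.002, "pain": 0.0005,
}

def suggest_keywords(text, top=5, floor=1e-6):
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    total = float(len(words))
    scores = {}
    for word, count in counts.items():
        observed = count / total
        expected = background.get(word, floor)  # unseen words are treated as rare
        scores[word] = observed / expected      # over-representation ratio
    return sorted(scores, key=scores.get, reverse=True)[:top]

print(suggest_keywords("Acupuncture reduced post-thoracotomy pain in this pilot study of 36 patients."))
```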

PubNet from the Gerstein lab looks really promising. It visualizes the network resulting from a query to Medline. The network to the left, focussed around Howard Ochman and Emmanuel Lerat, clearly shows a network of collaborating colleagues, but I only ran that search because I knew of the network already. It could be useful, but I've not found the time to devote to exploring its possibilities, and it takes a while to generate the visualization at times. If it were quicker and easier to navigate the results, I might use it.

I've only had a quick play with Authoratory, and while the concept is excellent (automatically mining information from the results of PubMed searches), the delivery is lacking. When Deborah Saltman, our Editorial Director for Medicine, tried it she found that she herself was missing from it, and the keyword search doesn't accept Boolean queries yet. A definite work-in-progress.

e-Biosci is clever in that it accepts any text as input (an abstract, or even a whole manuscript, although it was quite sluggish!) and calculates the concepts contained within. You can add and remove concepts to refine your search, and weight how important they are, and then search using these concepts in Medline abstracts and some full text, including BioMed Central's. The advantage of this approach is that you never need to think about appropriate keywords or search terms; the disadvantage is that some concepts are quite diverse. A good example is that an abstract about physician uncertainty in medical decision-making returned some physics articles near the top! I find that it can return items that you probably wouldn't have found otherwise, and can be very accurate at times.

eTBLAST is one of the big hitters in the field. It runs searches against Medline automatically when given an input of text, much as e-Biosci does, and returns a list of related articles. You can then get a list of experts in the field, journals to submit to, the history of publishing in this field and several more features. eTBLAST does all the thinking for you, but it does take its time: results can take minutes to be returned, which makes me think that the option to have them emailed is the only way it will get routinely used.

But, as I said at the top, PubMed PubReMiner is my current favourite. Why? Well, it takes standard PubMed queries, which makes it very easy to start using. It is quick and unfussy, and returns the results in easy-to-read columns: a list of the most common journals in the results, a list of the authors who appear most often, and a list of words that most commonly appear in the abstracts, as well as MeSH terms, affiliations and the publications by year. It is simple, but highly effective.
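To give a flavour of the sort of tallying PubReMiner does, here is a rough sketch using NCBI's E-utilities directly: run a PubMed query, fetch the matching records in MEDLINE format, and count the journal (TA) and author (FAU) fields. This reproduces only the crudest part of PubReMiner's output, and it is my own sketch rather than anything to do with how the tool is actually built.

```python
# Rough sketch: tally journals and authors for a PubMed query via NCBI E-utilities.
import re
import urllib.parse
import urllib.request
from collections import Counter

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_medline(query, retmax=100):
    params = urllib.parse.urlencode({"db": "pubmed", "term": query, "retmax": retmax})
    with urllib.request.urlopen(EUTILS + "/esearch.fcgi?" + params) as handle:
        ids = re.findall(r"<Id>(\d+)</Id>", handle.read().decode())
    params = urllib.parse.urlencode({"db": "pubmed", "id": ",".join(ids),
                                     "rettype": "medline", "retmode": "text"})
    with urllib.request.urlopen(EUTILS + "/efetch.fcgi?" + params) as handle:
        return handle.read().decode()

def tally(medline_text):
    journals, authors = Counter(), Counter()
    for line in medline_text.splitlines():
        if line.startswith("TA  - "):    # journal title abbreviation
            journals[line[6:].strip()] += 1
        elif line.startswith("FAU - "):  # full author name
            authors[line[6:].strip()] += 1
    return journals.most_common(10), authors.most_common(10)

top_journals, top_authors = tally(fetch_medline("acupuncture AND thoracotomy"))
print(top_journals)
print(top_authors)
```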

I liked it so much that I made a Firefox search plugin for it. After following a tutorial in vain, I found that searchplugins.net has a plugin generator, which I've used to create one for PubReMiner, complete with a logo. It is set to the default limit of 1,000 abstracts. You can view the source code, and search for it under PubMed or PubReMiner. You can also install it now.

Billions of genes, billions of articles?

We receive, and sometimes publish, manuscripts that "clone and characterize" a new gene in a species. The gene is usually identified from a cDNA library and is confirmed as present in the genomic DNA by PCR. The expression is measured using RT-PCR, and perhaps a phylogeny is drawn that relates this gene to others within the same species and to its relatives in other species.

There are something in the order of 1.5 million known species. A reasonable guess at the average number of genes per species is 5,000 (bacteria are in the range 500-4,000, eukaryotes 5,000-40,000). That puts the number of genes waiting to be cloned and characterized in known species alone at 7.5 billion, more than one for every woman, man and child alive today.
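For anyone who wants the back-of-the-envelope sum spelled out (the species and gene counts are, of course, only the rough estimates given above):

\[
1.5 \times 10^{6}\ \text{species} \times 5 \times 10^{3}\ \text{genes per species} = 7.5 \times 10^{9}\ \text{genes}
\]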

I think this illustrates quite well Mark Gerstein's recent comment in an article in BMC Bioinformatics that "academic journals alone cannot capture the findings of modern genome-scale inquiry". Philip Bourne has similarly said in the pages of PLoS Computational Biology that "Clearly, no one perceives a database entry of, say, a sequence, or a specimen in a museum collection, as being as valuable as the journal paper that describes it. But, ironically ... the database entry may indeed be more valuable".

28 Jan 2007

Mashups, mirrors, mining and open access

The Creative Commons Attribution License under which open access articles are made available by both BioMed Central and PLoS allows others to create sites that incorporate the content of these articles, so long as the original source is clearly acknowledged.

Two ways to do this are mashups and mirrors. According to Wikipedia, a mashup is a site that "combines content from more than one source into an integrated experience". A mirror is an exact copy of a website.

BioMed Central officially has four mirrors to which we feed content, at INIST in France, University of Potsdam in Germany, PubMed Central at the NIH, and the National Library of the Netherlands' e-Depot. I've come across some unofficial mirrors in specific areas like genomics and bioinformatics in the past.

PLoS ONE already has its own unofficial mirror, created by the people behind HubMed: PLoS Too. Rather than displaying the articles as they appear on the publisher's site, this is a pared-down view of the articles and it has a couple of good new features - auto-generated tags for each article, and a very quick live search box.

On the mashup side, Free Biomedical Images has made open access images available in a searchable database, mainly (entirely?) taken from BioMed Central articles, and fully attributed. Users can comment on the images, rate them, email them to a friend and jump to the published article.

A key feature of open access is that we don't hide away the full text of our articles. The entire 'corpus' of our open access research articles is available on our data mining page for anyone to download. Gerry Rubin has said that "the most important reason for Open Access is data mining".

The idea of mashups, scripts and extensions is just beginning to reach the bioinformatics community. A bioinformatics mashup by Alfonso Valencia is iHOP (Information Hyperlinked over Proteins), which links information about genes and proteins to text from PubMed. Not satisfied with just a mashup, Mark Wilkinson has created a Greasemonkey userscript called iHOPerator that enhances the iHOP website with tag clouds. You can read about it in his BMC Bioinformatics article. Two other Greasemonkey userscripts link PubMed to social bookmarking sites, one to CiteULike, the other to Connotea. A third links Google Scholar to CiteULike. The iSpecies search engine pulls together information about any species you enter from disparate sources, including scores of biomedical databases and even Yahoo! Image search.

Mashups, mirrors and mining are definitely the future of science publishing.

27 Jan 2007

A case study in open peer review

Last year, BMC Anesthesiology published an article by Andrew Vickers and colleagues on the use of acupuncture for the pain caused by thoracotomy; it was a pilot study to examine whether a randomized controlled trial (RCT) was feasible. Thoracotomy is surgery to the chest to allow access to the lungs. Andrew Vickers is on the editorial boards for several of our journals, a respected statistician and trial methodology expert with an interest in testing complementary medicines. It was unlikely when he submitted the work that there would be any fatal flaws. However, we don't allow submissions from our editorial boards to escape peer review - and I've seen many manuscripts from editorial board members fail to pass muster.

Who would be suitable to review the manuscript? It's about acupuncture, so acupuncturists, right? Well, partly.

If we were to ask only acupuncturists to review, there are two potential drawbacks. Acupuncturists believe in the efficacy of acupuncture, otherwise they probably wouldn't be acupuncturists. If there were flaws in the study they might be inclined to give it the benefit of the doubt, whereas someone without a vested interest in the intervention under study might raise objections. The other issue is that they might be unfamiliar with pain relief in this particular setting. On the other hand, were we to only approach those who had never used acupuncture and who were otherwise experts in pain relief we would face two potential biases. For one, they might be skeptical that acupuncture works at all and thus be too picky, raising unreasonable objections to block publication. Another consideration is that they might themselves have a vested interest in the drugs used to relieve post-surgical pain, perhaps having received speaking fees or consulted for pharmaceutical companies.

To ensure rigorous and fair review, we needed someone who was familiar with acupuncture for pain relief, preferably with additional experience of randomized controlled trials, systematic reviews, or anesthesia for surgical interventions. Although it is not itself an RCT, the trial's purpose was to pilot whether an RCT would be feasible, and those who have conducted systematic reviews have a good knowledge of critical appraisal. Secondly, we needed someone familiar with post-thoracotomy pain relief other than acupuncture, preferably with a knowledge of randomized controlled trials. Lastly, we needed someone familiar with randomized controlled trials and anesthesiology for post-surgical pain, in case neither of the other two reviewers was.

BMC Anesthesiology is open access, but it is unusual among journals in another way. It has open peer review, as do all the medical journals in the BMC-series. By this we mean that the reviewers consent to their names being made known to the authors, and to their reports being made public if the manuscript is accepted for publication. Open peer review allows me to sweep back the curtain and reveal the peer review process, like a magician flouting the secrecy of the Magic Circle.

Among our reviewers were two complementary and alternative medicine experts, Betsy Singh and Hugh MacPherson. Betsy Singh has published on the use of several complementary medicines, including a systematic review of acupuncture for pain relief. Hugh MacPherson has published on ways to ensure the safety and accurate reporting of acupuncture trials, and is familiar with randomized controlled trials. We also had two reviewers who are researchers of pain relief, Deniz Karakaya and Jorge Dagnino. Deniz Karakaya is a thoracic surgeon who has published on the use of conventional anesthetics in various procedures, including for post-thoracotomy pain. Jorge Dagnino has also published on the use of anesthetics for post-surgical pain, including thoracotomy. We had the full house.

What criticisms or points did they raise? Dr Singh had no complaints, and it isn't surprising to see a reviewer of a manuscript by Dr Vickers say this. Dr MacPherson was well-disposed to the study, but raised several points where the authors could better report details of their work, or better justify a statement. Dr Karakaya had only two relatively minor criticisms, asking for more detail on their procedures. Dr Dagnino raised the most objections, requiring many more methodological details, and questioning some of the conclusions. It is interesting to see that the two reviewers who asked the authors to make the most corrections were an acupuncturist and a traditional anesthesiologist. Both types of researchers applied their skills of critical appraisal to help the authors improve their work. Upon re-review, Drs Karakaya and Dagnino had some remaining questions, but the editorial staff determined that the authors' response to this second round of review was satisfactory and we proceeded with publication.


Although the study is limited in its scope and conclusions, as is inevitable for a pilot, uncontrolled study in only 36 patients, and although many journals would find it easy to dismiss as not interesting enough to publish, we thought it necessary and valuable to find enough qualified reviewers to assess it, and we obtained the advice of four reviewers who together were qualified to judge all the main aspects of the work. Judging soundness (technical validity) isn't easy, and is more difficult than measuring the level of interest. It isn't that uninteresting though - more than 3,000 readers have accessed the article from our website in the past 10 months, and more will have read it on PubMed Central, a level of interest for which many blog writers would willingly give up a kidney.

26 Jan 2007

No longer talking to the ether

One week into blogging, and Peter Suber at Open Access News has picked up my reply to the 10 problems with peer review. I'm not just talking to the ether now.

25 Jan 2007

The Evil Empire Strikes Back

Picture, if you will, Darth Vader. Big, black armour. Heavy breathing. Imagine that, tired of being right hand man to the Emperor, Darth Vader has decided to venture into scientific publishing. Imagine what his publishing house would be like.

Elsevier has a reputation as "evil" (as seen here, here, here, here, here, here, here and here, but not here or here). This is mainly due to making massive profits (total revenues for 2005 were £1.4 billion) while restricting access to scientific research. They have also been noted for an involvement in the arms trade, and the censorship of published work.

Open access has been a thorn in Elsevier's side, much as the Rebel Alliance were to the Imperials in Star Wars. Open access threatens the ability of publishers like Elsevier to maintain their hold over library budgets, which is why Elsevier has repeatedly criticised OA (several society publishers have joined in too as they perceive OA to be a threat to their society's income, drafting the Washington DC Principles).

However, Elsevier recently announced a Sponsored Article option on some of their journals. Their hand was effectively forced by CERN, which achieved in effect what PLoS' boycott failed to do. Elsevier refuse to call it open access, and authors do not retain copyright. They even allowed authors to post preprints, permitting self-archiving. It might have been thought that Elsevier had laid to rest their hostility to open access.

We'd have been wrong. In a brilliant piece of investigative journalism, Nature have revealed a fruitful relationship that Elsevier, Wiley and the American Chemical Society have with a PR guru, advising them on how to take on the open access movement. Not just any guru, but the same one who represented Enron and ExxonMobil. A director at Wiley is quoted as saying that "Media messaging is not the same as intellectual debate". This explains some of the outrageous spin we've seen about open access ("government censorship"; "no peer review").

The blogosphere is beginning to react. The IWR blog understates it somewhat when they say that this will "do little for the reputations of the publishers involved". Chris Leonard predicts that "they won't be able to use these arguments even if they wanted to", while Jonathan Eisen has confidently exclaimed that "Their ship is sinking and they are grabbing at the last little pieces of wood they can find".

Hopefully, this news thoroughly discredits the smears of these publishers against open access. Was this their open exhaust port?

24 Jan 2007

Response to '10 Problems with the Peer-Review Publishing Process'

Kevin Dewalt's blog post of 19th January includes 10 criticisms of peer review. I've posted a comment on his blog with my response to each of the points, but I'll copy them here as well.

Kevin's original points are italicised, and I've made a couple of additional comments since I replied on his blog that are indicated by square brackets. I hope I've corrected some misconceptions about peer review.

---
1.
Unstated real or perceived conflicts of interest. Reviewers and authors can have relationships with entities that have an ulterior motive in getting material published.

True, but many journals, such as mine, require authors and reviewers to declare their competing interests - in our medical journals, these interests are published with the article. Editors are used to watching out for this.
---
2.
Peer-review process advances slower than scientific progress.

Yes, but peer review doesn't stop someone first posting their article on their own web-site, discussing their work at conferences, or posting their work on a pre-print server like ArXiv. Anyway, scientific progress isn't as rapid as people believe, and without the checks and balances that peer review gives, all sorts of rubbish would be published, and scientists would have to follow even more blind alleys than they do already after reading profoundly flawed research. Peer review adds some rigour into the process of communicating scientific research. Less haste, more speed is an apt concept here.
---
3.
The current process does not provide authors and reviewers with basic collaborative web tools.

That's nothing to do with peer review, just the delay in the Web 2.0 revolution reaching publishers. PLoS ONE (published by the Public Library of Science, another OA publisher) does now offer reviewers and authors interactive tools to annotate articles. Many journals, like mine and the BMJ, allow any reader to comment on a published article.
---
4.
Authors lose copyright privileges when publishing yet are often forced to publish to continue career advancement.

Traditional journals insist on copyright transfer. Many open access journals, including those published by BioMed Central and PLoS, allow the authors to retain copyright. The article is published under a Creative Commons Attribution License.
---
5.
Peer-review networks tend to form around cliques. Those “outside the club” of a particular discipline - where often the best ideas surface - cannot get published because new ideas are rejected by the current establishment. As a result great ideas are often lost.

I don't believe that this complaint is really that valid. The complaints I've read about were by top scientists who couldn't get their idea published in Nature, Cell or Science. Well, just publish it elsewhere. There are plenty of journals that aren't as picky, and if authors had a little more self-awareness they'd recognise that they sometimes aim too high. Besides, many journals don't use established lists of reviewers, but go straight to those publishing related work and ask them. So, yes, you usually have to be a published scientist to review, but then it is called *peer* review, isn't it? I doubt that "the best ideas" surface outside academic research; the lone researcher is more likely to be a kook than a genius. There are some geniuses out there, but they are the ones you read about in the news - there's a teensy bit of a selection bias going on...
---
6.
Precedence is often establish by those with the best personal contacts and not those who first introduce new theories.

I don't see the basis for this argument. Precedence does go to those who first raised a theory, so long as scientists are aware of it [this is the idea of 'priority']. Those who publish in languages other than English are at a disadvantage, admittedly, but some journals allow republication of work in English that was previously published elsewhere in another language, so that gives authors the possibility to widen their audience. Peer reviewers go out of their way to alert authors to work that first demonstrated something, and I have also insisted that authors cite certain studies. Scientists are very attuned to giving due credit for the origin of ideas or techniques.
---
7.
There is no medium for wider, instant dissemination. Doctors or researchers who prepare a presentation or speech cannot “publish it” to a wider audience.

Yes, they can. ArXiv and other pre-print servers allow the publication of non-reviewed work (see e.g. the Public Knowledge Project). Theses and dissertations can be published electronically (e.g. NDLTD, MIT on DSpace). This Portuguese university repository, for example, allows the publication of reports, presentations etc. If a university doesn't have a repository for this kind of material, then it should do! Staff and students can take the lead, rather than waiting for journals to do it for them - journals are traditionally for peer-reviewed research, so why would we necessarily expect them to post presentations? That said, Nature has recently launched Nature Protocols, so publishers are making some effort to include material that is outside their usual range.

8.
Participating in the review process has little benefit for the reviewer. Performing reviews can take an enormous amount of time and the written reviews are not themselves “published”.

Reviewing takes between 2 and 6 hours, according to a survey I read [an average of 3 hours]. I've seen reviews done in 10 minutes...
Here are a few reasons for participating in peer review:
- Allows a researcher control over what is published in their field - they are the "gatekeepers of science".
- Allows researchers to ensure that what is published accurately reflects and acknowledges their field.
- Can help a scientist get promoted and get grants, as journals often list the names of their reviewers annually.
- In the case of the medical journals in the BMC-series, published by BioMed Central, we *do* publish the reports, along with the name of the reviewer.
- Reviewers are actually paid by a small minority of journals [the BMJ pays £25], and more commonly can get other perks such as discounts on reading or publishing in the journal.
- Reviewers get the opportunity to read their competitors' work months before it will be published, and unscrupulous reviewers can deliberately block publication.
- It's interesting! They're scientists, they enjoy critiquing science!

9.
Reviews and reviewers are not “reviewed”. An author who receives a biased review or one based on poor critical thinking has no recourse to publicly respond or invite others to comment.

Not true. Editors assess the reviewer reports and qualifications. Authors who receive what they perceive to be a biased review can appeal to the editor, and request a further opinion. If they are badly treated and the journal is a member of the Committee on Publication Ethics (such as BioMed Central, BMJ, Lancet) then authors can even take a case to that body [currently only editors can submit a case, but often will in cases of a dispute]. Some journals (BMJ, Lancet) have an ombudsman.

10.
Journals can be prohibitively expensive for some in the developing world.

Yes - this is one of the reasons why open access is a good idea! The research is free to read for anyone with Internet access. Traditional pay-to-view journals are also members of a scheme called HINARI, a WHO project that allows some people in developing countries to read the research for free (but it does have limitations, as they need to be connected to an institution).

22 Jan 2007

Does peer review work?

There are now a reasonable number of studies from journals such as the BMJ and JAMA on the factors affecting peer review. For example, we know from a piece of work done by my colleagues that while author-suggested reviewers appear to return reports of equal quality to editor-suggested reviewers, they are significantly kinder to the authors in their recommendations on publication.

One of those authors, Pritt Tamber, regularly makes clear his belief that peer review doesn't work, most recently arguing in a BMJ Rapid Response that "Much of the research conducted at the BMJ [...] showed that there is little or no objective value to the process, yet journals and their editors persist with—and advocate—peer review; their only defence is that "there's nothing better," even though few have tried to find an alternative".

As Pritt notes, one alternative is the system used by Biology Direct, published by BioMed Central. The idea is that authors obtain reviews from three members of the reviewing board. If the author cannot find three members of the board who agree to review (or to solicit an external review themselves), the manuscript is considered rejected. If they can get three reports, then the manuscript will be published, no matter what the reviewers say. The twist is that the comments of the reviewers will be included at the end of the manuscript, as an integral part of it, and signed by the reviewers. The author can make revisions to the manuscript if they wish, or even withdraw it, but equally they can ignore the comments and publish despite them. This is with the knowledge that readers will be able to see the reviewers' dissent. Other alternatives include the community peer review being tried by Philica, PLoS ONE and Nature (Nature's experiment appears to have been unsuccessful, but that is no reason to write off the idea). More journals, publishers and researchers need to go out on a limb to explore new and better ways to assess and critique scientific research.

Before we go too far with condemning peer review, it is worth remembering that without an evidence base, we won't be able to work out where peer review works, where it doesn't and why, and how to improve it.

Much of the research done into the effects of peer review has been, in my opinion at least, quite superficial. Reading it has really only told me what I knew already from working as an editor.

My wish-list for studies of peer review is:

  1. Creating a metric of "on-topicness" that editors can use to assess how relevant a reviewer's expertise is to a piece of research or an aspect of that work. This could be done by simple similarity analyses, comparing a reviewer's PubMed abstracts to the abstract of the submitted manuscript, or by more complicated semantic analyses (a rough sketch of such a similarity score follows at the end of this post).
  2. Comparing manuscripts that were accepted to those rejected to examine the predictive factors. Some such studies have been done, but the analyses always strike me as simplistic. The sample size needs to be greater, and the journals chosen should not be so highly selective - is it really that interesting to see the factors that influence publication in journals like Nature, The Lancet or NEJM? What I really want to see are the factors that affect whether a study is ever published in a reputable journal.
  3. A side-by-side comparison of published articles with the original submitted version (before any peer review in any journal). This could be done by a paid panel who would be able to spend the time to do an in-depth analysis; an alternative would be to invite journal clubs at universities worldwide to analyse manuscripts in this way (a sort of SETI@home for journalology). Did peer review noticeably improve the work?
  4. Examine the fate of articles rejected by journals. Several studies of this nature have been conducted, but they mainly focus only on the journal it is eventually published in and the Impact Factor of the publishing journal. Why not examine whether any changes had been made since rejection? What about whether the rejected work is cited and read? Do a panel and journal clubs agree that the work is now sound, even if it might be uninteresting?
  5. Compare the ability of different editors to assess a manuscript and select appropriate reviewers under time pressure, pitted against some of the new semi-automated tools available, like eTBLAST. This would be like a peer review Olympiad.
It is tough to design and conduct good studies to examine peer review, but editors need to make the effort, else skeptics like Pritt will have a point. Now, just as soon as I have some spare time...
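Here, as promised in point 1 above, is a rough sketch of what an 'on-topicness' score might look like: the cosine similarity between the word counts of a submitted abstract and the pooled abstracts of a candidate reviewer. The two abstracts below are placeholders; a real tool would pull a reviewer's abstracts from PubMed and would need something far cleverer than raw word counts.

```python
# Crude 'on-topicness' score: cosine similarity of word-count vectors between
# a submitted abstract and a candidate reviewer's own abstracts (placeholders here).
import math
import re
from collections import Counter

def word_counts(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

submitted = word_counts("Acupuncture for pain relief after thoracotomy: a feasibility pilot study.")
reviewer = word_counts("Randomised trials of acupuncture and systematic reviews of pain relief.")
print(round(cosine_similarity(submitted, reviewer), 2))
```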

21 Jan 2007

Why reviewers decline, and paying for peer review

"Reviewers are more likely to accept to review a manuscript when it is relevant to their area of interest. Lack of time is the principal factor in the decision to decline". This nugget comes from a study by Sara Schroter of the BMJ (Tite L, Schroter S: Why do peer reviewers decline to review? A survey. J Epidemiol Community Health 2007, 61(1):9-12).

Well, blow me down with a feather - it had never occurred to me that reviewers might decline if they were off-topic or busy...

What is more interesting is that their respondents doubt that paying reviewers would make them more likely to review when their time was constrained, and they suggest various other non-financial perks for reviewers, such as being publicly acknowledged, or joining the editorial board.

I'm not sure that I agree that payment would fail to act as an incentive, but I do have doubts that journals should move to making payments. Those journals that do pay reviewers, as the Lancet sometimes does, find it easier to get people to agree than those that don't. This is confounded by the prestige of the Lancet, but the extra cash can't harm their chances. What the respondents appear to have forgotten is that every reviewer is asked to review by several journals on a regular basis. If one routinely offers financial compensation, and the others don't, the paying journal will be the more attractive choice. Once enough journals began to pay reviewers, those that didn't would begin to notice their declining success rate, and feel the need to switch to paying (this is classic game theory). Payment would no longer help journals to obtain reviewers more easily than their competitors, but no journal could opt out for fear of losing reviewers.

Another issue with paying reviewers is that quite often reports are returned late, and may be of low quality. Payment could be tied to the report being delivered on time, but if reviewers were used to receiving payment, the incentive to then return the report once already late and without payment would be diminished. Payment could be tied to review quality, but using the Review Quality Instrument on every report would be laborious, and from speaking to someone who has used this rating tool it appears to be less than perfect. Currently editors send invitations to some reviewers who reply that they are off-topic or not qualified to review. Would the promise of payment fog the memory of some as to whether they were a suitable reviewer?

The payment of reviewers is also connected to a promise to fast-track. The Journal of Medical Internet Research offers authors the option of paying a fast-track fee (currently $350), part of which is used to pay reviewers to return reports rapidly. My concern about such a promise of speed is that it conflicts with the job of an editor to ensure a high-quality review process. Although the standard number of reviewers is two, editors often invite more than two reviewers at a time, so more than two will sometimes agree. Normally an editor will be glad of the extra advice (authors may be less keen). If each peer reviewer needs to be paid, or a very rapid decision needs to be made, editors will be less inclined to keep all the reviewers who have agreed. If there is the need to seek further advice to resolve a certain issue, an editor might simply reject the manuscript rather than pay for, and wait for, another report.

The House of Commons Science and Technology Select Committee when reporting on open access in 2004, stated their belief that “the introduction of modest incentives for peer reviewers is an imaginative way of rewarding the contribution of peer reviewers to scientific endeavour. By carrying out reviews, researchers add value to the services provided by publishers. Whilst it would be inappropriate to pay reviewers personally, some recognition, made to their department, of the value of their contribution would be welcomed, particularly in view of the fact that many researchers are paid from public funds". I agree that, if it comes to it, a payment to the reviewer's institution would be preferable to direct payments to individual reviewers.

A new meaning to the term "ghost author"

When searching for peer reviewers, I sometimes come across someone who looks extremely well qualified to review an article. I search for their homepage to find an email address, only to discover that they are deceased. It is always a great shame when a brilliant researcher is no longer with us. The evolutionary biologist Nick Smith tragically died while I was handling his manuscript, and the respect his colleagues held for him is shown by the note in the published article by his co-author Sofia Berlin, and by the organisation of a conference in his memory.

However, my sympathy to the co-authors is sometimes turned to surprise by my discovery that they died several years previously (this is a more believable version of 'life after death' than the research conducted by Gary Schwartz).

Three of the most prolific authors post mortem that I have seen:

  1. Published more than 30 times since their death more than 5 years ago, most recently 3 months ago.
  2. Died three and a half years ago and has published more than 20 times since, up to a couple of months ago.
  3. Published 20 articles since their death five years ago, up until 9 months ago.
In publishing, there are the concepts of ghost and gift authors. Usually, a ghost author is someone who contributed to a piece of work, but is left uncredited, and a gift author is someone who is listed as an author without making the necessary contributions. Both gift and ghost authorship are quite common. The inclusion of a deceased author gives a whole new meaning to the term "ghost author". The American Geophysical Union explicitly allows inclusion of deceased authors, but this is dependent upon them meeting the authorship criteria and approving of submission.

I'm aware that there can be a considerable lag time to publication of results (and not only from delays during peer review!), but five years seems like a long time for someone to still be appearing on publications after their death. A study of articles rejected by the Annals of Internal Medicine found that the average lag time to publication elsewhere was 1 1/2 years. A Cochrane review found that publication of results of clinical trials can take five years, but that was from the time of ethics approval or patient enrollment.

Here are some possible scenarios to imagine:
  1. The authors did genuinely contribute to the intellectual development, planning and conduct of the study, and were involved in drafting, writing or revising a version of the manuscript. For obvious reasons they won't be able to meet the third requirement of the ICMJE guidelines, namely that they give final approval to the published study, but editors will often turn a blind eye to this requirement.
  2. The author did plan the study, and may have been involved in some of the experiments, but they did not analyse or write up the work. In this case, the motivation for inclusion may have been partly out of respect to the author, and partly in order to receive the benefit of their name on the paper.
  3. The deceased never knew anything of the work. This would be plain dishonesty, simply to profit from the reputation of the deceased co-author.
How we distinguish between these scenarios is hard to say, but as time passes the unpleasant options become more likely. I'm not alone in raising this issue, as The Online Ethics Center at Case Western Reserve University poses just this same question.

This is not just an issue to occupy the idle minds of editors and to upset the loved ones of those who have passed on. Jonathan Gornall writing in the BMJ about a study of sudden infant death noted that "John Emery [...] was listed as the paper's seventh author, although he died in May 2000, more than two years before the first draft was completed and three years before the paper was submitted to the Lancet. The six other authors acknowledged in the paper that Professor Emery was "largely responsible for the setting up of this study and for investigation of the earlier cases" but played no part in drafting it. However, they did not make clear that after Professor Emery's death they recategorised deaths that he had classed as unnatural or of indeterminate cause as natural deaths. Furthermore, evidence of Professor Emery's views shortly before his death in May 2000 suggests that his name has been used to support a conclusion with which he would not have agreed".

Whatever the truth of this tale, it acts to highlight that the only person who has a right to attribute opinions to someone is that individual. I believe that co-authors should be wary of including a deceased colleague as an author if they were not involved in drafting or approving the manuscript before submission, especially if more than two years have passed since they died. A full acknowledgement will give them the respect they deserve.

18 Jan 2007

Peer review lite at PLoS ONE?

PLoS ONE, the 'Open Access 2.0' journal trumpeted by the Public Library of Science, launched late last year. Editors and reviewers often make arbitrary decisions about importance, in a chase for the Impact Factor. The idea of removing the need for journals to select the most 'important' science, and instead concentrating on publishing solid, sound science is a good one. This is a philosophy already followed by the BMC-series journals, which I'm involved with, although we do frequently reject articles on the basis that they present no advance in the field. To assess the soundness of a manuscript you usually need to find at least two experts to judge the topic, methods and statistics, and when the journal was announced I was genuinely puzzled as to how PLoS ONE would run their peer review any differently to other journals. There has been debate in the blogosphere by some who were under the impression that PLoS ONE was doing away with peer review entirely. An example I have come across gives me cause to worry that rather than focussing on conducting solid peer review, the system PLoS ONE uses will indeed sometimes scrimp on peer review.

One of the really interesting features of PLoS ONE is the annotation and discussion system. It isn't the first journal to allow readers to post comments, but it is the first to allow them to attach comments to a certain part of the published article, like a post-it note. There is a list of the Most Annotated articles, and on the day I looked one of these was A Large Specific Deterrent Effect of Arrest for Patronizing a Prostitute.

Alongside a comment discussing the use of the term "prostitute" is the Academic Editor's viewpoint. PLoS ONE may be experimental, but they don't have open peer review as standard (i.e. named, rather than anonymous reviewing, with the reports published), so this is not standard practice (they do name the Academic Editor for each article). The editor commented that "Although this manuscript was quite far from my own field of expertise, I accepted to act as academic referee for this manuscript because I felt that it was important that this type of manuscript should be published in a open access mode, and that the possibility for further discussions offered by this new journal would be very positive. Although I am reasonably confident that the scientific content and the statistics performed have been conducted appropriately, this does not mean to say that I condone all that this manuscript contains".

This brought me up short. I handle the peer review of articles on which I am not an expert, but I never make a decision to publish based only on my assessment of the manuscript. This is what peer review is for. The Academic Editor was Etienne Joly, an immunogeneticist who is also a strong supporter of open access - he is also an editorial board member for Biology Direct, another experiment in peer review published by BioMed Central.

I respect Dr Joly, but there is something worrying about an article being accepted after only being assessed by someone who is not a peer of the authors. I'm not sure that an immunologist can assess the conduct and reporting of public health/social science research. This isn't peer review, it is editorial selection. Indeed, PLoS ONE states that:
"AEs [Academic Editors] can employ a variety of methods to reach a decision in which they are confident:

Based on their own knowledge and experience;
Through discussion with other members of the editorial board;
Through the solicitation of formal reports from independent external referees".

Chris Surridge, the Managing Editor of PLoS ONE, has said that "When papers are submitted they get assigned to one of these editors based on the content of the paper and the editor’s specific areas of expertise". Now whilst the board of Academic Editors is impressive, it is still only 200 people. As PLoS ONE has ambitiously opened itself to submissions from across all of science, not just biology and medicine, it is impossible that every submission will find an Academic Editor who is an expert in its field. If an Academic Editor is pressed for time (as most academics are) might they not take the easy route and attempt to assess a manuscript themselves that they are not qualified to judge, rather than embarking on the process of selecting external peer reviewers?

Etienne Joly went on to say in his viewpoint that "I have little doubt that this subject will lead to active debates. But this is exactly what PlosOne is about: Open Acess, and open discussions". PLoS ONE appears to be genuinely aiming to replace the pre-publication review process with "community-based open peer review", while at the same time not quite admitting this publicly, arguing that "the pre-publication assessment of papers is definitely ‘peer-review’". What concerns me most about the discussion of peer review surrounding the launch of PLoS ONE is the perception that 'only' assessing the technical quality of a manuscript is somehow easy. It's not. If you don't have the fall-back option of claiming that something is "out of scope", "not of interest to our readers", or "more suited to a more specialized journal", then the job of assessing manuscripts actually gets harder, not easier.

Editorial selection is the process already used by the Elsevier journal Medical Hypotheses, which states boldly that "The editor sees his role as a 'chooser', not a 'changer': choosing to publish what are judged to be the best papers from those submitted". It has been said of the journal that it "exists to let people publish their craziest ideas". I would not imagine that this is a reputation that PLoS ONE hopes to emulate.
---

The most annotated article on PLoS ONE (aside from the testing 'Sandbox') now has 10 comments. Wonderful! However, they are all from an author of the article, adding links out to external resources such as PubChem. Likewise, 5 of the 6 comments on the next most annotated article are links or notes added by the author. An annotation on another article is a note about the correct orientation of a figure. Is that not the sort of thing that is integral to the manuscript and should be corrected in the production process?