
4 Oct 2010

Joining PLoS ONE

I'm excited to say I've just started as an Associate Editor with PLoS ONE at the Public Library of Science, after freelancing with them since the beginning of the year.

It's interesting timing in the wake of a surge in submissions post-Impact Factor and the recent brickbats hurled at the journal by PZ Myers and David Gorski, but I'm looking forward to helping the journal go from strength to strength.

28 Jun 2007

Open peer review & community peer review

There has been a lot of discussion about 'open peer review' lately - this letter to Nature is just the latest example. With all these opinions and hypotheses about peer review flying around, I think that it is useful to make some distinctions between the different types of 'open' review, so here goes.

Traditional peer review. Anonymous reports received pre-publication. Letters to the editor are considered by many journals, but relatively few are published, especially in print journals. All the BioMed Central journals accept signed comments from readers.

Open peer review. Named, pre-publication review, which is how the BMC-series medical journals work, and the BMJ too. The difference is that the reviews are available for readers to see in the BMC-series medical journals, whereas the BMJ never made this move. Comments can also be posted by readers: the BMJ's Rapid Responses should be envied by any journal. Open peer review is controversial, as some reviewers don't wish to be named and it can make finding peer reviewers harder, but to anyone who doubts that open peer review works I can point out that the BMJ has published hundreds of peer-reviewed articles since it introduced open peer review, and the medical journals in the BMC series have published thousands since they launched in 2000. Open peer review can work.

Open and permissive peer review. This is Biology Direct's approach. Articles are published if they receive reviews solicited by the author from at least 3 members of the reviewing board (aside from pseudoscience, which the editors will veto), with the comments included at the end of the article, unless the author withdraws the manuscript. More here, and I discussed their approach in a previous post. Comments can be posted by readers, as with the other BioMed Central journals.

Community peer review. The idea of community peer review is to avoid peer review being the domain of a biased subset of the scientific community, and it has a powerful philosophy that "given enough eyeballs, all bugs are shallow". It can be either anonymous or named, and still happens before formal publication, but the difference is that reviewers volunteer rather than being selected by the editors. The manuscript is public while under review, but explicitly is not 'published' at that point. This was how Nature's experiment worked (or didn't work), but it was alongside the usual anonymous editorially selected reviews, and the comments don't seem to have been treated as 'proper' reviews by the editors.

Atmospheric Chemistry and Physics uses a similar approach, apparently with much more success than Nature. The editors refuse articles that don't meet minimal scientific standards, then post the remaining articles for 8 weeks of Interactive Public Discussion (named or anonymous), then publish the final version. There doesn't appear to be any mention of rejecting articles after the initial public posting, so this permissive peer review resembles a community version of Biology Direct.

The Journal of Interactive Media in Education uses named reports, and invites review from the community. The two-step process involves private, named review by invited reviewers, followed by publication of a preliminary version that is reviewed further by the community before final, formal publication.

Permissive peer review, post-publication commentary.
This is PLoS ONE's approach. They have minimal peer review, with the expectation that the scientific community will then comment on and annotate the articles. I was already a bit skeptical of the merits of minimal peer review, as are others, and now a Nature news story, among others, has attacked the publication of a study on HIV and circumcision in PLoS ONE, arguing that peer review failed in this case. Sending out an unbalanced press release written by the author seems to have compounded the problem, and wasn't very responsible. A lengthy response has been posted to the article, showing that post-publication review can work, but plenty of journals have the option to post comments, and the horse has already bolted.

No peer review, post-publication commentary.
This is how Philica works, and now Nature Precedings, part pre-print server, part repository for preliminary work. I don't think that Philica is working; Nature Precedings will probably fare better. An essential difference is that while Philica is clogged with pseudoscience, Nature Precedings explicitly won't post pseudoscience, and it has the Nature brand name to help it gather interest and comments. I found an optimistically titled Web 2.0 Peer Reviewed Science Journal, which has a website but no articles. "This page that you are reading now is a review site, and I (Philip Dorrell) am the intended reviewer. If you, as an author of a scientific paper, are interested in having me review your paper, all you have to do is publish your paper as a web page, and then send an email". Hmm... sorry Philip, but peer review involves more than just your opinion on articles. Web 2.0 requires users and content.

BioMed Central is open access, PLoS is open access, the BMJ is open access, Nature Precedings is open access, and they are all experimenting with peer review. Matthew Falagas has commented in Open Medicine (the open access journal that arose out of the editorial dispute at the CMAJ), after spotting this pattern of a link between experimenting with peer review and open access. I think it is worth stating that despite this trend, open access and open peer review don't necessarily go together. The biology journals in the BMC-series still have anonymous review, as do the PLoS journals. The problem of access to an article is at a tangent to the problem of reviewing it - but, of course, community peer review can't work if not enough people have access to the article.

I think that if there is doubt in the integrity of peer review (and there is more and more doubt), this increases the imperative for exposing pre-publication review processes. Journals can't just be paternalistic or secretive about peer review, and readers shouldn't take it on trust that an article labelled as 'peer reviewed' has been rigorously critiqued by experts in the field. PLoS ONE is encouraging its reviewers to make their reviews public on the published article, which is a great step. Requiring reviewers to opt-out would be even stronger, but PLoS Medicine recently backed away from this policy.

If journals really want community peer review to work, they cannot just sit back and wait for comments to come in. Pre-publication peer review takes a massive effort on the part of editors to find qualified reviewers, and the chances of enough qualified reviewers stumbling across an article and feeling obliged to leave comments to make post-publication review viable and vibrant are low. Ways to solicit comments are essential, using email alerts for example. In a definite step in the right direction, PLoS ONE is organising virtual 'journal clubs'. Remember that anyone who has had a face-to-face journal club at their institute about a BioMed Central article, or a BMJ article, or a PLoS article, can and should post the results of the discussion as a comment on the article.

I think that open peer review and community peer review are the future of assessing scientific articles. It doesn't stop there - I've not even mentioned wikis!

8 Feb 2007

CNS disease, or [ney-cher-sahy-uhns-uhnd-sel]

I've read some comments to the effect that PLoS ONE is a new competitor to Nature. You know who you are. The confusion appears to be wrought by the fact that PLoS ONE is a general science journal, but in reality it is poles apart from Nature, Science or Cell. If Nature is aiming to be at the tip of the publications pyramid, PLoS ONE is the broad base, much as the BMC series is also part of the broad base. And that's a Good Thing.

Harold Varmus has complained about ‘CNS disease’, the tendency to regard publication in these journals too highly. These three journals are mentioned in one breath so often that perhaps a new word could be coined: naturescienceandcell [ney-cher-sahy-uhns-uhnd-sel] -noun: 1. General science journals that cause researchers to temporarily lose their sanity.

Jan Velterop estimates that 1 million scholarly articles are published each year, and I read somewhere this week that there were around 680,000 abstracts added to PubMed in 2005, so that estimate looks reasonable. A quick look at PubMed tells me that the hallowed trio of Nature, Science and Cell between them published in the ballpark of 3,000 research articles that year. As only around 0.5% of publications appear in these journals, the attention paid to them is a little bit unwarranted. I'd wager that at least some of the other 99.5% of articles have some merit.
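As a rough sanity check, the share works out like this (a back-of-the-envelope sketch using the post's own approximate figures, not authoritative counts):

```python
# Back-of-the-envelope check of the figures quoted above.
pubmed_2005 = 680_000  # approximate abstracts added to PubMed in 2005
cns_articles = 3_000   # ballpark research articles in Nature, Science and Cell

share = cns_articles / pubmed_2005
print(f"{share:.1%}")  # about 0.4%, i.e. roughly the 0.5% quoted
```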

In a similar vein, Doug Altman has pointed out that although randomized controlled trials in general medical journals get such attention paid to them, 93% of trials are not published in general medical journals and 90% of medical publications are not trials. The focus on these "big headline" RCTs that make up 0.7% of the medical literature appears to be due to reprints being bought by pharmaceutical companies -- the Vioxx article by Merck brought in $700,000 to the NEJM -- and due to what Ben Goldacre calls Humanities Graduates In The Media hyping medical stories in the press.
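The 0.7% figure follows directly from Altman's two percentages; a quick sketch of the multiplication:

```python
# Deriving the 0.7% "big headline" RCT share from Altman's two figures.
trials_share = 1 - 0.90           # 10% of medical publications are trials
general_journal_share = 1 - 0.93  # 7% of trials appear in general medical journals

headline_rcts = trials_share * general_journal_share
print(f"{headline_rcts:.1%} of the medical literature")  # 0.7%
```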

It is worth remembering that although these journals have high impact factors, the impact factor doesn't determine the number of citations an article published in a journal will receive: the causation is the other way around. To their credit, Nature have been honest about the fact that their 2004 impact factor mainly (89% of it) derived from 25% of their articles, including the mouse genome paper that has been cited over 1,000 times. Not all articles published in Nature receive that kind of response, yet people still refer to Nature publications in awed tones. Some people might "read" Nature each week, but, seriously, does anyone actually read the research articles if they're not in the field?

Science depends more on a slow and steady accumulation of knowledge than upon "breakthrough" papers. Geoff Watts has argued in the BMJ that we should "pension off the major breakthrough". I'd echo this, and I'd agree with Harold Varmus: we need a cure for CNS disease.

28 Jan 2007

Mashups, mirrors, mining and open access

The Creative Commons Attribution License under which open access articles are made available by both BioMed Central and PLoS allows others to create sites that incorporate the content of these articles, so long as the original source is clearly acknowledged.

Two ways to do this are mashups and mirrors. According to Wikipedia, a mashup is a site that "combines content from more than one source into an integrated experience". A mirror is an exact copy of a website.

BioMed Central officially has four mirrors to which we feed content, at INIST in France, University of Potsdam in Germany, PubMed Central at the NIH, and the National Library of the Netherlands' e-Depot. I've come across some unofficial mirrors in specific areas like genomics and bioinformatics in the past.

PLoS ONE already has its own unofficial mirror, created by the people behind HubMed: PLoS Too. Rather than displaying the articles as they appear on the publisher's site, this is a pared-down view of the articles and it has a couple of good new features - auto-generated tags for each article, and a very quick live search box.

On the mashup side, Free Biomedical Images has made open access images available in a searchable database, mainly (entirely?) taken from BioMed Central articles, and fully attributed. Users can comment on the images, rate them, email them to a friend and jump to the published article.

A key feature of open access is that we don't hide away the full text of our articles. The entire 'corpus' of our open access research articles is available on our data mining page for anyone to download. Gerry Rubin has said that "the most important reason for Open Access is data mining".

The idea of mashups, scripts and extensions is just beginning to reach the bioinformatics community. A bioinformatics mashup by Alfonso Valencia is iHOP (Information Hyperlinked over Proteins), which links information about genes and proteins to text from PubMed. Not satisfied with just a mashup, Mark Wilkinson has created a Greasemonkey userscript called iHOPerator that enhances the iHOP website with tag clouds. You can read about it in his BMC Bioinformatics article. Two other Greasemonkey userscripts link PubMed to social bookmarking sites, one to CiteULike, the other to Connotea. A third links Google Scholar to CiteULike. The iSpecies search engine pulls together information about any species you enter from disparate sources, including scores of biomedical databases and even Yahoo! Image search.

Mashups, mirrors and mining are definitely the future of science publishing.

24 Jan 2007

Response to '10 Problems with the Peer-Review Publishing Process'

Kevin Dewalt's blog on the 19th January includes 10 criticisms of peer review. I've posted a comment on his blog with my response to each of the points, but I'll copy them here as well.

Kevin's original points are italicised, and I've made a couple of additional comments since I replied on his blog that are indicated by square brackets. I hope I've corrected some misconceptions about peer review.

---
1.
Unstated real or perceived conflicts of interest. Reviewers and authors can have relationships with entities that have an ulterior motive in getting material published.

True, but many journals, such as mine, require authors and reviewers to declare their competing interests - in our medical journals, these interests are published with the article. Editors are used to watching out for this.
---
2.
Peer-review process advances slower than scientific progress.

Yes, but peer review doesn't stop someone first posting their article on their own web-site, discussing their work at conferences, or posting their work on a pre-print server like ArXiv. Anyway, scientific progress isn't as rapid as people believe, and without the checks and balances that peer review gives, all sorts of rubbish would be published, and scientists would have to follow even more blind alleys than they do already after reading profoundly flawed research. Peer review adds some rigour into the process of communicating scientific research. Less haste, more speed is an apt concept here.
---
3.
The current process does not provide authors and reviewers with basic collaborative web tools.

That's nothing to do with peer review, just the delays in the Web 2.0 revolution getting to publishers. PLoS ONE (published by Public Library of Science, another OA publisher) does now offer reviewers and authors interactive tools to annotate articles. Many journals, like mine and the BMJ, allow any reader to comment on a published article.
---
4.
Authors lose copyright privileges when publishing yet are often forced to publish to continue career advancement.

Traditional journals insist on copyright transfer. Many open access journals, including those published by BioMed Central and PLoS, allow the authors to retain copyright. The article is published under a Creative Commons Attribution License.
---
5.
Peer-review networks tend to form around cliques. Those “outside the club” of a particular discipline - where often the best ideas surface - cannot get published because new ideas are rejected by the current establishment. As a result great ideas are often lost.

I don't believe that this complaint is really that valid. The complaints I've read about were by top scientists who couldn't get their idea published in Nature, Cell or Science. Well, just publish it elsewhere. There are plenty of journals that aren't as picky as those journals, and if authors had a little more self-awareness they'd recognise that they aim too high sometimes. Besides, many journals don't use established lists of reviewers, but go straight to those publishing related work and ask them. So, yes, you usually have to be a published scientist to review, but then it is called *peer* review, isn't it? I doubt that "the best ideas" surface outside academic research; the lone researcher is more likely to be a kook than a genius. There are some geniuses out there, but they are the ones you read about in the news - there's a teensy bit of a selection bias going on...
---
6.
Precedence is often established by those with the best personal contacts and not those who first introduce new theories.

I don't see the basis for this argument. Precedence does go to those who first raised a theory, so long as scientists are aware of it [this is the idea of 'priority']. Those who publish in languages other than English are at a disadvantage, admittedly, but some journals allow republication of work in English that was previously published elsewhere in another language, so that gives authors the possibility to widen their audience. Peer reviewers go out of their way to alert authors to work that first demonstrated something, and I have also insisted that authors cite certain studies. Scientists are very attuned to giving due credit for the origin of ideas or techniques.
---
7.
There is no medium for wider, instant dissemination. Doctors or researchers who prepare a presentation or speech cannot “publish it” to a wider audience.

Yes, they can. ArXiv and other pre-print servers allow the publication of non-reviewed work (see e.g. the Public Knowledge Project). Theses and dissertations can be published electronically (e.g. NDLTD, MIT on DSpace). This Portuguese university repository, for example, allows the publication of reports, presentations, etc. If a university doesn't have a repository for this kind of material, then it should! Staff and students can take the lead, rather than waiting for journals to do it for them - journals are traditionally for peer-reviewed research, so why would we necessarily expect them to post presentations? That said, Nature has recently launched Nature Protocols, so publishers are making some effort to include material that is outside their usual range.

8.
Participating in the review process has little benefit for the reviewer. Performing reviews can take an enormous amount of time and the written reviews are not themselves “published”.

Reviewing takes between 2 and 6 hours, according to a survey I read [an average of 3 hours]. I've seen reviews done in 10 minutes...
Here are a few reasons for participating in peer review:
- Allows a researcher control over what is published in their field - they are the "gatekeepers of science".
- Allows researchers to ensure that what is published accurately reflects and acknowledges their field.
- Can help a scientist get promoted and get grants, as journals often list the names of their reviewers annually.
- In the case of the medical journals in the BMC-series, published by BioMed Central, we *do* publish the reports, along with the name of the reviewer.
- Reviewers are actually paid by a small minority of journals [the BMJ pays £25], and more commonly can get other perks such as discounts on reading or publishing in the journal.
- Reviewers get the opportunity to read their competitors' work months before it will be published, and unscrupulous reviewers can deliberately block publication.
- It's interesting! They're scientists, they enjoy critiquing science!

9.
Reviews and reviewers are not “reviewed”. An author who receives a biased review or one based on poor critical thinking has no recourse to publicly respond or invite others to comment.

Not true. Editors assess the reviewer reports and qualifications. Authors who receive what they perceive to be a biased review can appeal to the editor, and request a further opinion. If they are badly treated and the journal is a member of the Committee on Publication Ethics (such as BioMed Central, BMJ, Lancet) then authors can even take a case to that body [currently only editors can submit a case, but often will in cases of a dispute]. Some journals (BMJ, Lancet) have an ombudsman.

10.
Journals can be prohibitively expensive for some in the developing world.

Yes - this is one of the reasons why open access is a good idea! The research is free to read for anyone with Internet access. Traditional pay-to-view journals are also members of a scheme called HINARI, a WHO project that allows some people in developing countries to read the research for free (but it does have limitations, as they need to be connected to an institution).

22 Jan 2007

Does peer review work?

There are now a reasonable number of studies from journals such as the BMJ and JAMA on the factors affecting peer review. For example, we know due to a piece of work done by my colleagues that while author-suggested reviewers appear to return reports of equal quality to editor-suggested reviewers, they are significantly kinder to the authors in their recommendations on publication.

One of those authors, Pritt Tamber, regularly makes clear his belief that peer review doesn't work, most recently arguing in a BMJ Rapid Response that "Much of the research conducted at the BMJ [...] showed that there is little or no objective value to the process, yet journals and their editors persist with—and advocate—peer review; their only defence is that "there's nothing better," even though few have tried to find an alternative".

As Pritt notes, one alternative is the system used by Biology Direct, published by BioMed Central. The idea is that authors obtain reviews from three members of the reviewing board. If the author cannot find three members of the board to agree (or to themselves solicit an external review), the manuscript is considered rejected. If they can get three reports, then the manuscript will be published, no matter what the reviewers say. The twist is that the comments of the reviewers are included at the end of the manuscript, as an integral part of it, and signed by the reviewers. The author can revise the manuscript if they wish, or even withdraw it, but equally they can ignore the comments and publish despite them - with the knowledge that readers will be able to see the reviewers' dissent. Other alternatives include the community peer review being tried by Philica, PLoS ONE and Nature (Nature's experiment appears to have been unsuccessful, but that is no reason to write off the idea). More journals, publishers and researchers need to go out on a limb to explore new and better ways to assess and critique scientific research.

Before we go too far with condemning peer review, it is worth remembering that without an evidence base, we won't be able to work out where peer review works, where it doesn't and why, and how to improve it.

Much of the research done into the effects of peer review has been, in my opinion at least, quite superficial. Reading it has really only told me what I knew already from working as an editor.

My wish-list for studies of peer review is:

  1. Creating a metric of "on-topicness" that editors can use to assess how relevant a reviewer's expertise is to a piece of research or an aspect of that work. This could be by simple similarity analyses, comparing their PubMed abstracts to the abstract of the submitted manuscript, or by more complicated semantic analyses.
  2. Comparing manuscripts that were accepted to those rejected to examine the predictive factors. Some of these have been done, but the analyses always strike me as simplistic. The sample size needs to be greater, and the journals chosen need not be so highly selective - is it really that interesting to see the factors that influence publication in journals like Nature, The Lancet or NEJM? What I really want to see are the factors that affect whether a study is ever published in a reputable journal.
  3. A side-by-side comparison of published articles with the original submitted version (before any peer review in any journal). This could be done by a paid panel who would be able to spend the time to do an in-depth analysis; an alternative would be to invite journal clubs at universities worldwide to analyse manuscripts in this way (a sort-of SETI@home for journalology). Did peer review noticeably improve the work?
  4. Examine the fate of articles rejected by journals. Several studies of this nature have been conducted, but they focus mainly on the journal the article is eventually published in and that journal's Impact Factor. Why not examine whether any changes had been made since rejection? What about whether the rejected work is cited and read? Do a panel and journal clubs agree that the work is now sound, even if it might be uninteresting?
  5. Compare the ability of different editors to assess a manuscript and select appropriate reviewers under time pressure, pitted against some of the new semi-automated tools available, like etBLAST. This would be like a peer review Olympiad.
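The "on-topicness" metric in item 1 could be prototyped very simply. Here is a minimal sketch using cosine similarity over word counts; the abstracts are invented placeholders, and a real system would compare a reviewer's actual PubMed abstracts to the submitted abstract, ideally with weighting for rare terms rather than raw word overlap:

```python
# Sketch of an "on-topicness" score: cosine similarity between a submitted
# abstract and a reviewer's publication record, over simple word counts.
# The abstracts below are invented placeholders for illustration.
import math
import re
from collections import Counter

def tokens(text):
    """Lower-case word tokens, dropping very short words."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 2]

def cosine_similarity(a, b):
    """Cosine similarity between two bags of words (0 = disjoint, 1 = identical)."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

submission = "Randomized trial of circumcision for HIV prevention"
reviewers = {
    "reviewer A": "HIV transmission and prevention trials in sub-Saharan Africa",
    "reviewer B": "Protein folding simulations on distributed computing grids",
}
for name, record in reviewers.items():
    # reviewer A (shared vocabulary) should score above reviewer B (none)
    print(name, round(cosine_similarity(submission, record), 2))
```

A real implementation would use TF-IDF weighting and whole abstract corpora, as suggested in the wish-list, but even this crude overlap score ranks the on-topic reviewer above the off-topic one.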
It is tough to design and conduct good studies to examine peer review, but editors need to make the effort, or else skeptics like Pritt will have a point. Now, just as soon as I have some spare time...

18 Jan 2007

Peer review lite at PLoS ONE?

PLoS ONE, the 'Open Access 2.0' journal trumpeted by the Public Library of Science, launched late last year. Editors and reviewers often make arbitrary decisions about importance, in a chase for the Impact Factor. The idea of removing the need for journals to select the most 'important' science, and instead concentrating on publishing solid, sound science, is a good one. This is a philosophy already followed by the BMC-series journals, which I'm involved with, although we do frequently reject articles on the basis that they present no advance in the field. To assess the soundness of a manuscript you usually need to find at least two experts to judge the topic, methods and statistics, so when the journal was announced I was genuinely puzzled as to how PLoS ONE would run their peer review any differently to other journals. There has been debate in the blogosphere by some who were under the impression that PLoS ONE was doing away with peer review entirely. An example I have come across gives me cause to worry that, rather than focussing on conducting solid peer review, the system PLoS ONE uses will indeed sometimes scrimp on peer review.

One of the really interesting features of PLoS ONE is the annotation and discussion system. It isn't the first journal to allow readers to post comments, but it is the first to allow them to attach comments to a certain part of the published article, like a post-it note. There is a list of the Most Annotated articles, and on the day I looked one of these was A Large Specific Deterrent Effect of Arrest for Patronizing a Prostitute.

Alongside a comment discussing the use of the term "prostitute" is the Academic Editor's viewpoint. PLoS ONE may be experimental, but they don't have open peer review as standard (i.e. named, rather than anonymous reviewing, with the reports published), so this is not standard practice (they do name the Academic Editor for each article). The editor commented that "Although this manuscript was quite far from my own field of expertise, I accepted to act as academic referee for this manuscript because I felt that it was important that this type of manuscript should be published in a open access mode, and that the possibility for further discussions offered by this new journal would be very positive. Although I am reasonably confident that the scientific content and the statistics performed have been conducted appropriately, this does not mean to say that I condone all that this manuscript contains".

This brought me up short. I handle the peer review of articles on which I am not an expert, but I never make a decision to publish based only on my assessment of the manuscript. This is what peer review is for. The Academic Editor was Etienne Joly, an immunogeneticist who is also a strong supporter of open access - he is also an editorial board member for Biology Direct, another experiment in peer review published by BioMed Central.

I respect Dr Joly, but there is something worrying about an article being accepted after only being assessed by someone who is not a peer of the authors. I'm not sure that an immunologist can assess the conduct and reporting of public health/social science research. This isn't peer review, it is editorial selection. Indeed, PLoS ONE states that:
"AEs [Academic Editors] can employ a variety of methods to reach a decision in which they are confident:

Based on their own knowledge and experience;
Through discussion with other members of the editorial board;
Through the solicitation of formal reports from independent external referees".

Chris Surridge, the Managing Editor of PLoS ONE, has said that "When papers are submitted they get assigned to one of these editors based on the content of the paper and the editor’s specific areas of expertise". Now whilst the board of Academic Editors is impressive, it is still only 200 people. As PLoS ONE has ambitiously opened itself to submissions from across all of science, not just biology and medicine, it is impossible for every submission to find an Academic Editor who is an expert in its topic. If an Academic Editor is pressed for time (as most academics are), might they not take the easy route and attempt to assess a manuscript they are not qualified to judge themselves, rather than embarking on the process of selecting external peer reviewers?

Etienne Joly went on to say in his viewpoint that "I have little doubt that this subject will lead to active debates. But this is exactly what PlosOne is about: Open Acess, and open discussions". PLoS ONE appears to be genuinely aiming to replace the pre-publication review process with "community-based open peer review", while at the same time not quite admitting this publicly, arguing that "the pre-publication assessment of papers is definitely ‘peer-review’". What concerns me most about the discussion of peer review surrounding the launch of PLoS ONE is the perception that 'only' assessing the technical quality of a manuscript is somehow easy. It's not. If you don't have the fall-back option of claiming that something is "out of scope", "not of interest to our readers", or "more suited to a more specialized journal", then the job of assessing manuscripts actually gets harder, not easier.

Editorial selection is the process already used by the Elsevier journal Medical Hypotheses, which states boldly that "The editor sees his role as a 'chooser', not a 'changer': choosing to publish what are judged to be the best papers from those submitted". It has been said of the journal that it "exists to let people publish their craziest ideas". I would not imagine that this is a reputation that PLoS ONE hopes to emulate.
---

The most annotated article on PLoS ONE (aside from the testing 'Sandbox') now has 10 comments. Wonderful! However, they are all from an author of the article, to external links such as PubChem. Likewise, 5 of the 6 comments on the next most annotated article are links or notes added by the author. An annotation on another article is a note about the correct orientation of a figure. Is that not the sort of thing that is integral to the manuscript and needs correcting in the production process?