29 Jun 2007

Science hype it up

"First genome transplant turns one species into another".

Wow! Really? That sounds amazing!!

"Scientists have converted an organism into an entirely different species by performing the world's first genome transplant, a breakthrough that paves the way for the creation of synthetic forms of life".

No kidding! "An entirely different species"?! What was it, turning a whale into a petunia!?!

And it's a paper in Science!?

This must be big!!!!!

Wait for it....

Here it is..........

"In the experiment, researchers extracted the whole genetic code from a simple bacterium, Mycoplasma mycoides. They squirted the DNA into a test tube containing a related species, Mycoplasma capricolum. They found that some of the bacteria absorbed the new genome and ditched their own. These microbes grew and behaved exactly like the donor".

Oh.

Can I piss on their fire now?

These mycoplasma are very closely related:
"The members of the M. mycoides cluster are very closely related, as judged from biochemical, physiological, serological, and 16S rRNA sequence data, but cause different diseases in various animals. M. capricolum subsp. capripneumoniae has a property unique among members of the M. mycoides cluster in that it has an unusually large number of polymorphisms in the two 16S rRNA genes. There are, in fact, more sequence differences between the rrnA and rrnB operons of M. capricolum subsp. capripneumoniae than between the 16S rRNA genes of homologous operons of different species within the M. mycoides cluster. This characteristic can possibly be explained by more rapid evolution due to a relatively recent change to a host to which this mycoplasma has not completely adapted".
Note: "relatively recent change". And M. capricolum is more diverse than M. mycoides, so you might expect these cells to be able to take up M. mycoides genomes. They're practically the same species. Bacteria are quite fuzzy about species anyway.

It is an interesting experiment, but this has been hyped up beyond a joke.

28 Jun 2007

Open peer review & community peer review

There has been a lot of discussion about 'open peer review' lately - this letter to Nature is just the latest example. With all these opinions and hypotheses about peer review flying around, I think that it is useful to make some distinctions between the different types of 'open' review, so here goes.

Traditional peer review. Anonymous reports received pre-publication. Letters to the editor are considered by many journals, but relatively few are published, especially in paper journals. All the BioMed Central journals accept signed comments from readers.

Open peer review. Named, pre-publication review, which is how the BMC-series medical journals work, and the BMJ too. The difference is that the BMC-series medical journals make the reviews available for readers to see, while the BMJ never made this move. Comments can also be posted by readers: the BMJ's Rapid Responses should be envied by any journal. Open peer review is controversial, as some reviewers don't wish to be named, and it can make finding peer reviewers harder, but to anyone who doubts that open peer review works I can point out that the BMJ has published hundreds of peer reviewed articles since it introduced open peer review, and the medical journals in the BMC series have published thousands since they launched in 2000. Open peer review can work.

Open and permissive peer review. This is Biology Direct's approach. Articles are published if they receive reviews, solicited by the author, from at least 3 members of the reviewing board (aside from pseudoscience, which the editors will veto), with the comments included at the end of the article, unless the author withdraws the manuscript. More here, and I discussed their approach in a previous post. Comments can be posted by readers, as with the other BioMed Central journals.

Community peer review. The idea of community peer review is to avoid peer review being the domain of a biased subset of the scientific community, and it has a powerful philosophy that "given enough eyeballs, all bugs are shallow". It can be either anonymous or named, and still happens before formal publication, but the difference is that reviewers volunteer rather than being selected by the editors. The manuscript is public while under review, but explicitly is not 'published' at that point. This was how Nature's experiment worked (or didn't work), but it was alongside the usual anonymous editorially selected reviews, and the comments don't seem to have been treated as 'proper' reviews by the editors.

Atmospheric Chemistry and Physics uses a similar approach, apparently with much more success than Nature. The editors refuse articles that don't meet minimal scientific standards, then post the remaining articles for 8 weeks of Interactive Public Discussion (named or anonymous), then publish the final version. There doesn't appear to be any mention of rejecting articles after the initial public posting, so this permissive peer review resembles a community version of Biology Direct.

The Journal of Interactive Media in Education uses named reports, and invites review from the community. The two-step process involves private, named review by invited reviewers, followed by publication of a preliminary version that is reviewed further by the community before final, formal publication.

Permissive peer review, post-publication commentary.
This is PLoS ONE's approach. They have minimal peer review, with the expectation that the scientific community will then comment on and annotate the articles. I was already a bit skeptical of the merits of minimal peer review, as are others, and now a Nature news story, among others, has attacked the publication of a study on HIV and circumcision in PLoS ONE, arguing that peer review failed in this case. Sending out an unbalanced press release written by the author seems to have compounded the problem, and wasn't very responsible. A lengthy response has been posted to the article, showing that post-publication review can work, but plenty of journals have the option to post comments, and the horse has already bolted.

No peer review, post-publication commentary.
This is how Philica works, and now Nature Precedings, part pre-print server, part repository for preliminary work. I don't think that Philica is working; Nature Precedings will probably fare better. An essential difference is that while Philica is clogged with pseudoscience, Nature Precedings explicitly won't post pseudoscience, and it has the Nature brand name to help it gather interest and comments. I found an optimistically titled Web 2.0 Peer Reviewed Science Journal, which has a website but no articles. "This page that you are reading now is a review site, and I (Philip Dorrell) am the intended reviewer. If you, as an author of a scientific paper, are interested in having me review your paper, all you have to do is publish your paper as a web page, and then send an email". Hmm... sorry Philip, but peer review involves more than just your opinion on articles. Web 2.0 requires users and content.

BioMed Central is open access, PLoS is open access, the BMJ is open access, Nature Precedings is open access, and they are all experimenting with peer review. Matthew Falagas, having spotted this pattern linking open access with experimentation in peer review, has commented on it in Open Medicine (the open access journal that arose out of the editorial dispute at the CMAJ). I think it is worth stating that despite this trend, open access and open peer review don't necessarily go together. The biology journals in the BMC series still have anonymous review, as do the PLoS journals. The problem of access to an article is at a tangent to the problem of reviewing it - but, of course, community peer review can't work if not enough people have access to the article.

I think that if there is doubt about the integrity of peer review (and there is more and more doubt), the imperative to expose pre-publication review processes only increases. Journals can't just be paternalistic or secretive about peer review, and readers shouldn't take it on trust that an article labelled as 'peer reviewed' has been rigorously critiqued by experts in the field. PLoS ONE is encouraging its reviewers to make their reviews public on the published article, which is a great step. Requiring reviewers to opt out would be even stronger, but PLoS Medicine recently backed away from this policy.

If journals really want community peer review to work, they cannot just sit back and wait for comments to come in. Pre-publication peer review takes a massive effort on the part of editors to find qualified reviewers, and the chances of enough qualified reviewers stumbling across an article and feeling obliged to leave comments to make post-publication review viable and vibrant are low. Ways to solicit comments are essential, using email alerts for example. In a definite step in the right direction, PLoS ONE is organising virtual 'journal clubs'. Remember that anyone who has had a face-to-face journal club at their institute about a BioMed Central article, or a BMJ article, or a PLoS article, can and should post the results of the discussion as a comment on the article.

I think that open peer review and community peer review are the future of assessing scientific articles. It doesn't stop there - I've not even mentioned wikis!

18 Jun 2007

omg web 2.0 is kewl

I'm not going to try any sort of systematic assessment of 'Web 2.0' and science, or even bother with a definition. This is just a stream-of-consciousness post, prompted by my wide-eyed wonder at the explosion of social networking and 'user generated content'.


Nature have completely bought into all this Web 2.0 malarkey - they've got umpteen blogs, Nature Network, Connotea, a new aggregator thingy called Scintilla, Postgenomic... they've even got a group on the "social notworking" site Facebook and an island in Second Life.

Social networking is a huge part of Web 2.0. Myspace is a mess, Facebook is more for play than for work, but The Scientist recently profiled the use of the 'Facebook for professionals' site, LinkedIn, by scientists. Another site that I've yet to look at properly is SciLink.

Social bookmarking is quite the thing. As well as Connotea, there's citeulike, another shared record of articles people came across and liked (hence the name). All the BioMed Central journals have feeds on citeulike, and we're collaborating with them.

A hybrid of social networking and social bookmarking is StumbleUpon. This is a brilliant way to find random websites that might be of interest - each user can give a thumbs up or thumbs down to every website they visit, and write a review of it.


You specify the topics you're interested in (music, film, science), and then hit the 'Stumble' button, and a page in that area appears. It is tuned by the sites you've said you like or dislike. As well as 'stumbling', you can also view the sites a user has rated and reviewed, which acts as a kind of blog of their browsing. This is a glorious way to waste time, but the same functionality applied specifically to scientific literature would give a great way to serendipitously browse the literature.

Another great 'overlay' to the web is WOT, or the Web of Trust. Very simply, you rate how trustworthy a site appears, generally speaking or specifically as a business partner, in keeping personal information, and as a safe site for children. A little icon appears by hyperlinks, green for OK or red if warning you that you are about to visit a site others have deemed untrustworthy or unsafe. Genius.

JournalReview gets around the barrier of some publishers not taking comments and having restrictions on letters by letting anyone comment on any article in PubMed. It's got a bit of a medicine bias and could still be more user friendly, but it could really develop into a mature post-publication review process if it insisted on named reviews (or at least pseudonymous reviews), and used trust metrics. And could someone use Greasemonkey to reveal comments from JournalReview when viewing PubMed abstracts, or even publisher sites? David Rothman has profiled a plethora of similar sites, in a great series of posts titled 'Digg for the medical literature'. Another good aggregator of information about scientific articles is the brilliantly simple PublicationsList - researchers can use it to, well, list their publications. They're going to need a name disambiguator pretty soon though. ScientificCommons is doing much the same, and I like their layout more.

One of the most notable Web 2.0 pages is Youtube, the video sharing site. In a wonderful example of circularity, an anthropology professor, Michael Wesch, has posted a 5 minute video guide to Web 2.0, The Machine is Us/ing Us - I've had it embedded at the bottom of my blog for a while now. You can also find out why the banana is the atheist's nightmare. Youtube doesn't seem to be of much practical use to scientists, but there is a 'Youtube for science', the Journal of Visualized Experiments, or JoVE.

Anyone can now get involved in scientific research without being a guinea pig in a drug trial. Distributed computing first came into the public consciousness with SETI@Home, and now similar applications are popping up in biology. I've got Folding@Home running on my laptop; it's helping to calculate the folding of supervillin at the moment.


No mention of Web 2.0 would be complete without referring to Wikis. Wikipedia is the best known and biggest - the encyclopedia that anyone can edit. The accuracy of Wikipedia, or rather the lack of accuracy, has received quite a lot of flak, in particular the entries about science. I'm almost left speechless by these criticisms. If you spot an error in Wikipedia, and it exercises you so much that you complain about it in public - just edit it! You don't even need to register! Sheesh. Another criticism is that it can be written by experts to be impenetrable to the outsider. As someone noted on that blog post, "don’t bitch when your encyclopedia gives you too much information. That’s its freakin job". Knowing something about serpins, I checked out the entry, and it's frankly excellent. The entry for antitrypsin is not so great, so I gave it a quick edit and flagged it as needing more references.

Of course, blogs are now reviewing articles post-publication. There is a Greasemonkey script from Pedro Beltrao that adds trackbacks from blog posts indexed by Postgenomic to articles on Nature's site. I just tweaked it very simply to also be active on www.biomedcentral.com/* , and it works on our sites too. Cool. And now I see that Noel O'Blog has extended it to automatically work on PLoS, PNAS and BioMed Central. Thanks Pedro and Noel! Once you have it installed (which takes seconds if you have Greasemonkey), you can see it in action. Doesn't it look pretty?
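
The tweak in question really is tiny, because Greasemonkey decides where a userscript runs from the @include globs in its metadata block; adding www.biomedcentral.com/* is a one-line change. Here is a minimal sketch of that matching logic (the function names and example URL patterns are illustrative, not Pedro's actual code):

```javascript
// Greasemonkey runs a userscript only on pages matching its
// @include globs, e.g.:
//
//   // @include http://www.nature.com/*
//   // @include http://www.biomedcentral.com/*   <- the one-line tweak
//
// A helper that mimics Greasemonkey's glob matching, where *
// matches any run of characters:
function includeMatches(pattern, url) {
  // Escape regex metacharacters (but not *), then turn * into .*
  var escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  var re = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
  return re.test(url);
}

// The script's effective @include list after the tweak:
var includes = [
  "http://www.nature.com/*",
  "http://www.biomedcentral.com/*" // the added pattern
];

function scriptRunsOn(url) {
  return includes.some(function (p) {
    return includeMatches(p, url);
  });
}
```

Because * matches anything after the domain, that single added pattern is enough to cover every page on the site, article pages included.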

By the way, the ungrammatical title of this post owes a debt to Ratcatcher's blog (which itself owes a debt to this Wondermark cartoon) and is reminiscent of a recent (and entirely useless) example of user generated content: lolcats. In case this craze has passed you by, it involves pictures of cats, with deliberately poorly written and humorous captions in bold text. Otherwise sensible linguists on Language Log have been led astray by this 'meme', and there's even a programming language based on the unique grammatical style of lolcats, lolcode. The web doesn't have much to top this.

14 Jun 2007

Well done, Reed Elsevier

As has been reported in many places, Reed Elsevier will stop their involvement in defence exhibitions this year. It's an impressive example of how protest can get results. Congratulations to Reed Elsevier for listening to the calls for them to do this.

The perils of editing

The editor of Fertility and Sterility has apologized, in The Scientist, to authors he had accused of plagiarism and lying. Without getting into the rights and wrongs of this particular case, the legal threats flying back and forth serve to highlight why you need to tread carefully when accusing authors of misconduct!

Brian Deer in the BMJ recently reported on the controversy surrounding Mark Geier, a brave move for the journal's editors considering Dr Geier's familiarity with litigation, and also considering that they had recently apologised and paid £100,000 to Matthias Rath, another controversial doctor who they had accused of fraud.

It is fortunately not too often that journals receive threats of legal action, but I remember one author saying that he would sue us for rejecting his article; I remember it well, as I had handled that manuscript. Peter Newmark, our Editor-in-Chief at the time, gave this pretty short shrift, and sent the authors a response that put quite plainly his dim view of them threatening one of his editors like that. They didn't sue.

Journalology roundup #8

Sean Eddy Celebrates Open Access in Franklin Speech. "Sean Eddy [editorial board member of BMC Bioinformatics] accepted the 2007 Benjamin Franklin Award and then proceeded to poke a few good-natured holes in Franklin’s sterling open access reputation".
I blogged about this in my first post on the official BioMed Central blog.

Peer review in open access scientific journals. "Open access publications should be at the forefront in experimenting with strategies to foster what might be called an increasingly open science. As the open access movement blossoms, its supporters should continue to critically evaluate the parallel development of openness and transparency in the peer review process. We need to ensure that a commitment to high-quality peer review is maintained... Open access journals are in an ideal position to test the merits of open, unblinded, peer review". Although BioMed Central is open access and our medical journals have open peer review, there's no necessary connection between the two. However, I was surprised to see no mention of us at all in Matthew Falagas' article, considering that we have been consistently running full open peer review on more journals and for longer than any other publisher I know of.

Open journals' records to give reviewers their due. "I ... propose that journals' records should be made publicly available after an adequate lapse of time, including the names of reviewers and the confidential comments exchanged between editors and reviewers".

Diverse journal requirements for data sharing. "Conclusions: kudos to Nature and Science. I’m surprised that the policies of other journals are so lax". Point taken - BMC Bioinformatics was included in this comparison, and although we didn't fare too badly, we'll take another look at our policies.

Hwang case review committee misses the mark. "The Hwang committee's report indicates that it is becoming unacceptable for journal editors to hide behind the veil of peer review".

Factors Associated with Findings of Published Trials of Drug-Drug Comparisons: Why Some Statins Appear More Efficacious than Others. "This study examined associations between research funding source, study design characteristics aimed at reducing bias, and other factors that potentially influence results and conclusions in randomized controlled trials (RCTs) of statin-drug comparisons....RCTs of head-to-head comparisons of statins with other drugs are more likely to report results and conclusions favoring the sponsor's product compared to the comparator drug".

Modellers seek reason for low retraction rates. How scientific literature is shaped by withdrawn manuscripts.

Clinical trial registration: looking back and moving ahead. "Three years ago, trials registration was the exception; now it is the rule. Registration facilitates the dissemination of information among clinicians, researchers, and patients, and it helps to assure trial participants that the information that accrues as a result of their altruism will become part of the public record". Take your pick where to read it!

Stem cell figure retracted by Nature. Stem cell research seems dogged by errors and misconduct!

Science, being Green, and the precautionary principle

I'm feeling the conflict between being involved in science and being in the Green Party. A lot of members of the Green Party are instinctively opposed to many modern technologies and scientific practices, such as animal research, GM and lately, mobile phone and WiFi radiation. This attitude often rests on the precautionary principle, the idea that if something might cause harm it is better to act as though it does cause harm rather than to hope that it won't. I'm not opposed to this principle, but I despair at the tendency of the green movement (and newspaper weekend supplements) to succumb to hype and scaremongering. A prime example is Julia Stephenson, who is the Kensington and Chelsea Green candidate, and is a columnist for the Independent. She wrote a column recently, titled "My war on electrosmog", describing her efforts to rid her life of electromagnetic radiation after having her feelings of fatigue 'diagnosed' by her naturopath (not her GP, mind) as due to 'electrosmog'. Sigh. Particularly infuriating is the advertisement at the bottom of the article for "Magnetic field protection boxes" (starting at £235!), "Q-Link Pendants", "Anti-radiation mobile phone headsets" and so on. If anyone is interested, for a few pence I can fashion a tin-foil hat, which I can guarantee will be as effective. Luckily, Bad Science has come to the rescue, so I don't need to tackle this in exhaustive detail, but as I have responded to others in the Green Party on this issue before, I thought I might share my thoughts on the matter. Julia has responded to the outpouring of scorn from scientists, but her response that "Disconnecting my Wi-Fi made me feel better. End of. I don't need a degree in physics to work out if I feel well or ill" exactly highlights the problems with assessing public health issues or medical treatments on the basis of personal experience or anecdote.

One thing to highlight (as Julia has correctly noted) is that with all the research in this area, skepticism is a virtue. Experience of the biases prevalent in the reporting of industry-sponsored pharmaceutical trials teaches us this, and the arena of electromagnetic radiation is no exception. A recent systematic review by Matthias Egger (who knows a thing or two about systematic reviews) found that studies sponsored by the telecoms industry are less likely to report significant effects of electromagnetic radiation. So don't take any of the conclusions of research in this area at face value!

My opinion is that some people might be sensitive to electromagnetic radiation from mobile phone masts or WiFi - but if it were a ubiquitous problem, then many more people would have reported problems. Anecdotally, I have wireless broadband at home and I never get headaches or joint pain. I also regularly walk through WiFi hotspots with no noticeable symptoms. The problem with symptoms like headache, nausea etc. is that they are very non-specific, and can be psychological. An underlying condition may exist that causes migraine, but the symptoms may be misattributed to an external factor that happens to be present at the onset of the symptoms. I'd advise anyone experiencing a sudden onset of such symptoms to visit their GP (and not a naturopath).

There's a fair amount of evidence that strongly suggests that the symptoms experienced by those who believe that they are sensitive to electromagnetic radiation are not caused by that radiation. A systematic review by Simon Wessely (an editorial board member of BMC Psychiatry) found that "The symptoms described by "electromagnetic hypersensitivity" sufferers can be severe and are sometimes disabling. However, it has proved difficult to show under blind conditions that exposure to EMF can trigger these symptoms. This suggests that "electromagnetic hypersensitivity" is unrelated to the presence of EMF, although more research into this phenomenon is required".
A randomized provocation trial by the same authors (comparing those reporting EMR sensitivity to controls, and exposing some to mobile phone radiation, some to a carrier wave, and some to a sham exposure) found that "No evidence was found to indicate that people with self reported sensitivity to mobile phone signals are able to detect such signals or that they react to them with increased symptom severity. As sham exposure was sufficient to trigger severe symptoms in some participants, psychological factors may have an important role in causing this condition". Another systematic review by Prof Wessely and colleagues suggested that cognitive behaviour therapy may be useful for those reporting sensitivity to electromagnetic radiation.

As the experience of symptoms that patients associate with electromagnetic radiation exposure is likely to be psychological, this has possible implications for the precautionary principle. It has been argued that the precautionary principle, if coupled with overhyped warnings of risk, is potentially damaging: "Evidence is emerging that prior beliefs about the risks from modern technology are an important predictor of symptoms from perceived exposures. Thus, by distorting perceptions of risk, disproportionate precaution might paradoxically lead to illness that would not otherwise occur".

All this talk of electromagnetic radiation risks is very reminiscent of the fake documentary programme Brass Eye, and their take on 'Science'. This featured 'heavy electricity' caused by "particle accelerators sending huge jolts of power into domestic power lines... the devastating result is that huge masses of heavy electricity start randomly falling out of wires, and crashing on to anything below... Basically it is like getting hit by a ton of invisible lead soup". That pseudoscientific babble was read out (and believed) by the actor Richard Briers. If people will believe that, unfortunately they will believe anything.