What is the scientific paper? 3: The metric
This is a guest post by Joe Dunckley
Continuing the series exploring the question "what is the scientific paper?", reposted from my old blog, and originally written following Science Online 2009. The topic of this post was first discussed on FriendFeed.
On my recent post, what is wrong with the scientific paper?, Steve Hitchcock commented that the most important problem with the paper is access, and that once we solve access, everything else will follow. I agree that access is hugely important, I recognise that we haven't yet won everyone over, and I know we must keep working away at the access problem, so I will devote a future post to reviewing that topic. But having thought about it a little longer, I am more convinced than ever that access is not the big problem holding back the paper and the journal, and that open access is not the solution from which all others follow and fall into place.
There is one big problem, a single root problem from which all the others follow. The ultimate cause is not, as I said last week, the journal. It is more basic than that. It is the impact factor. The journal is the problem with disseminating science, but the reason it has become the problem, and the reason people let the problem continue, is the impact factor. The impact factor is a greater problem than the access problem, because the former stands in the way of solving the latter. The impact factor is a great competition killer: by far the greatest barrier to innovation and development in the dissemination of science.
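For readers unfamiliar with the metric itself: a journal's impact factor for a given year is, roughly, the mean number of citations received that year by the items the journal published in the preceding two years. A hypothetical worked example, with made-up numbers:

    impact factor (2009) = citations in 2009 to items published in 2007–2008
                           ÷ citable items published in 2007–2008

So a journal whose 400 citable items from 2007–2008 attracted 2,000 citations during 2009 would have a 2009 impact factor of 2,000 ÷ 400 = 5.0.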
Scientists can look at all of the problems with disseminating science, and they can watch us propose all of these creative and extravagant solutions. They might agree entirely with our assessment of the state of the scientific paper and the journal, and they can get as excited as we do at the possibilities that flow from new technologies. But blogs and wikis are mere hobbies, to be abandoned when real work starts piling up; databases are a dull chore, hoops to jump through when preparing a paper. So long as academics can get credit for little else besides publishing in a journal — a journal with an impact factor — any solution for publishing science outside the journal will never be anything more than a gimmick, a hobby that takes precious time away from career development.
In a worse position than blogs and wikis, for which cheap, easy tools are freely available, are the wonderful but complicated ideas that need financial backing to implement — the databases, open lab notebooks, and the like — which are currently rendered artificially unviable because no scientist can afford to spend time and money on a product that isn't a journal with an impact factor. No scientist can try something new; no business can offer anything new. Even such an obviously good idea, and such a tame and simple advance, as open access to the scientific paper has taken over a decade to get as far as it has, partly because it takes so long for start-up publishers with a novel business model to develop a portfolio of new journals with attractive impact factors.
I am not a research scientist. I don't have to play the publish-or-perish game. So I have no personal grudge: no career destroyed or grant lost to rejection from a top-tier journal. It doesn't bother me how much agony, absurdity, and arbitrary hoop-jumping research scientists must endure in their assessments and applications. But it bothers me greatly that, by putting such weight on the publication record — not the actual quantity and quality of the science done, but a specific proprietary measure of the average impact of the journals (and journals alone) in which it is published — public institutions across the world are distorting markets, propping up big established publishers, and destroying innovation in the dissemination of science. End the malignant metric and everything else will follow.