Article-level metrics
This is a guest post by Joe Dunckley
Guys, are you sure you've thought this through? I mean, they're nice. They're fun. Data is fun. Seeing that somebody somewhere has read something you've written is satisfying and reassuring. It's good to know that you've sparked a conversation, and gotten people recommending you to their friends. But you think that it can't possibly go wrong? You think we should roll it out as the universal metric right now, and sort out the details later?
The impact factor is just data. It's nice for a publisher to know that people are reading the papers it publishes, and that all the hard work is having some effect. Surely nobody ever intended the absurd situation we have now, in which careers are made and broken according to a journal citation index. Give out article-level metrics and they'll soon stop being a bit of fun. People are going to use them and abuse them. It's what people do with data.
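And it really is just data -- simple division, in fact. Here's a minimal sketch of the standard two-year impact factor calculation; the journal and its numbers are invented for illustration:

```python
# A two-year impact factor is simple arithmetic: citations received this
# year to a journal's articles from the previous two years, divided by
# the number of citable items it published in those two years.

def impact_factor(citations_to_recent_articles, citable_items):
    return citations_to_recent_articles / citable_items

# Hypothetical journal: 420 citations in 2010 to its 2008-09 articles,
# of which there were 150 citable items.
print(impact_factor(420, 150))  # 2.8
```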
If you're going to reduce somebody's life's work to a number, it would certainly be less absurd to pick a number that is in some way relevant to that work, rather than to the work that a whole bunch of other people did several years earlier. But only a little bit less absurd. What do article-level metrics represent? The quality of the work, or the controversy you've stirred up? The web-savviness of the field? The number of friends you have? There is already huge variation between the impact factors that medical journals get and those that, say, ecology journals get. What if, at the article level, breast cancer turns out to be inherently more comment-worthy than bowel cancer? If tenure and funding committees are willing to use something as absurd as an impact factor when making a decision, do you think they're going to give a damn about the inherent variation in readership between fields? All those bowel cancer researchers had better start reading up on their breast cancers now.
But what happens when article-level metrics really start to mean something? Give a large enough number of people an incentive to cheat and some of them are going to cheat. Remember when the World Journal of Gastroenterology boosted its impact factor with a little citation loading? How are you going to stop academics from doing what Flickr users do to get themselves onto the site's front-page "Explore" section for the day's best photos -- posting their stuff to "I'll leave an inane comment on yours if you leave an inane comment on mine" groups? What happens when academics spend ever-increasing hours marketing their work to each other? How long is it going to be before journals and universities are competing for researchers by advertising how good their average article-level metrics are? Before journals and universities open departments dedicated to pressuring people into reading, commenting on, and blogging about their articles?
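Not that this kind of back-scratching would even be hard to spot, if anyone cared to look. A hypothetical sketch -- the authors, events, and threshold are all invented -- of how reciprocal commenting shows up as repeated mutual edges in the comment graph:

```python
# Flag "comment on mine and I'll comment on yours" pairs: count directed
# comment events between authors, then keep pairs where both directions
# recur. Data and threshold are invented for illustration.
from collections import defaultdict

comment_events = [  # (commenter, commented-on author)
    ("alice", "bob"), ("bob", "alice"),
    ("alice", "bob"), ("bob", "alice"),
    ("carol", "dave"),
]

counts = defaultdict(int)
for src, dst in comment_events:
    counts[(src, dst)] += 1

THRESHOLD = 2  # repeated in both directions looks like back-scratching
suspicious_pairs = {
    frozenset((a, b))
    for (a, b), n in counts.items()
    if n >= THRESHOLD and counts[(b, a)] >= THRESHOLD
}
print(suspicious_pairs)  # {frozenset({'alice', 'bob'})}
```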
What happens when pharmaceutical companies get in on the act? What about the ideas that get ignored for ten years before it becomes apparent how important they are -- do the metrics accrue to the original paper, or to the review article that reignites interest in it? What about the assholes, the trolls, the groupthink...?
Article-level metrics are a bit of fun. It's possible for them to remain a bit of fun. But it's going to take a lot of forethought and vigilance to make sure that is so.
1 comment:
This post was cited in: Jason Priem and Bradley H. Hemminger, "Scientometrics 2.0: New metrics of scholarly impact on the social Web," First Monday, 2010, 15(7).