Everyone and their grandmother knows that the Impact Factor is a crude, unreliable, and simply wrong metric to use when evaluating individuals for career-making (or career-breaking) purposes. Yet so many institutions (or rather, their bureaucrats – scientists would abandon it if their bosses let them) cling to the IF anyway. Probably because nobody has pushed for a good alternative yet. In the world of science publishing, when something needs to be done, people usually look to us (that is: PLoS) to make the first move. The beauty of being a trail-blazer!
So, in today’s post ‘PLoS Journals – measuring impact where it matters’ on the PLoS Blog and the everyONE blog, PLoS Director of Publishing Mark Patterson explains how we are moving away from the IF world (basically by ignoring it, despite our journals’ potential for marketing via their high IFs, until the others catch up with us and start ignoring it as well) and focusing our energies on providing as many article-level metrics as possible instead. Mark wrote:
Article-level metrics and indicators will become powerful additions to the tools for the assessment and filtering of research outputs, and we look forward to working with the research community, publishers, funders and institutions to develop and hone these ideas. As for the impact factor, the 2008 numbers were released last month. But rather than updating the PLoS Journal sites with the new numbers, we’ve decided to stop promoting journal impact factors on our sites altogether. It’s time to move on, and focus efforts on more sophisticated, flexible and meaningful measures.
In a series of recent posts, Peter Binfield, managing editor of PLoS ONE, explained the details of article-level metrics that are now employed and displayed on all seven PLoS journals. These are going to be added to and upgraded regularly, whenever we and the community feel there is a need to include another metric.
What we will not do is try to reduce these metrics to a single number ourselves. We want to make all the raw data available to the public to use as they see fit and we will all watch as the new standards emerge. We feel that different kinds of metrics are important to different people in different situations, and that these criteria will also change over time.
It may be important to you that one of your papers is seen by your peers (perhaps for career-related reasons, which is nothing to frown upon), in which case the citation numbers and download statistics may matter much more than the bookmarking statistics, the media/blog coverage, or the on-article user activity (e.g., ratings, notes and comments). At least for now – this may change in the future. But you may think another paper is particularly important for physicians around the world to see (or science teachers, or political journalists, etc.), in which case media/blog coverage numbers matter much more to you than citations – you measure your success by how broad an audience you can reach.
This will differ from paper to paper, from person to person, from scientific field to field, from institution to institution, and from country to country. I am sure there will be people out there who will try to put those numbers into various formulae and crunch the numbers and come up with some kind of “summary value” or “article-level impact value” which may or may not become a new standard in some places – time will tell.
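To make that idea concrete, here is a minimal, entirely hypothetical sketch of such a "summary value" formula. The metric names and weights are invented for illustration – they are assumptions, not anything PLoS provides or endorses – and the whole point of the post is that different readers would choose very different weightings:

```python
# Hypothetical "summary value" for a paper, computed as a weighted sum of
# article-level metrics. Metric names and weights are invented examples.

def summary_value(metrics, weights):
    """Weighted sum of whatever article-level metrics are available.

    Metrics absent from the weights dict contribute nothing.
    """
    return sum(weights.get(name, 0.0) * value for name, value in metrics.items())

# A clinician-facing paper might weight media coverage heavily...
clinical_weights = {"citations": 0.2, "downloads": 0.3, "media_mentions": 0.5}
# ...while a career-stage evaluation might weight citations heavily.
career_weights = {"citations": 0.7, "downloads": 0.25, "media_mentions": 0.05}

# Invented numbers for a single (imaginary) paper.
paper = {"citations": 12, "downloads": 850, "media_mentions": 4}

print(summary_value(paper, clinical_weights))
print(summary_value(paper, career_weights))
```

The same raw numbers yield different "impact" scores under the two weightings, which is exactly why publishing the raw data, rather than one pre-baked number, lets each community weigh what matters to it.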
But making all the numbers available is what matters most to the scientific community as a whole. And that is what we will provide. And then the others will have to start providing them as well, because authors will demand to see them. Perhaps this is a historic day in the world of science publishing….