The journal impact factor is a sham and a crock and a delusion; let's just take that as read. (If you don't care to take that as read, which is a healthy and sane attitude—take no one's word as gospel, especially not mine!—start here or perhaps here and keep going.) Using it to judge individual researchers' output, never mind the researchers themselves, verges on the criminal, is my strong belief. I'm not against heuristics, but some heuristics are plain broken, and the journal impact factor is one of those.
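(For anyone who hasn't seen it spelled out, the standard two-year impact factor reported in Journal Citation Reports is, roughly,

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of ``citable items'' the journal published in years } Y-1 \text{ and } Y-2}
\]

where what counts as a "citable item" in the denominator is Thomson's call, not yours. Take that as a rough sketch of the published definition, not a recipe for reproducing JCR's numbers.)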
So it really hurts my heart to see librarians giving this flawed number credence. Librarians! We who call ourselves information experts!
I won't link-and-shame, despite temptation. I'll just say that in the last week, I've come across one library blog posting a list of "here are high-impact journals in X discipline" and another library doing a workshop on "use the impact factor to choose where you publish!"
We are better and wiser than this. I hope we are better and wiser than this.
Of course we can't ignore the impact factor. That's a very long way from saying that we ought to celebrate, support, or draw positive attention to it. When we mention it, we should wrinkle our librarianly noses in disdain. When we teach workshops on it, our attitude should be "look, this system is bad and wrong and I'll happily show you why, but we're stuck with it until everyone wises up, so here's how to game it as best you can." When we look over serials subscriptions, we should frankly ignore it.
It doesn't hurt that by breaking the back of the impact factor, we're reducing the influence of many of the very journals whose inflated prices are breaking our backs.
We have authority and power. This is one very serious situation in which we owe it to our researchers and ourselves to use it wisely.
I hope my library's workshop isn't one you're railing against. We do a workshop called "Got Impact," which is mostly about finding citations to one's work. We touch on the impact factor and how to use JCR, but we very much put it in context and distribute a reading list of articles critical of it as a measure. The problem, in my opinion, lies not with individual researchers but with department chairs and hiring and P&T committees. Those people tend to be hard for libraries to reach.
Up there with using download counts as the primary method of assessing critical journal subscriptions. Such numbers are of limited value when consulted alongside a range of other metrics and measurements, and next to useless, if not dangerous, when deployed as the primary method of assessment.
It's funny, jenjen, I've gotten three private emails already asking if I was targeting their program.
I'm not targeting anybody specifically; that's why I declined to link. This isn't a problem of a few people. It's endemic.
Your approach seems sane, as these things go. Hook 'em with the I-word, then teach 'em better.
It is flawed, but it does provide *some* information. When combined with local and global measures as well as local community needs, it can be useful for evaluating journals. Never should a single number be used to evaluate a person - particularly a single number meant to provide information about a journal - but I do not agree that there is no value in the measure at all.
Christina, there is potential for an impact-factor-like citation-analysis measurement to be worthwhile.
I don't think the Impact Factor (tm) is that measurement, I genuinely don't -- not least because it appears that not even Thomson can reproduce its own numbers reliably.
And as we both agree, the ways it is typically used are pure unadulterated wrongness.