I just found out that the journal impact factors for 2005 were recently released, and as usual, the journals with the highest impact factors are not necessarily the ones that would be considered the most prestigious. Therefore, the following post from the archives, about an alternative rating scheme for scientific journals, seemed relevant. Enjoy!
In regard to my statement below about my old site's PageRank, I did finally get one, but apparently the new site hasn't made an impression on Google yet....
(17 February 2006) How do you know how important a website is? You probably already have a good idea without looking at any numbers, but if you want to get all quantitative about it, one way would be to look at how many hits it gets per day. The more commonly used measure, though, is its Google PageRank, which is based not only on how many other webpages link to that page, but also on how important the linking pages themselves are (based, in turn, on who links to them).
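To make that a little more concrete, here's a rough Python sketch of the idea. It's only a sketch: the pages and links are made up, and the real thing runs over billions of pages with far more machinery. The core loop, though, is just "a page's importance is the sum of the importance flowing in from the pages that link to it":

    # A minimal PageRank sketch (power iteration) over a made-up link graph.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    # A page with no outgoing links spreads its rank evenly.
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Invented pages and links, purely for illustration.
    toy_web = {
        "nytimes.com": ["scientificactivist.blogspot.com"],
        "somebiglab.example.edu": ["nytimes.com", "scientificactivist.blogspot.com"],
        "scientificactivist.blogspot.com": [],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda x: -x[1]):
        print(f"{page}: {score:.3f}")

(In this made-up graph, at least, the blog does pretty well.)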
Editorial note: I hope that we don't put too much weight on PageRanks, because for some reason The Scientific Activist doesn't have one yet, despite boasting over 100,000 visitors in its first month! What's wrong, Google? Was it something I said? I can make it up to you. I promise. Pleeeeeaaaaaase....
Anyways, where was I?
Oh, yes, so today Nature magazine reported that some researchers are pushing to use this same PageRank technology to rate scientific journals. Currently, the importance of a journal is quantified by a number called the ISI Impact Factor: the number of citations a journal's recent articles (those from the previous two years) receive in a given year, divided by the number of articles it published in that window; roughly, citations per article (there's a quick sketch of the arithmetic after the excerpt below). Not everyone is satisfied with this system, and in another entry in the eternal debate over quality versus quantity, Nature explains:
Now Johan Bollen and his colleagues at the Research Library of Los Alamos National Laboratory in New Mexico are focusing on Google's PageRank (PR) algorithm. The algorithm provides a kind of peer assessment of the value of a web page, by counting not just the number of pages linking to it, but also the number of pages pointing to those links, and so on. So a link from a popular page is given a higher weighting than one from an unpopular page.
The algorithm can be applied to research publications by analysing how many times those who cite a paper are themselves cited. Whereas the IF measures crude 'popularity', PR is a measure of prestige, says Bollen. He predicts that metrics such as the PR ranking may come to be more influential in the perception of a journal's status than the traditional IF. "Web searchers have collectively decided that PageRank helps them separate the wheat from the chaff," he says.
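For comparison, the Impact Factor side of things is plain arithmetic. Here's a hedged sketch of the standard ISI-style calculation, with a completely invented journal and invented numbers:

    # A rough sketch of an ISI-style Impact Factor for an invented journal.
    def impact_factor(citations_to_recent_articles, articles_published):
        """Citations this year to articles from the previous two years,
        divided by the number of articles published in those two years."""
        return citations_to_recent_articles / articles_published

    # Say a journal's 2003-2004 articles picked up 2,000 citations in 2005,
    # and it published 400 articles over 2003-2004 (all figures invented):
    print(impact_factor(2000, 400))  # 5.0

Every citation counts the same here, no matter where it comes from; that's exactly the "crude popularity" Bollen is contrasting with the citation-weighted PageRank.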
So, why do we even need to measure the relative importance of scientific journals at all? For bragging rights, of course!
Well, maybe there's a little more to it than that:
Ranking journals and publications is not just an academic exercise. Such schemes are increasingly used by funding agencies to assess the research of individuals and departments. They also serve as a guide for librarians choosing which journals to subscribe to. All this puts pressure both on researchers to publish in journals with high rankings and on journal editors to attract papers that will boost their journal's profile.
People in the scientific community already have a basic idea of the relative importance of different journals. For example, a scientist knows that The Journal of Biological Chemistry is more prestigious than Protein and Peptide Letters, just like you probably know that nytimes.com is a more influential website than, well, scientificactivist.blogspot.com. Scientists can already agree on the basic rankings, at least at the top. The vast majority of scientists would consider Science and Nature to be the "best" journals to publish in. There would probably be a pretty strong consensus on the next set of journals as well, which would include Proceedings of the National Academy of Sciences and Cell. The further down you go, though, the messier it gets. That's where the Impact Factor comes into play.
Near the top, though, the Impact Factor does not line up with this general consensus, but the PageRank does... sort of.
Where the Impact Factor and PageRank each fall short, a different measure called the Y-factor, a combination of the Impact Factor and the PageRank, seems to really get the job done. Of course, this all depends on what exactly we want to measure, but if we're just interested in attaching numbers to what we already know, then the Y-factor puts the other systems to shame. Apparently, Bollen agrees:
Bollen, however, proposes combining the two metrics. "One can more completely evaluate the status of a journal by comparing and aggregating the different ways it has acquired that status," he says. Some journals, he points out, can have high IFs but low PRs (perhaps indicating a popular but less prestigious journal), and vice versa (for a high-quality but niche publication). Using information from different metrics would also make the rankings harder to manipulate, he adds. So Bollen and his colleagues propose ranking journals according to the product of the IF and PR, a measure they call the Y-factor....
...But for Bollen, ranking journals more effectively by combining different ranking systems could help protect the integrity of science. He warns that scientists and funding agencies have used the ranking system well beyond its intended purpose. "We've heard horror stories from colleagues who have been subjected to evaluation by their departments or national funding agencies which they felt were strongly influenced by their personal IF," he says. "Many fear this may eventually reduce the healthy diversity of viewpoints and research subjects that we would normally hope to find in the scholarly community."
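Mechanically, the proposed combination is about as simple as it sounds: multiply the two numbers together. Here's a toy sketch with invented figures (the scale of the PageRank values is my own assumption; Bollen's actual calculation works from the full journal citation graph):

    # Toy Y-factor: the product of a journal's Impact Factor and its PageRank.
    # The journal names and every number below are invented for illustration.
    def y_factor(impact_factor, pagerank):
        return impact_factor * pagerank

    journals = {
        "Popular Review Weekly":     (12.0, 0.002),  # high IF, lower PR
        "Prestigious Niche Letters": (3.0, 0.010),   # lower IF, high PR
    }
    for name, (if_score, pr_score) in journals.items():
        print(f"{name}: Y = {y_factor(if_score, pr_score):.4f}")

With these made-up numbers the niche journal actually comes out ahead, which is the point: neither metric alone tells the whole story.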
So, the message here is that if people are going to abuse or overuse these rankings, we might as well make them as accurate as possible. I could buy that. However, if we really want to be scientific about this, we need detailed studies showing that the Y-factor works best across the whole range of rankings, not just at the top. Those could be based on surveys of scientists' opinions, or on other, less subjective measures. If the Y-factor really does prove to be a better measure of a journal's impact, then the scientific community should embrace the improvement.
As someone not deeply involved in molecular studies, I'm saddened by how things have turned out. If you look at the top of the list, it's really just medicine and the molecular sciences. These rankings don't correct for the sheer number of people working in a given field, which effectively makes other journals "less important". If I understand correctly, this poses a threat to marginal fields like taxonomy, whose practitioners can barely get funding for their research even though their work is of monumental importance (how are you going to cure malaria if you can't identify the right Anopheles?). It's similar with other groups as well.