I Unwittingly Manipulate A Citation Index

The list of misdemeanours that identifies an Open Access science journal as predatory rather than bona fide is long. One of them is attempts by the publisher and editors to manipulate the journal's citation index, for instance by demanding that authors cite earlier work published in the same journal. If many scholars cite papers in a given journal, then that journal's index improves -- even if the citing only goes on inside the covers of the journal itself.
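To make the arithmetic concrete: the common two-year impact factor is simply the citations a journal receives in a given year to items it published in the previous two years, divided by the number of citable items it published in those two years. Below is a minimal sketch with made-up numbers (none of them refer to Fornvännen or any real journal) of how coerced self-citation inflates that ratio.

    # Toy two-year impact factor calculation; all figures are hypothetical.
    citable_items_2012_2013 = 80       # papers the journal published in 2012-2013
    external_citations_2014 = 120      # 2014 citations coming from other journals
    coerced_self_citations_2014 = 60   # extra same-journal citations demanded by the editor

    honest_if = external_citations_2014 / citable_items_2012_2013
    inflated_if = (external_citations_2014 + coerced_self_citations_2014) / citable_items_2012_2013

    print(f"impact factor without self-citation pressure: {honest_if:.2f}")   # 1.50
    print(f"impact factor with coerced self-citations:    {inflated_if:.2f}") # 2.25

Because the denominator stays the same, every extra in-house citation goes straight into the numerator, which is what makes the tactic attractive to a predatory outfit.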

When I first read about this criterion I was a little embarrassed, because I do that all the time when editing Fornvännen. I don't demand that authors cite earlier papers in our journal, but I often suggest that they should, because it's part of my job as editor to make sure that authors acknowledge the latest relevant work in their fields. Still, ours is not a predatory operation.

To start with, few scholars in the Scandinavian humanities pay any attention to citation indices. Ours aren't global fields of inquiry such as those covered by Nature and Science. I have no idea what Fornvännen's citation index is and I don't know how to find out. Our authors wouldn't even notice if our citation index improved due to shenanigans.

Secondly, the number of journals in our fields is tiny. We're not one of a hundred journals competing for the same papers. Thirdly, we practice green Open Access, so we don't make any money off of authors, or indeed at all. And fourthly and most importantly, Fornvännen is in its 109th year of uninterrupted publication and has no need to reinforce its brand. Within the parameters of a regionally delimited field in the humanities, for us to try to manipulate our citation index would be like Science or Nature doing it.


There is a fine distinction here. Insisting that authors cite papers on the subject, which may be in your journal or some other publisher's, is reasonable. The issue is journals which push authors to cite papers specifically in that journal, not necessarily because they are relevant. The trick is to distinguish between the two cases.

By Eric Lund (not verified) on 06 Feb 2014 #permalink

I am bloody tired of the American glorification of the Citation Index. In some subfields there is little if any positive correlation between the number of journal papers that cite a paper and the number of people who may directly or indirectly make use of the paper's data or conclusions out in the real world. Valuing the former while ignoring the latter turns science into a form of mental, let's say, self-gratification. And let's not even talk about the ridiculousness of judging people's worth as scientists based upon how many citations are received by average (i.e., OTHER) authors in the journals in which they publish. That's like saying my intellectual quality should be rated higher because I once sat next to a [well-known field-specific] prize winner at dinner.

There's also the well-known sociological factor that a scholar with power over academic resources will get cited more than others.

I don't understand the point of journal ranks or citation indices either, nor of rewarding people for publishing in famous journals. These days it is easy enough to get a copy of most things, so I think anything that sounds interesting and has passed peer review is worth glancing at. But I know that university administrators want a way to measure research productivity, and think that just counting peer-reviewed publications is too simple.

The trouble with citation indices is that they measure certain things, which are not necessarily what you want or need to know. A citation is a citation, whether the citing author is praising or criticizing the work in question (some years ago, a senior scientist in my field confessed that his then most-cited paper got most of its citations from a rival who claimed that his paper was incorrect). But because they actually measure something, bean counters tend to use them, even for comparison between fields--a purpose for which they are worse than useless, because citation practices differ between fields, and even between subfields within a field.

By Eric Lund (not verified) on 09 Feb 2014 #permalink

Here is a link to a well-known fail, at least in the field of crystallography:

www.niscair.res.in/jinfo/ALIS/ALIS 58(1) (Correspondence).pdf

What I found especially embarrassing at the time was how primitive the widely used citation indices are, if they can fail because of a single paper.

By Ulf Lorenz (not verified) on 11 Feb 2014 #permalink

Damn it, the spaces should have been protected. In any case, they are part of the URL. Otherwise, a Google search for "impact factor acta cryst" will also turn up relevant entries.

By Ulf Lorenz (not verified) on 11 Feb 2014 #permalink