Inappropriate citations?

Kevin Zelnio of Deep Sea News tweeted the title of this piece, and it sent my mind over the various theories of citation: what citations mean, studies showing how people cite without reading (pdf) (or at least propagate obvious citation errors), and how in some fields people use things but don't cite them... I was also thinking: I know what inappropriate touching is, but what's inappropriate citing? So let's take a look at the article:

Todd, P., Guest, J., Lu, J., & Chou, L. (2010). One in four citations in marine biology papers is inappropriate. Marine Ecology Progress Series, 408, 299-303. DOI: 10.3354/meps08587

According to the authors, inappropriate citations intentionally or unintentionally misrepresent the meaning of the work cited. Here are some aspects they mention:

  • citing a review article instead of the primary work (hmmm)
  • citing something that asserts the idea based on another citation, not based on the work presented in that paper ("empty" citations)
  • misunderstanding or misinterpreting an article (or citing without reading)

I guess the first author's been on sort of a kick about this; the method they use comes from his earlier paper in ecology. They also reference similar studies in a number of areas of medicine.

They selected a couple of articles from recent issues of 33 marine biology journals, and for each article they picked one citation offered in support of a single assertion. They rotated where in the article the citation came from: the introduction, methods, or results/discussion. They then retrieved the cited article and coded whether it provided clear support, no support, or ambiguous support for the assertion, or was an empty citation. Here's an issue: majority ruled and ties went to the author, whereas the more typical approach is to negotiate disagreements and/or report an inter-rater reliability measure. You can see how this could be problematic for the ambiguous category, which has the following scope note:

"The material (either text or data) in the cited article has been interpreted one way, but could also be interpreted in other ways, including the opposite point. The assertion in the primary article is supported by a portion of the cited article, but that portion runs contrary to the overall thrust of the cited article. The assertion includes 2 or more components, but the cited article only supports one of them"

The assertions were clearly supported 76% of the time, but another 10% were ambiguous. It didn't matter which section the citations came from, and there was no effect of the number of authors, the number of references in the list (that would have been interesting, because it might indicate some sort of padding), the length of the article, or the journal impact factor (again, a correlation with that proxy for "journal quality" would have been interesting).

They suggest that this practice could undermine an entire discipline and that padding reference lists to get a paper accepted is dirty business. I'm not really sure it's as widespread or as pernicious as all that. Based on their methods, we don't know how much of the percentages found could be accounted for by inter-rater disagreement. How often, and how well, did the raters agree? In the ambiguous category in particular, that could make a big difference.
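Since my main methodological quibble is the missing inter-rater reliability measure, here's a minimal sketch of Cohen's kappa, one common such statistic, which corrects raw agreement for the agreement you'd expect by chance. The ratings below are invented for illustration; they are not the paper's data, and the category labels are just my shorthand for the study's four codes.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of 10 citations by two raters (illustrative only)
a = ["clear", "clear", "ambiguous", "clear", "none",
     "clear", "empty", "clear", "ambiguous", "clear"]
b = ["clear", "clear", "clear", "clear", "none",
     "clear", "empty", "clear", "ambiguous", "clear"]
print(round(cohens_kappa(a, b), 2))  # prints 0.81
```

Here the raters agree on 9 of 10 items (0.9 raw agreement), but because "clear" dominates both raters' codings, chance agreement is already 0.46, so kappa comes out lower, about 0.81. Reporting a figure like this would tell readers how much of the 10% "ambiguous" bucket might just be rater noise.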

They use this to argue that citations might not be the best thing to use to judge people and institutions (well, yeah!). They also rest the responsibility on authors, suggesting they be more careful, avoid dumping whole lists of references at the ends of sentences, and cite correctly. They suggest journals could require random audits of citations; it seems like most journals are more likely to go the other way and suggest new citations.

The biggest problem with this is the assumption that all citations have to support assertions; some citations point to other ways a method was used, or to sources of additional information. A citation is an indication of utility, not quality, really. Also, some mistakes just happen: the wrong article by the right author (maybe a wrong click when inserting the citation into the manuscript, or faulty memory), and I don't think anyone is really suggesting an article should be retracted or an erratum issued if this is discovered. Dunno, I'm under-impressed by the article and the severity of the issue... you?


Hey Christina: Interesting to see that our USC linker/resolver gives me the "Find it at USC" branded link even when I'm not logged in via the proxy!
You know what has amazed me in this century: how INACCURATE ACADEMIC ENGINEERS (sorry guys, I love you otherwise) are in citations! As you say, citing abstracts, or someone else's vague reference, etc. I hafta say I found the physicists at Fermilab more accurate in general.
Now, this COULD WELL be a phenomenon of INFO OVERLOAD, too much to read, let alone cite?
Sara

By Sara Tompson (not verified) on 05 Jun 2010 #permalink

"empty" citations could definitely add complication for newcomers to the field. (I suspect some of these are inferences, others are citing the most recent work by the author to whom these are attributed.) Many papers are directed to specialists in the field who are deeply familiar with the work going on, and have their own impression of it. For that audience, making incorrect assertions (or overstatements) about what a cited paper said is less problematic -- the BS or sloppy thinking is more clear.

But it puts me in mind of citation typing.

If you have the LibX toolbar installed, it puts the resolver link there; I see the one from my place of work (a cool thing about using ResearchBlogging.org: they add COinS data to the citations). A lot of people are really messy about citations: wrong or missing volumes and pages, journal names wrong; librarians see this all the time. Another reason I don't think this is as bad a thing as the authors do!

I don't know. If you follow a citation and it goes to a review article, that article will probably cite the standard articles, so it just adds another link in the citation chain, but it might also point out a lot more than just that one article. I see their point when an article is miscited a lot (a bunch of people citing it as saying one thing when it says something different), but I'm not as worried about citations to review articles.

I think it's bad form to cite a review without saying so (e.g. "reviewed in Bloggs et al 1867"), but it doesn't seem harmful.

Inaccurate or empty cites are a different matter. As a reviewer I quite often find these, and it's not as though I check every reference (I probably should, but who has the time?). JS at #2 makes a good point about specialists being able to sort through the BS, but as Open Access becomes more prevalent, and as patient advocate groups like PatientsLikeMe get stronger, the risk increases that readers with less well-prepared defenses against such misinformation will be exposed to it.