Doctoring Your Way to a Doctorate

If you have not read it, go check out Nicholas Wade's article on doctored images in scientific publications. This is especially pertinent given the recent Hwang Woo Suk stem cell debacle. There is nothing all that revolutionary in it, but Wade gives a nice review and introduces us to some of the editors who are trying to catch the cheaters. In commenting on the article, John Hawks brings up a good point regarding Photoshop:

"I don't worry too much about Photoshopping illustrations of fossils. Instead I worry about two things.

"One is picture selection. It is easy to choose pictures that support your argument and hide pictures that don't. Sometimes these are really obvious, like when two bones are shown side by side in different orientations. When that happens, you just know that something isn't what the paper is saying."

I'm not too concerned with fossils -- I think I've only seen one scientific talk that involved fossils -- but I do see a lot of talks with pretty pictures of cells and tissues (one of the side effects of being an evolutionary geneticist is that you end up getting exposed to developmental and cellular genetics as well). Keep in mind, I'm not talking about doctoring pictures of gels (where I think the biggest concern lies). More on images in publications after the jump . . .

About a year ago, I was talking with another graduate student, and the topic of statistical rigor in different areas of genetics came up. He also studies population genetics, and we commented on how in evolutionary genetics it's impossible not to be exposed to statistics that go over your head, and you have to do your best to tread water and make sense of all the math. We also agreed that it was aggravating that in other areas of research, a lot of information is presented as a "sample size of one" (I'll get to what I mean by that in a moment).

Now, I am not arguing that the mathematics in other areas of genetics is unimportant. In fact, I think it's often overlooked how important statistics are in all of science. Researchers who use genetics as a tool to study development and cell biology do perform statistical tests to show significant differences in some trait between two genotypes. I'm often surprised, though, by how limited the understanding of statistics is in the non-quantitative areas of genetics.
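
To be concrete about the kind of test I mean, here's a minimal sketch in Python (the bristle counts are invented purely for illustration, and this is just the garden-variety two-genotype comparison, not anyone's actual analysis):

```python
from scipy import stats

# Hypothetical bristle counts for two genotypes (invented numbers,
# for illustration only).
wild_type = [38, 41, 40, 39, 42, 40, 41, 39]
mutant = [33, 35, 34, 36, 32, 35, 34, 33]

# Welch's t-test: do the genotype means differ more than chance allows?
t_stat, p_value = stats.ttest_ind(wild_type, mutant, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```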

What bothers me most, however, is that aforementioned sample size of one. Anyone who has seen a talk in which mutant and wild-type phenotypes are shown knows that there is usually one representative for each phenotype. This is often the best picture the researcher could find of what they consider the archetypal mutant. The phenotype could show incomplete penetrance, or vary in expressivity with environmental conditions, but we never know, because we are only shown a single picture. Oftentimes, it's not clear whether the mutant phenotype shown is found in all individuals with a particular genotype or in just a few, and we're seeing a picture of one of them. I got so frustrated with this sample size of one that I began asking anyone who showed a picture of a mutant phenotype whether all of the individuals have the phenotype or if they just picked out a particularly good picture. I think I stopped going to these types of talks before anyone admitted cheating (and I really don't think it's a very big deal, but it bothered me nonetheless).
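
To put a number on the complaint: a single picture is statistically almost worthless. A minimal sketch (Python; the counts are invented, and the exact Clopper-Pearson interval is just one standard choice) of what different sample sizes actually tell you about penetrance:

```python
from scipy.stats import beta

def penetrance_ci(affected, total, level=0.95):
    """Clopper-Pearson (exact binomial) confidence interval for
    penetrance, given `affected` mutants showing the phenotype
    out of `total` individuals scored."""
    alpha = 1 - level
    lo = beta.ppf(alpha / 2, affected, total - affected + 1) if affected > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, affected + 1, total - affected) if affected < total else 1.0
    return lo, hi

print(penetrance_ci(1, 1))    # one picture: (0.025, 1.0) -- almost no information
print(penetrance_ci(48, 50))  # 48 of 50 scored: roughly (0.86, 0.99)
```

One mutant in one photo is consistent with a penetrance anywhere from 2.5% to 100%.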

But why do people manipulate images? There's the obvious reason: they needed to cheat in order to get their paper published (a major concern). But there is also a lot of competition over who can produce the prettiest pictures. If you want your paper to make the cover of an issue, it doesn't hurt to have an eye-catching illustration. Being on the cover means more exposure for your research, and every scientist wants exposure. The Drosophila community even gives out awards for the best published image of the year (I bet other research communities do this too). I don't know how big a problem cheating is here, but doctoring an image to make it look better is less of a concern than doctoring it to change your results. It's not like these people are adding or removing bands from gels, but the pressure to manipulate your images is there.


It's not just an issue that comes with the advent of Photoshop, of course. When Vesalius came out with On the Fabric of the Human Body in 1543, it was illustrated with stunning engravings that students of anatomy sometimes regarded as more authoritative guides to human anatomy than the cadavers right in front of them.

Part of it may turn on the tension between Platonist and Aristotelian impulses. The Aristotelian anatomist would look at oodles of cadavers (or Drosophila, or whatever), try to pin down the features common to all of them, then put those in the representation. The Platonist, instead, tries to grasp the more perfect "form" of the human body (or Drosophila, or whatever) from its imperfect material instantiation in any given cadaver (Drosophila, etc.) -- representing what you'd see if you were seeing the form without the intervening imperfection of material stuff.

The Platonist isn't dishonest per se in his representations. I think the problems come when a Platonist passes himself off as, or is mistaken for, an Aristotelian (i.e., when it's assumed that all or most of the specimens looked like the "representative" specimen pictured).

One thing that I never hear discussed is that much of this gel picture doctoring is pretty silly stuff. People change the contrast to emphasize the bands they care about, and to remove spots and what have you. All of this is wrong and somewhat annoying to me, but in general there is some confidence that the researcher believes his results. He just wants to make it really easy for others to agree with him. Not defending that, just saying.
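
For what it's worth, the contrast tweak being described is essentially a one-liner. A minimal sketch in Python/numpy (assuming an 8-bit grayscale gel image already loaded as an array; the function name and parameters are mine):

```python
import numpy as np

def stretch_contrast(gel, gain=1.8, midpoint=128):
    """Linear contrast stretch around a midpoint: bands darker than
    the midpoint get darker, faint background washes out toward white.
    `gel` is an 8-bit grayscale image as a numpy array."""
    out = (gel.astype(float) - midpoint) * gain + midpoint
    return np.clip(out, 0, 255).astype(np.uint8)
```

Nothing in the published figure records that this ever happened, which is exactly the problem.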

What scares me is that if I wanted to fake a gel, I would be able to do it really easily. I would just run a gel of something else, and then lie in the labels. How often do we see Northerns which are just a strip of bands, with no identifying information? How do we know that the RNA is what they say it is? We have to trust them. And really, that is all that reviewers can do. Trust that what they say they did is what they did. We can't wait for independent replication before publishing everything. It would simply take too long. But replication is the real test, and it always has been. Even perfectly honest scientists can be wrong.

Part of the problem is the media hype, of course. If the media did a better job of treating science as constantly in flux, and all discoveries as tentative, we would be better off. At the same time, I wonder how long Dr. Hwang really thought he would be known as a future Nobel Prize winner. I can't imagine wanting the brief fame in exchange for the long infamy. Perhaps he was like Stalin, driving his underlings so hard that they started to make things up to keep him happy. But that raises a whole different set of issues...

I've been thinking about this a bit more (thanks to the two comments), and it's cake to fake data. My bitching about the pressure to doctor images wasn't all that serious. But the ease of manipulating gels (and, apparently, other pictures) is troubling. We pretty much have to take everyone's word for their images -- they may not even be doctored, just something other than what they claim to be. You can run anything out on a gel and say it's whatever you want it to be.

Hell, we can even fake quantitative data. Just generate some data to fit your hypothesis using a coalescent simulation. I don't think hope is lost, though. I'd like to think that people wouldn't sacrifice their career to forge data. And if they do, I'd like to think someone would blow the whistle.
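
For the curious, here's roughly what I mean by that: a minimal sketch of a standard neutral coalescent simulation in Python (the function name is mine; time is in units of 2N generations, and theta = 4N*mu):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_segregating_sites(n, theta):
    """Number of segregating sites in a sample of n sequences under
    the standard neutral coalescent.  While k lineages remain, the
    waiting time to the next coalescence is exponential with rate
    k*(k-1)/2; mutations then fall on the tree as a Poisson process
    with rate theta/2 per unit of total branch length."""
    total_branch_length = 0.0
    for k in range(n, 1, -1):
        t_k = rng.exponential(2.0 / (k * (k - 1)))  # mean 2/(k(k-1))
        total_branch_length += k * t_k              # k lineages, each of length t_k
    return rng.poisson(theta / 2.0 * total_branch_length)

# Draw replicate "datasets"; a cheater just keeps the one that fits.
fake_data = [simulate_segregating_sites(n=20, theta=5.0) for _ in range(1000)]
```

Run it a thousand times and report the replicate that matches your hypothesis -- that's the whole scam.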

Of course, we only hear about the people who get caught. How many people get away with it?

One simple solution to the problem (and I agree with the guidelines, and follow them when I prepare images, btw) is not to rely on images as evidence. There are a few areas (histology, fluorescence microscopy, probably a few others) where images are the data, but most of the time they are eye candy or, true to the name, illustrative. The conclusion of the text should not rely on images as evidence. The "cherry-picking" problem goes away when cherry-picking is the intent (i.e., the picture is the best illustration of the idea, not the evidence for it).

There are, obviously, many, many places where this is not possible. Imaging, fluorescence, and measurement are indispensable tools in biology. In these instances, investigators must be rigorous, and editors must demand it. No image should be permitted as evidence without very precise documentation of how it was taken and how it was processed. Any conclusion drawn should be reviewed very critically. In the end, however, we trust scientists to tell the truth, and we verify important results by repeating them. It's not perfect, but it works pretty well, IMHO.

By Paul Orwin (not verified) on 24 Jan 2006 #permalink

A few comments...

"...there is usually one representative for each phenotype. This is often the best picture the researcher could find of what they consider the archetypical mutant..."
Journal articles have severe space limitations on figures, so it's generally not feasible to show a range of phenotypes for a given genotype. Authors often state phenotypic ranges in the text of the paper, or include them in supplementary figures.

"...How do we know that the RNA is what they say it is? We have to trust them. And really, that is all that reviewers can do. Trust that what they say they did is what they did. We can't wait for independent replication before publishing everything. It would simply take too long...."
Wabin correctly points out that replication is the gold standard; in practice, any experimental result worth repeating is repeated. The replication may not be published, but when other labs publish related work, you can bet some of their unpublished results are replications of the initial result. If you're taking a line of research forward based on someone else's previous work, you have to be certain the earlier work is correct. And the great majority of published biology research is just that: building on an earlier line of research.

"....Any conclusion drawn should be reviewed very critically. In the end, however, we trust scientists to tell the truth, an we verify important results by repeating them..."
I can tell you that in my field (developmental genetics) there have been some high-profile retractions in the last several years, in each case because the initial results could not be replicated. This is the part that leaves me scratching my head: yes, it's possible to fake data, and it's not that hard to do, but it's utterly self-defeating in the end. Leaving aside the morality issues, what is your motivation for concocting a brilliant paper? Advancement in your field, a faculty position, tenure, a grant, a patent, etc. Basically, people do it to get ahead. But it never works. If the result you faked is good enough to get you a lab or a grant, it's a sure thing that other people will try to reproduce it. You can't win. If you think it through, there's no incentive, unless you win some big monetary award and then abscond to Monaco.

JDL,

It's good to see that results are being reconsidered if they can't be reproduced. Are people no longer citing the original publication? Does anyone else remember the study of publications in the medical literature that had been "overturned" by later studies? It turned out that the original publications were still cited despite the contradictory evidence.

It seems that it's very difficult to get references to overturned or retracted articles out of any body of literature, especially if they received heavy press upon release. One major vector for continued transmission is related research areas: folks justify their work by citing research in areas they don't pay much attention to beyond the initial citation in a first grant proposal or article, and therefore might not see the retractions for a long time.

(says the chemist reading the genetics blog)

((And of course, if you were so inclined, you could draw parallels with any of the nonsense perpetrated during any recent election campaign that, while demonstrably proven false, still gets spewed as truth in a range of public circles.))