More Risky Business

A couple of days ago, I told the story of GEP (Good Epidemiology Practices), how a tobacco company tried to get epidemiologists to adopt a bogus principle: that relative risks of less than 2 should be ignored. I noted that Iain Murray was still peddling this bogus principle in a Tech Central Station article. That wasn't the only time Murray had tried advancing the tobacco company's risk-factor-of-two principle. He also did it in this Tech Central Station article, which prompted an actual epidemiologist to send him the following email:

Thank you for your thoughtful article "Epidemiology beyond its limits", which highlighted some of the criticisms of my discipline as it is currently practiced. I agree in general that epidemiologic research should be of the highest quality and conclusions should be reported in context. However, your article leaves the lay reader with the impression that epidemiologists have been changing the rules for some reason. Not so.

First, the Bradford-Hill criteria were always meant to be rules of thumb, except temporality, which is metaphysically necessary. The BMA statement merely boils down to the proposition that an association may still be causal if one of the criteria (save temporality) is not fulfilled. Put another way, fulfillment of most of the criteria can be sufficient to surmise causality. I think that is quite different from the impression you've given your readers, which is that 5 of 7 steadfast rules have been chucked by the side of the road.

Second, the NEJM quite explicitly told Gary Taubes that it was a rule of thumb that they only accepted papers with relative risks of 3 or more. I presume that since JAMA uses a similar rule of thumb, one of the things that played into their decision to publish the air pollution study with an RR of 1.12 is that the study was very large, longitudinal, and methodologically sound. I am certain that JAMA would not have published a case-control study on the same topic that reported an odds ratio of 1.12, since this study design is much more susceptible to bias. You ask in your article on the air pollution study why anyone is concerned with a 12% increase in risk (assuming that is the correct number). The answer is that a nearly ubiquitous exposure which increases the risk of a disease slightly can impact far more people than can a very rare exposure which vastly increases disease risk. This should be apparent if you play around with the formula for population attributable risk.

Third, you should know (and should let your readers know) that epidemiologists themselves share many of your concerns. We recognize that individual epidemiologists have an incentive to overstate the importance of findings from their particular studies. The peer review and editorial process mitigates some of this tendency, though of course the extent to which the process works depends on the journal. Thankfully, epidemiologic studies are published in a wide variety of journals dedicated to particular disease areas or more generally to epidemiology, and not just in NEJM and JAMA.

Lastly, epidemiologists are also concerned about data dredging. Your readers might be interested to know that so-called data dredging did not come about because of the sudden desire of epidemiologists to implicate everything under the sun as a risk factor. Rather, it followed the advent of the computational power to perform multiple regression with ease. I am among those who think that there is little wrong with data dredging per se when used for hypothesis generation. It is only the reporting of such results as conclusions rather than as leads that is problematic.

Though we epidemiologists are a very critical bunch, occasionally an article with a sensationalistic spin will slip through the cracks. Please don't let that sour you on an entire field.
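
The epidemiologist's point about population attributable risk is easy to check for yourself. Levin's formula gives the fraction of all cases in a population attributable to an exposure as p(RR − 1) / (1 + p(RR − 1)), where p is the prevalence of the exposure and RR its relative risk. Here is a minimal sketch in Python; the prevalences and relative risks are hypothetical numbers chosen purely for illustration:

```python
# Population attributable fraction via Levin's formula.
# The prevalences and relative risks below are hypothetical,
# chosen only to illustrate the point made in the email above.

def attributable_fraction(prevalence, relative_risk):
    """Fraction of all cases in the population attributable to the exposure."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Near-ubiquitous exposure with a small relative risk,
# e.g. air pollution at RR = 1.12 with 90% of the population exposed:
print(attributable_fraction(0.90, 1.12))   # ~0.097, i.e. roughly 10% of cases

# Rare exposure with a large relative risk (0.1% exposed, RR = 10):
print(attributable_fraction(0.001, 10.0))  # ~0.009, i.e. under 1% of cases
```

On these made-up numbers the weak-but-ubiquitous exposure accounts for roughly ten times as many cases as the strong-but-rare one, which is exactly why a relative risk of 1.12 can still matter.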

That was in February. How did Murray respond to being corrected by an epidemiologist? Well, in March he wrote the other article, in which he said:

Epidemiologists generally agree that one cannot ascribe medical causation to a risk factor if the factor is associated with less than double the occurrence than normal.

He'd just been told by an epidemiologist that epidemiologists did not agree with that statement, but he immediately turned around and wrote that they did. Now, Murray himself might agree with the statement, but that's not what he wrote. He wrote that epidemiologists generally agree with the statement, and that is something that Murray knew to be false.
