Drug Bust Paper Blowback: Responses to and implications of the Kirsch antidepressant study

The Kirsch study I wrote about a couple days ago, which found that antidepressants seem to have no more effect than placebo, has generated a wide variety of reactions in the blogosphere and press. Several things of note here:

1) In a pattern I've noticed repeatedly of late with other sorts of U.S.-relevant stories, this one got much more attention in the British press than it did here in the U.S. (The authors were from the UK, but the paper was published in a U.S.-based journal, and antidepressant use is a huge issue in the U.S.)

2) The responses -- some by bloggers, writers, and other critics, some by doctors -- are a fascinating mix of hard-line rhetoric (from both sides) and more nuanced points about the difficulties of drawing definite conclusions from meta-analyses that are by their nature heavily statistical. Pointers to a few are below. Most intriguing is the exchange on the PLOS Medicine site itself (where the paper was published), which involves mainly doctors.

My thoughts on that are at bottom, below this shorter list of worthwhile responses:

Ben Goldacre, who writes the column "Bad Science" for the UK's Guardian, points out some of the more troubling implications of this study.

The Washington Monthly's Kevin Drum took brief notice of it. His post is notable mainly for the lively and long string of reader comments it produced -- an exchange that suggests reader interest in the U.S. is perhaps more intense than editorial interest.

The journalist/blogger James Hrynyshyn, of North Carolina, wrote a thoughtful post on his blog at Seed, as did Jonah Lehrer at his Seed blog The Frontal Cortex.

The Socratic Gadfly takes a shot at some of the study's limitations.

PsyBlog takes a measured, educational approach.

Among posts noting the study's limitations, the most damning I came across is perhaps that of Henry Gee, an editor at Nature. Gee is one of several writers who point out that a major caveat of this study is that it is limited to patients taking the drugs for only about 8 weeks or less, thus missing anyone who would have benefited from taking the drugs for a longer period. He finds this completely damning:

This will affect the conclusions of the study, as every doctor (and patient) knows, antidepressants are drugs for the long haul. It takes weeks for them to have much effect, and this study seems to have had a cutoff before any such effect could be manifested. The results of this study are therefore compromised, and people who have been distressed by it have, I think, been misled....

So, shame on PLoS Medicine for touting what looks like a sensationalist story that grabs headlines on the distress of others; and shame (as usual) on the hog-whimperingly low standards of science news editing and reporting that have failed to pick up on this important flaw.

I think Gee goes too far. SSRIs do sometimes take weeks to kick in; but in most cases they kick in within 8 weeks -- and some users get an initial lift that then fades. So this time limit (created primarily by the drug companies themselves) strikes me as one of several limitations on the study rather than a fatal flaw. The criticism also ignores the fact that the drug companies repeatedly claimed to find a significant therapeutic effect inside that 6- to 8-week window. I'm not sure why we're supposed to accept on one hand that the drugs have proven themselves effective within an 8-week window ... but reject a study that finds they were not effective in that window because the drugs supposedly aren't effective till later. If they're not effective till later, how did the drug companies ever find an effect inside of 6 or 8 weeks? Strange logical territory.

As Gee notes, however, quite a few people, many of them doctors, lodged similar critiques at the PLOS Medicine "responses" site, along with other, more substantive objections. This is the juiciest reading on the paper I've found -- well worth perusing to get a sense of the debate and an education in the problems with meta-analyses, especially as applied to placebo effects. It's a serious debate, even among doctors.

One doctor, for instance, says the study is a needed wake-up call to doctors who have been essentially falling for a placebo effect themselves; another doctor argues that "dozens of clinicals trials plus decades of clinical practice plus millions of content patients can't be that wrong. Whatever the bias in whatever the study, common sense clearly says: the sum of the parts attesting antidepressants efficacy blatantly outnumbers the evidence showing the opposite. The use of these antidepressants is now deeply rooted and well-established in medical society worldwide, it's safe, it works, and there's no shadow of doubt about it. Instead, this study insists in a different truth."

Overall, the discussion among doctors on the PLOS Medicine Responses page is a good reminder that the meta-analysis is a strange animal: in some ways more reliable than individual studies, since it looks at many; but problematic for the same reason, because it has to use sometimes sophisticated (even obscure) statistical methods to extract (one hopes) reliable, consistent information from studies that may differ in structure and method.
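To give a concrete sense of what that statistical pooling involves, here is a minimal sketch of the simplest approach, fixed-effect inverse-variance weighting: each trial's effect size is weighted by how precisely it was estimated, and the weighted average becomes the pooled effect. The effect sizes and standard errors below are made-up illustrative numbers, not values from the Kirsch paper, and real meta-analyses typically layer far more machinery (random-effects models, heterogeneity tests, publication-bias corrections) on top of this.

```python
# Minimal sketch of fixed-effect, inverse-variance meta-analysis pooling.
# The effect sizes and standard errors are hypothetical illustrative numbers,
# NOT data from the Kirsch study.
import math

# (effect size, standard error) for each hypothetical trial
trials = [(0.40, 0.15), (0.10, 0.20), (0.25, 0.12), (0.05, 0.18)]

# Weight each trial by its precision (1 / variance)
weights = [1 / se**2 for _, se in trials]

# Pooled effect is the precision-weighted average of the trial effects
pooled = sum(w * es for (es, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```

The point of the sketch is simply that the pooled number depends entirely on which trials go in and how they are weighted -- which is exactly where the disagreements in the PLOS responses thread live.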

How this all sugars out (no pun intended), I'm not quite sure myself. Two paradoxes jump to my mind, however. One is that the drug companies, with nods from the FDA, dug much of this hole themselves by structuring studies and often filtering results in ways designed to highlight advantages and minimize disadvantages. The short timespan of these studies is an example: When psych drugs work, they generally work their best early on, and the 6- or 8-week drug trial took advantage of that. That's just one way in which the drug companies created a clinical trial system that pretty much begs for harsh criticism; it worked for a while, but now it has cast the industry's credibility into question, making it extremely difficult to ferret out what really works and what does not.

The other paradox, even more painful, is that many, many people, both clinicians and patients, have found these drugs genuinely helpful. In a highly limited but important sense, whether these drugs help through biological mechanisms or through placebo effect is almost a moot point for those they help: They've given quite a few people the buoyancy to float atop life again instead of getting tugged under. The question I tried to raise in my earlier post remains: If these drugs lack a genuine biological effect (or if they have that effect only for a very few) but work well as placebos, how the heck do we replace them?

*Update, later 2/28/08: Another interesting thread of comments from doctors is this one at the Herald (the UK paper).
