In which I disagree with Brian Deer on the issue of how to deal with scientific fraud

I admire Brian Deer. I really do. He's put up with incredible amounts of abuse and gone to amazing lengths to unmask the vaccine quack Andrew Wakefield, the man whose fraudulent case series published in The Lancet thirteen years ago launched a thousand quack autism remedies and, worst of all, contributed to a scare over the MMR vaccine that is only now beginning to abate. Yes, Andrew Wakefield produced a paper that implied (although Wakefield was very careful not to say explicitly) that the MMR vaccine caused an entity that later became known as "autistic enterocolitis" and later implied that the MMR vaccine causes autism itself. Aided and abetted by the credulous and sensationalistic British press, Wakefield then played the myth he helped create for all it was worth. Were it not for Brian Deer and his dogged investigation into Wakefield's perfidy, the fraud at the heart of the myth that the MMR vaccine causes autism might never have been discovered and Wakefield might still have his medical license, rather than having been struck off the U.K. medical register.

All of which is why it pains me to have to disagree with Deer now.

Of course, this is not the first time I've had a problem with something that Deer's written, and I'm guessing that it won't be the last. No one expects that we'll agree on everything, nor, I hope, would anyone expect that I'd hold my fire when even a usual ally like Deer makes a misstep. My admiration for his having exposed Wakefield is enormous and buys Deer a lot of credit in my estimation when it comes to giving him the benefit of the doubt about what he writes, but that credit and benefit of the doubt only go so far. Unfortunately, in the wake of his vindication with respect to Wakefield's fraud, Deer seems to have developed a bug up his posterior over science and scientists. Now, I can sort of understand why that might be true, given the seven or eight years of relentless abuse he's suffered at the hands of the anti-vaccine movement and the chilly reception he's received from some scientists. I can even kind of understand why Deer has lashed out at Paul Offit, Ben Goldacre, and Michael Fitzpatrick, although I think he made a big mistake in doing so. I also think he's gone a bit overboard in his latest article, Scientific fraud in the UK: The time has come for regulation. He begins in a very derogatory, ad hominem fashion:

Fellows of the Royal Society aren't supposed to shriek. But that's what one did at a public meeting recently when I leapt onto my hobbyhorse: fraud in science. The establishment don't want to know. An FRS in the audience - a professor of structural biology - practically vaulted across the room in full cry. What got this guy's goat was my suggestion that scientists are no more trustworthy than restaurant managers or athletes.

Now, obviously I wasn't at this public meeting, its being in the U.K. and all, but I highly doubt that this particular Fellow of the Royal Society literally "shrieked" at Deer. I really do. Even granting a bit of artistic license, his characterizing it that way is clearly meant to paint a picture of someone who disagrees with him as being shrill and unreasonable, "shrieking" and "vaulting across the room in full cry" at him. Not "strenuously disagreed" with the proposal Deer is arguing for. Not "criticized it harshly." "Shrieked" and "vaulted across the room in full cry." Now, maybe the FRS described really did shriek, or maybe scientists did raise their voices to be heard. Who knows? Even if they did, this is not an auspicious start to Deer's argument. Deer is usually much, much better than that; he doesn't usually take cheap shots like this. Sadly, he takes an even cheaper shot at scientists later, as you will see. But first he has to dump on scientists a bit more:

Restaurant kitchens are checked because some of them are dirty. Athletes are drug-tested because some of them cheat. Old people's homes, hospitals and centres for the disabled are subjected to random inspections. But oh-so-lofty scientists plough on unperturbed by the darker suspicions of our time.

Is Deer actually proposing surprise inspections of labs? Probably not, but if he's suggesting through his analogy that this would be likely to catch fraud, he's going to be sorely disappointed. Such inspections would be even less likely to detect overt fraud than the peer review system. So what is Deer proposing? I'm really not sure, and, rereading Deer's article, I'm not entirely sure he knows what he's proposing, either. In any case, the analogy is really, really bad. Unsanitary conditions and practices in kitchens can usually be easily detected by surprise inspections. Ditto hospitals. Random drug tests don't work quite as well, but they do certainly weed out those who aren't clever enough to evade them. Often such inspections distort the very thing being regulated. For instance, we just went through our JCAHO inspection a few weeks ago, and I'm not sure that the months of preparation that went into getting ready for the inspection actually made us better as a hospital. All that preparation did get us ready to pass the test. True, inspections of the laboratory used by Wakefield to do PCR looking for vaccine strain measles virus might have turned up the rank incompetence there, but in the majority of cases I highly doubt that such inspections would find evidence of data falsification.

In any case, Deer airily dismisses scientists pointing out that there already exist checks and balances in science, referring to appeals to the scientific method as a method "which separates true from false, like a sheep gate minded by angels." Damn. I'll give Deer credit for being a really good writer, but the contempt dripping from his prose is palpable as he dismisses arguments he doesn't agree with even while admitting he has little evidence to support his assertions:

They heard, of course, that there's no evidence of a problem: no proof of much fraud in science. Publishing behemoth Reed Elsevier, for example, observed that of 260,000 articles it pumps out in a year, it will typically retract just 70. And for nearly all of these the reason was that the stuff was "plain wrong", not because it was shown to be dishonest.

This sounds like the old Vatican line about priests and child abuse. Or Scotland Yard and tabloid phone-hacking. And, although I know that the plural of "anecdote" isn't "data", the anecdotes of science fraud are stacking up.

Comparing scientists to pedophile priests is the cheapest of cheap shots, and, quite frankly, I resent it. If he were to use that sort of simile at a hearing that I attended, I might be tempted to "shriek" at him too.

As for whether anecdotes of science fraud are "stacking up," maybe I'm just blinded by being an--you know--actual scientist, but quite frankly, I just don't see it. To me, data talks, and, quite frankly, Deer doesn't have much in the way of data. Actually, he doesn't have any (or at least he doesn't present any), and he's actually right about one thing: The plural of "anecdote" isn't "data." Yet all he presents are two anecdotes. He has Wakefield, and he has "Woo-Suk Hwang's fabricated claims in Science about cloning embryonic stem cells." But he has no real hard data on how common scientific fraud and misconduct are. If anecdotes are what Deer is dealing with, then what does he make of my anecdote? In my 20+ years in science I have never witnessed or had personal knowledge of anyone where I've worked falsifying data. In fact, when The Lancet editor Richard Horton is quoted as saying that "flagrant" scientific fraud is "not uncommon," I have to wonder what, exactly, he is referring to. If it were that common, presumably my colleagues and I would have seen some. Don't get me wrong. I'm not holding up my experience as necessarily being representative. Maybe I am blind. Who knows? What I am doing is trying to point out how relying on anecdotes can easily lead to a distorted picture. The anti-vaccine movement taught us that.

Also, let me just repeat yet another time that I detest scientific fraud, and have written about it on multiple occasions. But fraud is a continuum. Far more common than outright data falsification à la Wakefield is the reporting of half-baked research, the use of inappropriate analyses or selective data reporting to squeeze positive-appearing results out of what are really negative results, or the exaggeration of the strength and/or importance of one's work.

As an example, let's do a little thought experiment and imagine a situation, for the moment, in which Andrew Wakefield's Lancet paper was not fabricated in the least, as Deer showed that it was. Pretend, for the moment, that it was a perfectly well-executed case study, with clinical histories accurately reported. Even if that were the case, his paper was merely the result of a measly twelve-patient case study. At best, it could generate hypotheses. Drawing any sort of firm conclusion from such results was profoundly irresponsible, as was promoting such results so publicly. In any case, science actually did eventually sort it out; other studies were done and failed to find any association between the MMR vaccine and autism or enterocolitis in autistic children. The problem was that no one seemed to be listening. Science was self-correcting in Wakefield's case. The damage caused by Wakefield's fraud was not so much to science, but rather to public opinion about the safety of the U.K. vaccination program. It was thus the public perception of the science that became the problem, not the science itself. Look at it this way. Even if there were no fraud and Wakefield's initial paper had been meticulously carried out, the damage would still have been done, because it was how Wakefield reported his results to the press, how the British press credulously lapped up his line of BS, and the irresponsibility (well documented by, yes, Brian Deer) of The Lancet editors and the leadership at the Royal Free Hospital that caused the MMR scare.

Deer continues to report on the House of Commons science and technology committee proceedings, including the report it recently issued, quoting and heartily endorsing its conclusion:

Finally, we found that the integrity of the peer-review process can only ever be as robust as the integrity of the people involved. Ethical and scientific misconduct--such as in the Wakefield case--damages peer review and science as a whole. Although it is not the role of peer review to police research integrity and identify fraud or misconduct, it does, on occasion, identify suspicious cases. While there is guidance in place for journal editors when ethical misconduct is suspected, we found the general oversight of research integrity in the UK to be unsatisfactory. We note that the UK Research Integrity Futures Working Group report recently made sensible recommendations about the way forward for research integrity in the UK, which have not been adopted. We recommend that the Government revisit the recommendation that the UK should have an oversight body for research integrity that provides "advice and support to research employers and assurance to research funders", across all disciplines. Furthermore, while employers must take responsibility for the integrity of their employees' research, we recommend that there be an external regulator overseeing research integrity. We also recommend that all UK research institutions have a specific member of staff leading on research integrity.

Deer concludes with a tweak:

The fellows of the Royal Society, I'm sure, won't like it.

Of course, the question is: Why won't they like it? It might not be, as Deer seems to think, because they are reflexively resistant to any oversight (although that might be true). Might it not also equally be because this proposal is ill thought out and in general a bad idea? It might.

For example, consider these questions:

  1. What, exactly, would each specific staff member at UK research institutions charged with "leading on research integrity" actually do? Seriously. Think about it. What would such a person do? Would he pop into investigators' labs? Would he inspect lab notebooks? Would he peruse computer hard drives? Watch students, postdocs, and technicians do experiments? Reanalyze random data? And if he doesn't do those things, then what, exactly, would he do to stop fraud in his own institution that would have any hope of actually making a difference?
  2. What, exactly, would a regulatory body do? David Colquhoun is spot on in the comments when he asks, "Would it reanalyse each of my single ion channel records to make sure I'd done it right? Would it then check all my algebra to make sure there was no misteke? Even if you could find anyone to do it, that would take as long as doing the work in the first place. I fear that Deer's suggestion, though made from the best of motives, shows that he hasn't much idea of how either experiments work or how regulatory bodies (don't) work." And if it were just a passive surveillance system that only acts after a complaint is filed, how is that any better than the situation in the US?

The bottom line is that we really don't know how common fraudulent research, specifically examples of blatant fraud like Wakefield's, really is. There is evidence that perhaps 2% of researchers admit to having manipulated data, but perhaps 33% admit to having at least once engaged in "questionable" research practices. Of course, such results depend upon how you define "questionable." Whatever the true incidence of scientific misconduct is, we do know that peer review isn't very good at catching outright fraud and never has been. So what to do? Another consideration is that any regulatory body could be hijacked by ideologues. I don't know how much of a problem this is in the U.K., but imagine, for example, someone like global warming denialist Senator James Inhofe taking over a panel on science integrity and what he could do with climate scientists.

Even though peer review isn't particularly good at finding fraud, it is still one major tool to be used to do so. What, however, should be added to it? The problem with many ideas, such as oversight panels or institutional science integrity officers, is in the details. They sound like good ideas on the surface, but when you start to think a bit about the details and what, exactly, such mechanisms would mean and how they would be set up, suddenly it's not so easy at all. Let's go back to my thought experiment in which Wakefield's work was not fraudulent. Now switch back to reality, where Wakefield did commit fraud, but imagine that the ideas advocated by Deer were policy at the time he was signing up patients for his case series. Would Deer's and the committee's proposals have stopped Wakefield or exposed him earlier? Would a science integrity officer at the Royal Free Hospital or a science oversight board have made a difference? Maybe, but I highly doubt it. At what point in Wakefield's fraud would either such mechanism have been able to intervene? Afterward, what, specifically, would have triggered an investigation by either the institution or the regulatory body proposed? After all, it took even Brian Deer himself years to dig up the evidence that started to suggest fraud.

Still, it's not all bad. Deer is right about one thing, and that's that the "R" word (responsibility) has to be injected into the system. Here in the U.S., government granting agencies, such as the NIH, investigate allegations of scientific fraud in research funded by the U.S. government, the penalties for fraud ranging from being banned from receiving federal grants for a period of time to possibly even criminal charges for defrauding the government. The problem is that the NIH Office of Research Integrity is underfunded and overwhelmed.

More important than any external regulatory force is changing the culture of science so that it is considered acceptable to report suspected scientific misconduct. Science is a means to an end: to find out how nature works, to plumb its mysteries and discover rules by which it works and that we can use to make predictions. Science, to its credit, also tends to work more or less on the honor system, which is why Deer is probably correct that scientists tend not to consider, perhaps as much as we should, the possibility that an investigator is lying or falsifying data when anomalous results such as Wakefield's are reported. On the other hand, he appears not to understand that the vast majority of anomalous results are not due to fraud but rather to differences in experimental design, analysis, bias (often subtle, but sometimes not), publication bias, and many other factors that can lead scientists astray. Some results are just wrong through sheer random chance. Yet Deer seems to assume, based on his experiences with Wakefield but without strong empirical evidence, that much of the problem is due to fraud, rather than just bad science, biased science, or random flukes that produce anomalous results.

Even if Deer were correct that scientific fraud is a massive problem, it wouldn't mean that his blanket condemnation of science in the U.K. or his likening of its culture to the Roman Catholic Church shielding pedophile priests wasn't way over the top. Unfortunately, burned by his experience pursuing Wakefield, I fear he's become a bit cynical. It's also clear that, for all his amazing skill as an investigative journalist, he hasn't really developed a firm grasp of the nitty gritty of how science actually works and how research is actually done. In the end, science usually does correct itself regardless of the source of error, be it fraud or, well, error. Results on questions that matter will eventually be corrected, because other investigators interested enough to expand on a line of investigation need to start by replicating the results that interested them. In the case of fraud, they won't be able to. Meanwhile, results that other investigators don't bother to try to replicate as a prelude to moving beyond them usually don't matter much and have little or no effect on science. It's a slow and messy process, sometimes maddeningly so, but over time it does work. Science does correct itself.

The problem is, many of the cures proposed to accelerate the process of discovering fraud are potentially worse than the disease. Then there's another question to consider: How does one determine whether highly novel or potentially groundbreaking work is correct or a mistake or a dead end when there's nothing to measure it against? No regulator can; only other scientists can, by examining the results and trying to reproduce them before moving on.


"...Deer seems to have developed a bug up his posterior over science and scientists. Now, I can sort of understand why that might be true, given the seven or eight years of relentless abuse he's suffered at the hands of the anti-vaccine movement and the chilly reception he's received from some scientists."

Considering the torrents of abuse and false accusations against Deer by antivax zealots hoping to smear him as an agent of Big Pharma, it wouldn't surprise me if at least the tone of Deer's condemnation of scientific integrity is related to his need to demonstrate independence of the scientific establishment.

By Dangerous Bacon (not verified) on 05 Aug 2011 #permalink

There is a fine line between wanting to prove independence & being a douchebag - I hope Deer knows where that line is.

I share your skepticism about surprise inspections doing much good, but I'd be willing to give it a try, to test the hypothesis that it scares a few more of the bad apples into keeping it legit.

By DrugMonkey (not verified) on 05 Aug 2011 #permalink

"As for whether the anecdotes of science fraud are "stacking up," quite frankly, I don't see it."

There are certainly more accusations of science fraud. However, since the count of claims can only ever increase, this is tautologically true and therefore a worthless metric.

"Even though peer review isn't particularly good at finding fraud, it is still one major tool to be used."

Peer review (as I see it) is NOT about finding fraud. Any finding of fraud is merely happenstance, and it has no power even when it does happen (see, for example, G&T's paper claiming the GHG effect disobeys the second law of thermodynamics).

Peer review is supposed to weed out papers not worth the time reading (from well meaning but deluded people, e.g. people with perpetual motion machines) and to act as spotters for mistakes that were overlooked because the author knows what they meant, not what they wrote.

Someone who got a startling result because they forgot to include Pi in one equation would be spotted and fixed. Conclusions that are not supported by the paper would be spotted and fixed.

The only fraud that publication and peer review can be involved with is when the publication itself is trying to perpetrate the fraud.

Scientific frauds such as cold fusion are found out not by peer review but by other scientists replicating the work.

Because as far as SCIENCE is concerned, it doesn't matter if it's fraudulent, all that matters is if it's right. And deliberate fraud is by definition wrong. The fraud is irrelevant. Science discards it because it's wrong.

IMO the main problem is not scientists, but the media. With some honorable exceptions they simply swallow tripe whole and hype it beyond all reason, if it makes a good headline.

Wakefield's fraud was a problem, but if the UK journalists reporting on it had had a modicum of scientific literacy to realize that such a small case series didn't warrant such conclusions even if legitimate, it would not have produced public policy implications.

Journalists reporting on politics would smell a rat if a senator claimed that the national debt could be eliminated by cutting off foreign aid, because they have some knowledge of the background and would recognize that the numbers don't add up. So that would only make headlines about how wrong the claim was. Journalists reporting on science similarly need background on their subject.

What jumped out at me was the potential for whistleblowers. I suspect that at the moment very few people are willing to rat out a colleague, and almost none to accuse a superior, simply because of the damage it'll cause to their careers. Developing a culture where students and juniors don't have to choose between reporting fraud and keeping their placements and publication record will do more than anything else to open up fraud instances. I'm not sure how this would work in detail, though.

By stripey_cat (not verified) on 05 Aug 2011 #permalink

Another concern about installing some regulatory body is the question of confidentiality. Especially in clinical trials, subjects may have very sensitive medical conditions which, if confidentiality is lost, may adversely affect their quality of life (e.g., through social repercussions). Already, subjects' personal health information can be reviewed by a lot of people involved with the research, beyond just the research team itself. You have the ethics review boards, billing & reimbursement, lawyers, DSMBs and other government regulators looking at drug and device trials. Adding yet more individuals who may have access to PHI increases the risk of loss of confidentiality without a corresponding increase in benefit to the subjects.

@stripey_cat

Very good point. Some kind of protection for whistle-blowers built into the system might actually work. Maybe some kind of independent ombudsman agency or some such.

I'm always a bit embarrassed when I hear about whistleblowers who suffer retaliation, etc. No one should have to choose between their job / career & doing the right thing. Unfortunately, this is an across-the-board problem that we need to address as a society.

Whistleblowers are tattle-tales and nobody likes tattlers.

What a stupid attitude we have on people who risk their careers (and sometimes lives) in order to protect the public. As far as I'm concerned, any whistleblower that presents a real issue that needs to be exposed should be revered.

Want honesty in science, engineering, government, etc?
Stop punishing people for exposing the evil deeds of others!

By JoeKaistoe (not verified) on 05 Aug 2011 #permalink

I must admit I have no real problem with Deer's approach.

I don't know when Ben Goldacre was filmed, so his statements may have been reasonable at the time, assuming he had not read Deer's reports. If he had, then I think he was being a bit disingenuous.
However:
"It doesn't matter that [Wakefield] was fraudulent," Dr Paul Offit, a vaccine inventor and author in Pennsylvania, was quoted in the Philadelphia Inquirer the next day as saying. "It only matters that he was wrong." suggests to me that we may have a problem. Offit seems to be saying, "Hell yes. Invent the data, lie about the findings, just hope you're lucky".

Of course, since doctors have recently begun talking about "evidence-based" medicine this leaves me wondering what medicine was doing for the last 1000 years. Mind you, I do hear that leeches are enjoying a come-back.

If I have a rough timetable, Deer started questioning Wakefield's results in 2004 and in 2010 (2011?) there was a result? Glad the medical establishment moves so quickly.

By jrkrideau (not verified) on 05 Aug 2011 #permalink

I keep pushing the idea that we need more replication within scientific journals. Most journals don't have the space or time to print replication studies, unfortunately, and when equipment and time are so expensive, there is little return on working hard to replicate research. So how much out there can't be replicated? I don't think we can come close to approximating it.

Oh, Orac, you're just not jaded enough! Wait until you turn 50! You'll see!**

But seriously, Deer, probably my contemporary, was around when Sir Cyril's massive fraud was uncovered ( which, like Wakefield's, had far-ranging societal consequences) and he has been involved in investigating fraud- both scientific and financial ( Wakefield, Stansfield). I think that that might lead someone to be rather suspicious... for good reason.

Personally, I feel fraud will occur in all fields because of human nature itself. Are scientists or doctors exempt? No. The only question is that of extent: is it 1% or 10%? (Perhaps surveying pseudo-science makes me too suspicious?)

A prof of mine worked as an assistant on a study of delinquent boys: the final results showed that the delinquents had lower cognitive and moral judgment levels than did average boys. "Yes", he said, "Proving only that the lower level ones were the *ones who got caught*".

It took a long time to catch Bernie Madoff: I personally know a woman who was a potential investor. "Oh, he was as nice as *you* are," she said, "But somehow, it seemed too good to be true." Am I comparing potential scientific frauds to Bernie? Of course not.

However, governmental regulation (the other "R" word) involving fraud - financial or scientific - would be a difficult proposition. The type of scientific "fraud" I'm concerned about would be rather subtle stuff, wouldn't it? I'm thinking more of the definitional or statistical variety (unlike Wakefield's *in flagrante delicto*). How exactly would regulation be implemented? As in the financial world, smart people are usually miles ahead of regulation, continuously inventing new instruments. The situation appears to be awful and its very nature may tie our hands: it is, after all, primarily about abstractions. I have to admit this even though I am usually a strong advocate for regulation of just about everything.

I also would hope that personality factors that *lead* people to become researchers - rather than becoming financial wizards hell-bent on conquering the Markets - would have an inhibitory effect.

** and I'm not really that jaded despite what's going on in the markets.

By Denice Walter (not verified) on 05 Aug 2011 #permalink

I pretty much completely agree with Orac: what Brian Deer (who I also have great respect for) proposes is not workable. The real solution is more scientists doing science, not more people trying to uncover fraud. It costs more to investigate a case of suspected fraud than to replicate the experiments a few times. Replicating them a few times would uncover fraud; it would also uncover honest error and might even uncover something new.

The real problem isn't "scientific fraud", the real problem is that committing scientific fraud can provide real benefits and uncovering scientific fraud usually has real adverse effects. Whistle blowers who disclose fraud don't get rewarded, they lose their jobs and the lab they are working in gets shut down (at best), at worst their careers are destroyed, and for doing the right thing.

There are several reasons for these things. I think the most important is the cult of the "rock star" scientist. Rock star scientists are still just people too. They put their pants on one leg at a time, they can make mistakes, they need their associates to check their work to ensure there are not trivial (or non-trivial) mistakes. This can be difficult when society cultivates a cult of rock star scientists. Wakefield's cult is still going strong, even after he is a proven fraud and has been struck off.

The second issue, that of doing right by whistle blowers, is important too. Wakefield's student, Chadwick, knew the positive PCR results that Wakefield reported were false positives and had told Wakefield that. He couldn't report that elsewhere (who precisely would he have reported it to?) because the PI (Wakefield) held career life-or-death power over him. Workers in a lab have so little power that they don't have the luxury of standing up to the PI when the PI is committing fraud.

Both of these problems stem from the intense (and I think counterproductive) competition for funding. When you make people desperate for funding, some of them will do desperate things and some of them will quit. Those who make it to the top are not "the best scientists", they are the best at competing in the scrabble for funding. Those who make it to the top aren't doing it for the science, they are doing it for the rock-star status.

The problem is that making people desperate is a good way to get them to do what you want. Why is there so much AGW denialism passed off as "science"? Because there are people willing to fund it if it is passed off as science. What has happened to some of the scientists who have done good work demonstrating AGW, such as Mann? They have been investigated for fraud by the politically powerful who don't like the results of their research. There is the same problem in the criminal justice system, the cult of the rock star prosecutor. This is the problem of using social status as a marker for expected success at a particular task.

@ jrkrideau:

"It doesn't matter that [Wakefield] was fraudulent," Dr Paul Offit, a vaccine inventor and author in Pennsylvania, was quoted in the Philadelphia Inquirer the next day as saying. "It only matters that he was wrong." suggests to me that we may have a problem. Offit seems to be saying, "Hell yes. Invent the data, lie about the findings, just hope you're lucky".

No, he was saying that we already knew Wakefield was wrong LONG before the fraud was exposed, so it made no difference to the scientific conclusion. Discussed at length in earlier posts linked from this one.

Of course, since doctors have recently begun talking about "evidence-based" medicine this leaves me wondering what medicine was doing for the last 1000 years. Mind you, I do hear that leeches are enjoying a come-back.

The same thing CAM is still doing - making it up and guessing.

If I have a rough timetable, Deer started questioning Wakefield's results in 2004 and in 2010 (2011?) there was a result? Glad the medical establishment moves so quickly.

Again, it was known the results were wrong long before they were known to be fraudulent. Various other research groups tried to replicate, couldn't, the result was rejected, and the field moved on.

"very few people are willing to rat out a colleague, and almost none to accuse a superior, simply because of the damage it'll cause to their careers"

However, posting a paper that rebuts the work of a superior will enhance their careers.

So even if you were right (and I dispute that: your message reeks of conspiracy theory), it's irrelevant. There are better ways for a scientist to get the truth out there than "ratting out" another: doing better science solves both problems.

The UK House of Commons science and technology committee proposed that fraud be tackled with a national body to co-ordinate action, and that designated research integrity officers be appointed in academic institutions.

My broad endorsement of such an arrangement has provoked varying degrees of abuse and condemnation.

I'm surprised that neither my critics, nor evidently Orac, appears to know that this system is already in place: in the United States of America.

http://ori.hhs.gov/

"Those who make it to the top aren't doing it for the science, they are doing it for the rock-star status."

Please tell me the name of a scientist with the same recognition as Mick Jagger, Status Quo, Justin Bieber or Britney Spears.

Roy Spencer does it for the lecture circuit, not the science status. Monckton does it because he's a nutcase. Sen. Inhofe does it because of his doctrine.

Rock-star science is a very poor recruiter. Money outside science and the lecture circuit outside science are much better.

Wakefield's reputation is in the shitter. But he doesn't care, because he's getting paid better anyway, and away from other scientists, among an audience that wants to believe him, he gets the proper adulation.

Science doesn't get rockstars.

"My broad endorsement of such an arrangement has provoked varying degrees of abuse and condemnation."

But by saying that it is NECESSARY to have a fraud office for science, you're saying all the other scientists are a bunch of frauds.

That's abuse and condemnation.

PS have you considered maybe the idea is worth condemning? Proper skepticism is being skeptical of your own ideas. Rather than cry out "Help! help! I'm being oppressed!", why not consider the responses to the condemnation and explain both how it is going to be implemented and how it is going to be shown to be cost and result effective.

Unfortunately, Hollywood has done a great job of casting the "Rock Star Scientist" as the villain in many films - the guy who lets his own ego get in the way of the "TRUTH."

Of course, the type of "rock star scientist" myth I'm talking about is the lone genius who sweeps in and proves his theory in one stroke, earns fame, and is henceforth revered as an Authority who can never be questioned.

Woos often try to cast their favorite gurus like that, and claim we're just jealous of their fame and wealth. Or they misrepresent a real scientist's work, and act shocked when we protest, claiming that we think we're smarter than the idol.

At the same time, they sometimes try to cast us as being enamored by the fame of some past scientist, like Charles Darwin or Louis Pasteur, even if the argument is being made from experimental evidence.

The first makes an argument from authority, and the latter assumes we're just making an argument from authority. Double standard.

@Wow: here are some rockstar scientists you may have heard of...

Brian May, PhD astrophysics, Queen.
Brian Cox, PhD particle physics, D:Ream
Greg Graffin, Ph.D. in zoology, Bad Religion
Milo Aukerman, doctorate in biochemistry, The Descendents
Dan Snaith, doctorate in biochemistry, stage name "Caribou."
Tom Scholz, master's degree Mechanical Engineering, Boston
Mira Aroyo, Ph.D in genetics, Ladytron
Diane de Kerckhove, Ph.D materials science, professional jazz singer
Dexter Holland, Ph.D candidate molecular biology, The Offspring
Art Garfunkel, masters in mathematics, Simon and Garfunkel

proves his theory

No, he has the "secret formula"

My favorite example is from The Saint, where "the secret formula" for cold fusion was, apparently, a mathematical equation written on a series of notecards that had to be arranged in the correct order.

By Marry Me, Mindy (not verified) on 05 Aug 2011 #permalink

Mr. Deer:

As you know, my admiration for you and your work with regards to Wakefield is boundless. However, that does not mean I won't express disagreement when I feel the need. I will admit to being a bit peeved at your decision to use such inflammatory language and to invoke the specter of pedophile priests. After I settled down, I was more disappointed than angry; such cheap shots and appeals to emotion are beneath you. Despite my having taken on the 'nym of a computer, I'm human too, and sometimes I get annoyed when I see someone taking a seriously cheap shot at my fellow scientists. In actuality, there are a lot of scientists, myself included, who would support more rigorous methods to monitor and respond to research misconduct and fraud. Unfortunately, your inflammatory attacks alienate the very people who could be your allies.

I am also quite aware of the NIH Office of Research Integrity; in fact, I even mentioned it in my post. I'm also rather surprised that someone who has been so admirably, doggedly, meticulously data-driven in the past in going after Wakefield is so lacking in anything resembling hard data to support his arguments for more regulation of scientists.

In any case, the Office of Research Integrity of the NIH is not really the same thing as what is being proposed. For one thing, its jurisdiction is only over PHS-funded research. For another thing, institutions in general set up ad hoc committees when an allegation of misconduct is made; most don't have standing committees or officers whose primary duty is to stop research misconduct. Finally, by invoking the examples of food and restaurant inspectors and the like, it was you who implied you wanted a system in which inspections or some other form of active surveillance for fraud is carried out. The US system is entirely a passive system; nothing happens until an allegation of research misconduct is made. Certainly at most institutions there is no one charged with proactively rooting out such misconduct or trying to prevent such misconduct. If you want to make the argument that there should be, then by all means do. I'd even go so far as to say that the NIH system is too lax. Most of the time the penalties for research misconduct are ludicrously light, usually nothing more than being declared ineligible for federal funding for a period of a couple of years, perhaps as long as five years.

In any case, is it really true that UK institutions don't have policies for covering allegations of research misconduct? After all, that's all the NIH system requires, that institutions have a protocol in place for dealing with allegations of misconduct that meets certain standards. Is it really so different in the UK than in the US?

One way to catch a large amount of fraud (and simple error) would be a requirement to submit ALL of the raw data a paper is based on before the publication date. Scanned lab notebooks, original files from the lab equipment, all of it. It could be submitted confidentially, it could be submitted encrypted with the author holding the key, it really doesn't matter how or if the data is expected to be made public, just that all of the data necessary to validate the paper is in the box. The stinger would be that if the editors of the journal become convinced that there is a reason to revisit the paper, the box gets opened, with a possibility of retraction/correction if the data in the box doesn't support the data and conclusions in the paper. If the author "lost" the encryption key or forgot to include some of the data, there should be a significantly detailed resubmission/appeal process both to ensure the error was innocent and to convince authors that it's incredibly time consuming to not get it right the first time.

This would also benefit the "memory" of labs. People come and go, not everyone is scrupulously organized. A single depository for each paper would pretty much guarantee the PI or future lab members could easily go back and find all the relevant data and methods even if everyone who worked on the original project had left, as opposed to digging through various computer backups and lab notebooks.
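To make the "sealed box" idea above a bit more concrete, here is a minimal sketch of how a deposit might be packaged on the author's side: archive the raw data, encrypt it with a key only the author holds, and give the journal the sealed archive plus a digest so the contents can't be swapped later. This is purely illustrative; the file names, the deposit workflow, and the use of the Python cryptography library's Fernet cipher are assumptions on my part, not any journal's actual system.

    # Sketch of the "sealed box" idea: the author archives the raw data,
    # encrypts it with a key only they hold, and the journal stores the
    # sealed archive plus a digest so the contents can't be swapped later.
    # Illustrative only -- file names and workflow are assumptions.

    import hashlib
    import tarfile
    from pathlib import Path

    from cryptography.fernet import Fernet  # pip install cryptography


    def seal_raw_data(data_dir: str, out_prefix: str) -> str:
        """Archive data_dir, encrypt it, and write the sealed box plus its digest.

        Returns the author's secret key, to be kept private and revealed only
        if the journal decides the box must be opened.
        """
        archive = Path(f"{out_prefix}.tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(data_dir, arcname=Path(data_dir).name)

        key = Fernet.generate_key()
        sealed = Fernet(key).encrypt(archive.read_bytes())
        Path(f"{out_prefix}.sealed").write_bytes(sealed)

        # The digest accompanies the sealed archive, so the journal can verify
        # later that an opened box is the same one deposited at submission.
        digest = hashlib.sha256(sealed).hexdigest()
        Path(f"{out_prefix}.sha256").write_text(digest + "\n")

        return key.decode()


    if __name__ == "__main__":
        author_key = seal_raw_data("raw_data/", "paper_1234_deposit")
        print("Keep this key private unless the journal asks:", author_key)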

Orac:

You say: "The Office of Research Integrity of the NIH is not really the same thing as what is being proposed. For one thing, its jurisdiction is only over PHS-funded research."

Actually, its jurisdiction is over institutions which carry out PHS-funded research, which are required to designate a member of staff to whom those individuals (for example graduate students and post-docs) with concerns can go.

With regard to UK institutions, very many have no policies whatsoever for such concerns. In the Wakefield case, for example, University College London - an elite academic institution - introduced procedures I think last September.

I find it difficult to believe that you expect much "hard data" in an 800-word Guardian blog item, much as I find it difficult to believe that you can't see an entirely reasonable parallel with sexual abuse in the Catholic church, which is perhaps the seminal example of institutional denial in the face of decades of relentless anecdote.

In a time when science academies are seeing funding heavily cut, it seems a poor use of money to pay someone else to essentially redo analysis and research which has already been done. As Orac and others have pointed out, how, exactly, this would work is unclear. One person probably won't have the specialized expertise required to audit the research of numerous labs even in the same building. You'd need not one auditor, but a team of auditors with expertise in different disciplines.

In addition to the funds needed to pay these teams, where would you get these people? Most people with that level of expertise are doing research, and it may be hard to recruit them to auditing other people's research. And if you're on some cutting edge research, would you want another researcher in your field, now playing auditor, going over your research?

Imagine Linus Pauling playing auditor and looking over the work at the Watson, Crick, Franklin labs--would he have realized he was on the wrong track, then gone home and beat Watson and Crick to the DNA model?

Getting out of the lab, how would you audit field research, which is often conducted in remote locations at great expense, and with limited room for researchers and equipment, much less an auditor of some type? Perhaps the auditor would check the field notes and data analyses back in the office--which, essentially, is what peer-review does anyway.

In attempting to fix a system which hasn't been demonstrated to be broken (imperfect, yes; broken, no), might you not introduce greater problems? Think of the climate "auditors" who combed through every email looking for scraps to take out of context, who looked over every data point so they could cry "fraud" without really understanding what they were looking at, or why the data was analyzed the way it was, and why their way of analyzing it introduced more errors. Thanks to people who didn't fully understand what they were auditing but were quick to shout "Fraud" anyway, scientists have been investigated by congress (and cleared in every investigation), and have had politicians trying to criminalize them.

That's not limited to climate either. See this transcript,
peer.org/docs/doi/7_28_11_Monnett-IG_interview_transcript.pdf

where a person, who didn't understand grade 5 math, has created a great deal of trouble for a biologist. That he was subject to questioning by a criminal investigator over his work, because some desk jockey couldn't grasp simple math, is ludicrous.

By Daniel J. Andrews (not verified) on 05 Aug 2011 #permalink

"@Wow: here are some rockstar scientists you may have heard of...

Brian May, PhD astrophysics, Queen.
Brian Cox, PhD particle physics, D:Ream"

Yes, they became scientists AFTER becoming rock stars.

Oopsie :-)

Greg Graffin, Ph.D. in zoology, Bad Religion
Milo Aukerman, doctorate in biochemistry, The Descendents
Dan Snaith, doctorate in biochemistry,stage name "Caribou."
Tom Scholz, master's degree Mechanical Engineering, Boston
Mira Aroyo, Ph.D in genetics, Ladytron
Diane de Kerckhove, Ph.D materials science, professional jazz singer
Dexter Holland, Ph.D candidate molecular biology, The Offspring

Aye, never heard of them.

Art Garfunkel, masters in mathematics, Simon and Garfunkel

Science. He may have a masters but he's known as Simon and Garfunkel, not Masters in Mathematics.

Feynman.
Einstein.
Sagan.

They're about the only ones that "the man on the street" will know to the same level as a rock star, and only Einstein could I be confident of that statement.

"28

One way to catch a large amount of fraud (and simple error) would be a requirement to submit ALL of the raw data a paper is based on before the publication date"

It might catch it, but it doesn't seem to have any effect. See the ongoing refusal to pursue Wegman's fraud. It didn't even need all the raw data etc.

PS raw data was available to McIntyre, but he demanded it from someone who couldn't give it.

That was fraud too.

Hasn't even raised an eyebrow.

Brian Deer: "I find it difficult to believe that you expect much "hard data" in an 800-word Guardian blog item..."

We've come to expect meticulous documentation from you.

"...much as I find it difficult to believe that you can't see an entirely reasonable parallel with sexual abuse in the Catholic church, which is perhaps the seminal example of institutional denial in the face of decades of relentless anecdote."

Is comparing people to pedophiles and their enablers an appropriate analogy to use any time one criticizes a particular institution for not doing enough to eliminate misbehavior?

Again, you've set a high enough standard that we expect better from you.

By Dangerous Bacon (not verified) on 05 Aug 2011 #permalink

What about the recreation of studies that happens all of the time? Or studies that are done that only provide new data to an already established hypothesis? Where do those get published in order to increase the credibility of good science, versus those experiments that are done once and accepted?

My area of expertise, psychology of terrorism, is farking horrible with a lack of reproducible research. We're forced to accept small effect studies and nearly statistically insignificant results in order to develop large and small scale trauma programs because of the rarity of the events. No one wants to try to create reproductions of these, because there would be no benefit and a lot of risk.

There has to be a better way to communicate and evaluate results.

@Wow #31:
Hawking. And, back in the day, Salk and Sabin. But, yeah, you're absolutely right.

By Queen Khentkawes (not verified) on 05 Aug 2011 #permalink

And in such a short piece, using inflammatory language - like the comparison to the priest sex abuse scandal - without some inclusion of context unnecessarily charges the issue emotionally.

@Brian Deer

I'm 21 years old, and not very politically or socially active (I don't watch the news or read newspapers), so I don't have a very keen appreciation of governmental or regulatory functions; ergo, I'm bound to miss A LOT of your points regarding such functions.

I'd really like to know however, if you think this could be cost-effective?

I work in retail, and as a parallel (albeit a VERY small one) we have shoplifters on a daily basis. I have two straightforward ways of handling this: accept the loss, or don't. If I choose not to, how am I going to prevent shoplifting? It's going to take a lot of my time, and a lot of my company's money, to reduce theft to, say, 5% of our total revenue.

My vague point being: won't regulation mean both a big financial strain on government and a potentially crushing slowdown of publication/research speed?

By VikingPhil (not verified) on 05 Aug 2011 #permalink

wow @31

I find it very funny that you're saying scientists aren't as well known as rock stars and then admit you apparently know of a lot of rock stars. 30% recognition it seems.
And so what if they became rock stars and then scientists, the order doesn't have a lot to do with it.
Neil deGrasse Tyson is fairly well known these days too. Heisenberg, Crick & Watson, Curie, Oppenheimer, there are plenty of names that a decent segment of the population knows.

By Lynxreign (not verified) on 05 Aug 2011 #permalink

Sorry, that was supposed to be "you apparently don't know a lot of rock stars"

By Lynxreign (not verified) on 05 Aug 2011 #permalink

Another scientist that a lot of people may know due to numerous television appearances is Michio Kaku. He's been on Discovery Channel quite a bit, History channel, Syfy. He's been on programs about UFOs and such. He's a well-established and respected scientist, so when he says something on one of those channels, he gives some weight to the argument. Unfortunately, he also comments on quite a bit that is outside his expertise, thereby giving weight to specious claims. Now, most people probably wouldn't question him too much because, hey, he's an established and respected scientist! I.e., he's a "rock star."

As to whistle-blowers and career jeopardy, it's not really such a conspiracy theory. In fields that are relatively small, even anonymity is not an assurance against retribution, especially if allegations are against a prominent figure in the field. If you only have a handful of labs, it can be figured out pretty quickly where the charges are coming from. It wouldn't take much to spread a rumor that essentially casts a shadow over everything the whistle-blower does, especially with the weight of authority behind the rumor. However, in fields where there are a lot of labs and a lot of equally prominent figures, there is probably less chance of retribution.

"Sorry, that was supposed to be "you apparently don't know a lot of rock stars""

This may be true.

It does lend itself to the idea that more people know rockstars than know rockstar scientists, however.

After all I can list quite a lot of high-profile rock stars.

The list of high-profile scientists is a lot shorter.

Despite my lack of knowledge of rock stars!

"As to whistle-blowers and career jeopardy, it's not really such a conspiracy theory."

Well that IS true. I'll agree to that. It DOES happen. It IS a problem.

But the problem being so large? That's a conspiracy theory.

I find it difficult to believe that you expect much "hard data" in an 800-word Guardian blog item

Why shouldn't I expect some data? After all, I managed to sprinkle a little hard data into my post in the form of a link to a meta-analysis with a couple of sentences describing its results, and I'm not even a journalist, much less an award-winning journalist. There's no reason you couldn't have done the same, had you chosen to do so. All it would have taken is leaving out a sentence or two of your inflammatory language about pedophile priests and "shrieking" scientists and inserting a brief description of a study or two, with a link or two. You didn't do that. Instead, you chose to rely on anecdotes and ad hominems.

What's also disappointing is that there actually is a real case to be made for finding a way to decrease scientific misconduct. You just didn't make it.

much as I find it difficult to believe that you can't see an entirely reasonable parallel with sexual abuse in the Catholic church, which is perhaps the seminal example of institutional denial in the face of decades of relentless anecdote.

In context, you clearly chose that example to be the most inflammatory example you could think of. What is disappointing is that you of all people should know the dangers of relying on anecdotes. After all, how was the whole MMR scare started? What sustains the anti-vaccine movement, both here and in the UK? Anecdotes. It's the anecdotes of parents who thought their children regressed into autism after receiving the MMR jab (to use the British parlance) or another vaccine. You know as well as I do that science failed to find the link between the MMR and autism that Wakefield promoted. Anecdotes in this case were profoundly misleading. Or have you so soon forgotten how dangerous it is to rely on anecdotes? Even when anecdotes are well documented to be true, they are at best hypothesis-generating, not hypothesis-confirming.

I'm with Dangerous Bacon on this one. You've set a high enough standard that I really do expect better from you.

One way to catch a large amount of fraud (and simple error) would be a requirement to submit ALL of the raw data a paper is based on before the publication date.

This was a quite literal LOL moment for me. Clearly you've never dealt with interesting data volumes. In some fields it might possibly be vaguely credible. In others, this would require journals to accept and store hundreds of petabytes (even exabytes) of data for a single paper.

And that's completely leaving aside the important little facts that (a) proprietary formats are often used, (b) looking at the data in any meaningful way would require a complete replication effort, (c) in many experiments the overwhelming majority of data is discarded immediately because there's no way to store such volumes (i.e. those exabytes are the 0.01% that looked most interesting at a quick automated glance).

And, of course, in many genomics experiments, the petabytes of data are already stored and available to researchers via the web.

I do not care to draw lines between fraud and the 1001 ways to lie about what results you actually obtained and what your models say about the interpretation of that data. Lumping all those together under "being false" or "cheating", I think the problem is very large in some fields. I could minimize the problem by more narrowly defining what "really horrible cheating" means, but I don't get why I would want to do that. More than half the papers I review carefully in massively parallel mRNA assay work contain apparent cheats (as some papers have shown, you usually can't perfectly prove whether it's a cheat or not, cause authors, perhaps defensively, don't provide enough data, or methods, to enable checking). Maybe only nerdy folks can really notice how common it is. There are labs with huge numbers of publications on certain topics where most competent people in the field are agreed that the authors are utter slime - but we only say this in the hallways. For example, I have several particular labs in mind right now, but I sure ain't going to name them - those people review my papers and grants! I may even have to collaborate with them in future. I can't even post anonymously on the internet when I think or know I see cheating, for fear the offenders will figure out it must be me.
IOM in the US has a committee looking at things like that right now, partly stemming from the mess from Anil Potti, which I offer as a better example than Wakefield, except that it's technically more complicated. If you prefer the wild-west of proteomics and perhaps an even more extreme example (I am not asserting I know what the real truth is, and I don't wanna get sued), look into the story of Robert Getzenberg. I think better review could have helped in both cases, and greater sharing of actual data and code as well. (Actually in those two cases there were heroic people who stuck their necks out in public, saying they were skeptical, but that is rare.) So there are things we can do. Finding more and better ways is a worthy goal, and IOM is working on it, and facets of it have undoubtedly been discussed here and on many other science blogs. Questions like how we can obtain good review, what data and methods can be demanded, and by what mechanism that is enforced (e.g. you can cough up microarray data into GEO, but fail to give the clinical variables, or do other things to make it unusable).
That science fixes itself eventually doesn't tell us anything about how to better deal with the problem. It doesn't stop millions of dollars going to fund people who create smashing results built on utter crap. It doesn't stop me from making mistakes and wasting money because someone was lying about gene G. I refuse to tone troll about Brian Deer, because I think his arrows land close enough to the mark, and the problem is very real, at least in my world.

"Heisenberg, Crick & Watson, Curie, Oppenheimer"

You're not going to get anyone under 30 to know who Oppenheimer was unless they're already studying science seriously.

C&W is old news, again, 10 years after the brouhaha and nobody knows what the problem was.

None of those are a "Britney". Einstein is. Hawking was at least 8 years ago, not so sure now. Sagan hasn't been seen for ages, so he's fading fast from stardom (much like, say, Johnny Rotten has).

A few regional specialties, but mostly because they're getting face-time. When that ends, their importance fades quickly.

Elvis is still going strong.

jrkrideau:

If I have the rough timetable right, Deer started questioning Wakefield's results in 2004 and in 2010 (2011?) there was a result? Glad the medical establishment moves so quickly.

You might be interested to know that the very issue of The Lancet in which Wakefield's now-retracted study was published also carried a blistering editorial by Robert Chen and Frank DeStefano titled "Vaccine adverse events: causal or coincidental?" (with free registration you can read it now). Then just a year later a researcher from the very same Royal Free published the following:
Lancet. 1999 Jun 12;353(9169):2026-9.
Autism and measles, mumps, and rubella vaccine: no epidemiological evidence for a causal association.
Taylor B, Miller E, Farrington CP, Petropoulos MC, Favot-Mayaud I, Li J, Waight PA.
Department of Community Child Health, Royal Free and University College Medical School, University College London, UK.

There were even more questions raised before 2004 by Dr. Michael Fitzpatrick, Dr. Ben Goldacre, and others (including several large studies). There really was no question that Wakefield was wrong.

So why was that insufficient to have Wakefield ignored? Why did the media keep touting him like he had something to say, when he was clearly wrong with the science? Why was Mr. Deer needed?

"Whatever the true incidence of scientific misconduct is, we do know that peer review isn't very good at catching outright fraud and never has been."

Is it fair to say that peer review isn't very effective without knowing the denominator; i.e., what proportion of submitted fraudulent papers survive the peer review process and are published? Conversely, how many fraudulent papers are rejected during the process?

Another rock star scientist: Jeff "Skunk" Baxter

Wow @46

You're not going to get anyone under 30 to know who Oppenheimer was unless they're already studying science seriously.

Perhaps it is the circles I run in, but I have friends in their 20s who know who he is and the HS age kids of friends know who he is.

C&W is old news, again, 10 years after the brouhaha and nobody knows what the problem was.

What? Crick & Watson. "Discoverers" of DNA back in the 50s. Who are you thinking of?

None of those are a "Britney". Einstein is. Hawking was at least 8 years ago, not so sure now. Sagan hasn't been seen for ages, so he's fading fast from stardom (much like, say, Johnny Rotten has).

You'll probably get a lot of "Britney who?" at this point among the younger crowd you seem to be concerned about.

Elvis is still going strong.

Except for jokes about him living with Bigfoot, he really isn't. Not with the demographic we're talking about.

By Lynxreign (not verified) on 05 Aug 2011 #permalink

Brian: Your pedophile analogy may be valid in terms of being a case of institutional denial, but I don't think it's valid in terms of actual comparability. Most people are going to be a wee bit more outraged over child rape than they are over fudged numbers...at least unless someone got hurt, and often not even then. What you did was kinda Godwin.

Such an interesting article and thank you Orac for your usual superb job of updating us about Brian Deer's latest column.

When I first saw the remarks from Dr. Offit, I questioned why he emphasized (regarding Wakefield) the fact that the science behind the study was wrong. IMO, if Dr. Offit were at a medical conference, that remark would certainly be sufficient...but his remarks were solicited by a newspaper and he knew (or perhaps should have known) that the remarks would be picked up by the media worldwide and go "viral" on the internet.

Dr. Offit is considered to be the "go-to expert" on vaccines; he richly deserves the title. I would have preferred that he had prefaced his remark with "As a researcher, I think the most important thing is that Wakefield was wrong." To clarify, he could have added that Wakefield was wrong because his wrongness served an agenda of profit and financial gain.

So Orac, on this argument we disagree.

I first became aware of Mr. Deer's journalism when I viewed the Public Broadcasting documentary about the Wakefield study and how poorly it had been conducted. Yes, I certainly did know about The Lancet publishing the small Wakefield study and its ramifications for public health...I saw it myself while working as a public health nurse. (No need to comment here about the respected Lancet editorial board...they dropped the ball completely.) That very night after viewing the documentary I went straight to Brian Deer's website, and into the wee hours of the morning I read all the articles he had ever published regarding medical issues. Now, Mr. Deer does not write for the NY Times or the Wall Street Journal, where his more colorful commentary would be edited out to meet "the standards" of those newspapers. Mr. Deer writes for a newspaper where his journalistic style is in demand by the publishers and the readership...and his latest column is very consistent with the many articles he has written for that readership.

Insofar as Mr. Deer compares Wakefield's fraud, and what he labels a cover-up of it, to the Church covering up the abuse of altar boys, the remark may have been a bit over the top, but it is not totally out of sync with Mr. Deer's style of reporting.

Must dash now for an appointment. I'm sure this debate will be hot and heavy and ongoing, and I'll probably be back to re-post.

I forgot to add: Just to mitigate the apparent pile-on a bit here, I may agree with at least some of what Brian's arguing, especially if there's no mechanism in the UK at all. I just don't know the ins and outs of practicing research well enough to suggest what is reasonable to implement. I know that the auditing I deal with at work is generally positive and helpful, except for a couple cases where I've seen it go horribly wrong.

So Orac, on this argument we disagree.

"Partially disagree." You see, I can understand where Dr. Offit is coming from, but I could also understand where Brian Deer was coming from on the issue of science being wrong due to fraud or just being wrong. I tackled the issue in my characteristic logorrheic fashion here:

http://scienceblogs.com/insolence/2011/01/misdirected_criticism_by_some…

http://scienceblogs.com/insolence/2011/01/british_science_accused_in_th…

Well that IS true. I'll agree to that. It DOES happen. It IS a problem.

But the problem being so large? That's a conspiracy theory.

Wow, I'm not even thinking about retaliation or prejudice. I'm thinking that if you call out someone you've been working with for two years, that's two years of *your* data and publications that have to be thrown down the crapper along with theirs. That's a real setback to someone early in their career. Of course we hope people will be sympathetic (although not all will be), but even if they are, there's a gap in your publication record you could drive a bus through.

By stripey_cat (not verified) on 05 Aug 2011 #permalink

lilady:

(No need to comment here about the respected Lancet Journal editorial board...they dropped the ball completely.)

(have more detailed comment in moderation)

Except they included an article by Chen and DeStefano on why Wakefield's study was flawed. Also, there were several other scientists who questioned Wakefield's claims, and that included several published papers.

There was no question that Wakefield was wrong. The question is why so many people believed he was right, and why it took Mr. Deer's articles to prove it to the general audience rather than the scientific world.

@ Chris: "why did so many believe he was right"

Because it appealed to deep-seated irrational beliefs that they held ("vaccines are dangerous tampering with Nature"/"autism is not inherited"), just like Burt's "data" about IQ being largely inherited (thus not affected by environmental causes) fit in fine and dandy with stereotypical thinking about race and class differences in intelligence being somehow *intrinsic* and unalterable.

On a somewhat *lighter* note: Much as I'd like to see more regulation contra fraud, I question whether there will be any money left, given our ( both UK and US) current economic situation, for either regulators or research. But that's just me.

By Denice Walter (not verified) on 05 Aug 2011 #permalink

Einstein is. Hawking was at least 8 years ago, not so sure now. Sagan hasn't been seen for ages, so he's fading fast from stardom (much like, say, Johnny Rotten has).

Hawking is hitting the limelight again, especially with the Discovery Channel's new show, "Curiosity: Did God Create the Universe?"

Sagan's shows are still aired, so I wouldn't count him out just yet.

Also, what about Bill Nye?

The question is why did so many people believe he was right

A significant part of the answer is the way parts of the media swallowed the whole thing hook, line, and sinker without any consideration of whether his results actually justified his claims.

Brian Deer said:

I find it difficult to believe that you can't see an entirely reasonable parallel with sexual abuse in the Catholic church, which is perhaps the seminal example of institutional denial in the face of decades of relentless anecdote.

Aside from the inflammatory nature of this statement making you appear to be the one shrieking, the reason it is not an entirely reasonable parallel is that it ultimately works against you. The RCC's institutional denial of sex abuse didn't reach international levels of disgust until real evidence was produced showing both the abuse and the cover-up of it to be institutional. Before then, while there were only anecdotes, it was largely and easily dismissed as little more than attention-seeking accusations.

By Jeremy Shaffer (not verified) on 05 Aug 2011 #permalink

One has to remember: the editor of the Lancet conspired with Wakefield against Deer, and Goldacre wrote articles saying that Deer's reports shouldn't matter, because the science was already against vaccines causing autism.

Brian Deer's reactions are fully understandable, even if you disagree with them.

Didn't I say on more than one occasion that I could understand how Deer might react the way he did? The problem is that he shoots his own case in the foot and risks alienating some of his biggest fans (like me) by using such comparisons. He's a big boy; he can decide to do that if he wishes, but I will call him on it, regardless of how much gratitude and admiration I have for his work unmasking Wakefield.

Finally, we found that the integrity of the peer-review process can only ever be as robust as the integrity of the people involved.

True, but of course one could say the same about Parliamentary committees...

Isn't there a rather large elephant in the room here? I'm talking about Mr. Deer going after the trickle of fraud in conventional research when compared to the veritable flood of deception in CAM, which seems to enjoy carte blanche when it comes to accountability. Why not go after the greater medical fraud, Mr. Deer, and then revisit real science, worthy as that may seem to you. My humble opinion.

@beamup #43, Orac #44: granted, the volume of data for some physics papers (and NMR papers come to mind as well) wouldn't work well for this model. Raw patient records from clinical trials would be difficult, even unethical, to copy and store, and the scored/anonymized data would probably be substituted. Data in an accessible database wouldn't need to be copied. Different types of journals (and granting bodies) would certainly need to work out their own standards for what gets filed and under what circumstances it should be unsealed. But the data for the vast majority of papers in chemistry, biology, psychology and most other fields would fit on an individual DVD per paper. Blots before the contrast was adjusted, TLCs, chromatograms, surveys, interviews - most raw data just doesn't take up that much space. For the next rung, say NMR papers: put it on a hard drive, sign and seal it into an envelope, give it to the dean or mail it to the editor (see the sketch just after point (c) below for what a digital version of "sign and seal" might look like).

(a) Proprietary data formats: so what? Store it in whatever format was used to generate the paper, proprietary or not. If there are serious questions about your work, enough so that your editors have a formal beef with you, you will presumably be able to replicate your results from your own data.

(b) Looking at the data in any meaningful way would require a complete replication effort: Yes, exactly. That's often how fraud/error is identified: people try to replicate an experiment or technique and can't get it to work. The vast majority of experiments are replicable by someone else in the field, given the proper materials, strains, etc. Having all the information might obviate many needless attempts at replication (i.e., the error gets found in the data/methods), or at least speed those efforts up via details that had been left out of the methods section.

(c) exabytes of data pared down from zettabytes of data: all right, got me there. Y'all get a pass.
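
To make the "sign and seal" idea a bit more concrete: below is a minimal, purely illustrative sketch (Python; the "raw_data" directory name and everything else about it is hypothetical, not something any journal currently requires). Instead of mailing a hard drive, an author could deposit a manifest of cryptographic hashes of every raw-data file at submission time. The files themselves stay with the author, but any later substitution or quiet "correction" of the raw data becomes detectable against the deposited manifest.

import hashlib
import json
import pathlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large raw files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        while block := handle.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def build_manifest(data_dir):
    """Map each file under data_dir (relative path -> hash); this is what would be timestamped and deposited."""
    root = pathlib.Path(data_dir)
    return {
        str(path.relative_to(root)): sha256_of(path)
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }

if __name__ == "__main__":
    # "raw_data" is a hypothetical folder of blots, chromatograms, spectra, survey files, etc.
    print(json.dumps(build_manifest("raw_data"), indent=2))

Verifying a later copy of the data is then just a matter of re-running the manifest and comparing, which is about as cheap as this kind of auditing gets.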

I mis-posted (is that a word?) about disagreeing with Orac...I partially disagree insofar as his taking Deer to task about Dr. Offit. I do agree that (on the face of it) Deer's comparison equating a cover-up to the bishops and cardinals of the Church covering for pedophile priests by transferring them to another church with new altar boys is a bit over-the-top.

It is difficult for me and I suspect for other pro-vaccine advocates, to ignore the ramifications of the Wakefield study on the individual lives of kids with autism, their families as well as the undermining of public health initiatives to prevent the spread of vaccine-preventable diseases.

I have some other (good) "baggage" as well, as the parent of a profoundly/multiply handicapped child and as a strong, very vocal advocate for all disabled children and adults...since 1976. I suspect that is why I so appreciated Deer's efforts to reveal the extent of Wakefield's treachery and Orac's ongoing efforts to bring the story to his wide readership. Orac, no one could ever accuse you of treading lightly when it comes to Wakefield, nor could that accusation ever be leveled at Brian Deer.

There is a problem with certain physicians who do not practice good medicine, and with a very small percentage of doctors who blatantly commit malpractice on multiple patients. It seems to me that way too much time elapses between the time serious complaints are lodged with licensing boards and the time the process of really investigating the alleged malpractice begins. And there is the perception "out there" that in many instances other patients are put in peril due to the slowness of the process to fully investigate and take action, if deemed necessary.

But there were no complaints from the parents of the children who were subjected to invasive and needless testing. Some of the parents had the (false) belief that their children were damaged by the MMR vaccine. Others were looking for the big payoff...but they all participated with the anticipation of suing Big Pharma and being compensated. Of course, the kids being sent to Wakefield by solicitors, Wakefield's highly paid fees as the "hired gun" professional plaintiff's witness, and the prospective big score of developing a single-antigen measles vaccine were the real reasons for the study and its fraudulent results.

The take-away lesson from today's debate is that we are all on the same side: honesty in research, protecting kids, providing for the public health of our citizens, and taking down the researchers whose bogus studies imperil preventive health initiatives.

It should be quite interesting to see how the "journalists" at Age of Autism "spin" Deer's latest article...as well as Orac's blog.

(Now, reverting true to form) When is that SOB Wakefield going back to the U.K.?

I have to agree with Ism, in that CAM modalities can be broadly characterized as fraudulent. How many billions are wasted on that crap when the money could be used for legitimate scientific endeavors?

It should be quite interesting to see how the "journalists" at Age of Autism "spin" Deer's latest article...as well as Orac's blog.

I rather suspect they'll ignore Deer's latest article. Although it must be painfully tempting to them to latch on to its attack on the scientific establishment, they can't do so without mentioning Deer's previous brilliant takedown of Wakefield's fraud.

It is rather amusing that today AoA published part 5 of its most recent attack on Brian Deer, with no end in sight. Dan Olmsted himself wrote it! Talk about hilarity ensuing.

The key thing to remember is that Brian Deer and I agree about far more things than we disagree about.

The UK House of Commons science and technology committee proposed that fraud be tackled with a national body to co-ordinate action, and that designated research integrity officers be appointed in academic institutions.

Politicians want more control over other people's activities! Who could have predicted it?
I can't see any possible downside to a system that inflicts paperwork and a structure of state commissars upon academic researchers, while leaving private-sector researchers (e.g. pharmaceutical-company employees) to do what they like.

Then of course there are free-lance, self-employed scientists. Good luck imposing "research integrity officers" upon them!

By herr doktor bimler (not verified) on 05 Aug 2011 #permalink

Essentially the UK House of Commons science and technology committee seems to regard all scientists as servants of the State, who can be saddled with whatever scrutiny the committee might dream up.
Go to hell. Go directly to hell, do not pass go, do not collect $200.

By herr doktor bimler (not verified) on 05 Aug 2011 #permalink

and Goldacre wrote articles saying that Deer's reports shouldn't matter, because the science was already against vaccines causing autism.

But this is actually true, even if it is, perhaps, impolitic to point it out in so many words. Fakefield was always a marginal character in terms of actual science, as opposed to column inches. His study had been conclusively debunked years before it was proven to be fraudulent. So Fakefield was not a problem for science. For public health? Certainly. For science as a social institution? I would argue so. But not for science as a way of improving the state of human understanding, which is what Goldacre was talking about.

Denice Walter brings up a very important similarity in #14: Routine regulatory audits of banks and other financial firms - like peer review in science - are not there to prevent fraud. Routine regulatory audits are there to prevent gross stupidity. Anybody in a position to suborn the internal audits that are conducted to catch embezzlement can hide financial fraud from routine inspection if he's smart enough to commit it in the first place.

Financial fraud is caught when Stuff Blows Up in a sufficiently spectacular fashion to make the police and the firm's creditors send in the forensic accountants. Scientific fraud is discovered when fraudulent conclusions blow up in someone's face hard enough to prompt him to spend time tracking down what he usually presumes is a simple error. Or, as in Fakefield's case, when the fraudster makes such a big public production that he attracts attention from people who track down fraud for the sake of tracking down fraud.

Now, in finance you have a good case for more intrusive and heavy-handed regulation, because financial fraud that never blows up in imaginative and novel ways can still do a lot of damage to innocent bystanders. Also, if Stuff Does Blow Up in finance, you have a pretty clear prima facie case for fraud being committed somewhere in the process, since Stuff Should Not Blow Up in a legally run financial institution of any importance if your financial regulation is properly stringent.

Basic science research, by contrast, has very few innocent bystanders (aside, perhaps, from a few grad students who get caught up in fraud committed by their advisor), and lots of stuff blows up for the perfectly innocent reason of attempting to traverse uncharted territory of human understanding. So in basic science, the trade-off is between higher signal-to-noise ratio from cracking down on fraud vs. lower absolute volume of sound research from the overhead imposed by the crackdown. I am not convinced that this tradeoff provides a compelling argument for a crackdown at the moment.

If the purpose is to protect innocent bystanders, I'd look at the ways in which it is possible to suborn IRBs and other ethics bodies dealing with human or primate research. But it's not scientific fraud you're looking for in that case - you can get perfectly fine data from the most appalling atrocities (granted, the sort of mentality that makes atrocities seem like a good idea might also be predisposed to scientific fraud - but that's tangential to the point).

- Jake

Damn Orac, you beat me to it...I've been slumming again at Age of Autism. I find this "diversion" to be compelling...sort of like mind crack to this addict of voodoo science "practitioners".

Olmsted's latest invective in the Age of Autism's Six (or Sixty) Degrees of Separation amongst Big Pharma and the Rupert Murdoch investigation is that Brian Deer purportedly stated (about Wakefield), "He's a charlatan and as slippery as condom lube".

That quote might be another of Olmsted's "misquotes"...if it isn't, bravo, Brian Deer!

Orac, you and Brian Deer both are my heroes.

one could say the same about Parliamentary committees...

rw23 "has struck the wooden skewer upon its thicker end". It is a marvel that UK politicians can turn their attention to fraud and non-integrity amongst non-politicians, without their heads exploding.

By herr doktor bimler (not verified) on 05 Aug 2011 #permalink
"It doesn't matter that [Wakefield] was fraudulent," Dr Paul Offit, a vaccine inventor and author in Pennsylvania, was quoted in the Philadelphia Inquirer the next day as saying. "It only matters that he was wrong."

suggests to me that we may have a problem. Offit seems to be saying, "Hell yes. Invent the data, lie about the findings, just hope you're lucky".

I am entirely confident that this is not, at all, what Offit was trying to say.  I have my doubts whether it is even a fair representation of what Offit actually said to the reporter from the Philadelphia Inquirer; reporters have been known to leave out important context in order to get a more provocative-sounding quote.

I would suggest that what Offit meant is probably far more like the following:

"It might seem that the lesson of the Wakefield affair is that we need to guard against science that is wrong because of fraud.  But 'science that is wrong because of fraud' is only a subset of 'science that is wrong.'  If we focus our efforts on discovering and correcting science that is wrong, that will include science that is wrong because of fraud.  If we focus our efforts on discovering and correcting scientific fraud, however, we may overlook science that is wrong but not fraudulent.  It makes more sense to concentrate on 'is it right or wrong?' than 'is it fraudulent?'"

By Antaeus Feldspar (not verified) on 05 Aug 2011 #permalink

How about fraud in the sense of someone stealing your work? That happened to me. I was using a computer program that did a calculation that was based on a published paper. Part of the calculation was finding values for derivatives, and that part didn't work.
So I read the paper. It had tons of "typo" mathematical errors - it seemed the author wanted to protect his calculation by hiding his actual formulas with "typos".
But there was one real mathematical error, in the calculation of derivatives. It was a subtle error, based on a change of coordinates with nonzero second derivative. I fixed that, and rewrote the computer program based on the fix - and after that, a finite differences test showed the derivatives were right.
So, we called the author of the paper. He flatly dismissed the idea that I might actually have fixed his program, even though I told him I'd tested the derivatives with finite differences.
A few years later, a paper of his appeared, with my fix in it. We *were* mentioned, but in very small print and in an ambiguous way, so someone reading it would probably not realize that he hadn't come up with the fix.
So his ego led him to intellectual theft.
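
For readers wondering what a "finite differences test" looks like in practice, here is a minimal illustrative sketch (Python; the functions are invented for the example and are not the commenter's actual calculation). A claimed analytic derivative is compared against a central finite difference, which immediately flags the kind of error described above: dropping the extra chain-rule term that appears when a change of coordinates has a nonzero second derivative.

import numpy as np

def matches_finite_difference(func, claimed_derivative, x, h=1e-6, tol=1e-4):
    """Return True if claimed_derivative(x) agrees with a central finite difference of func at x."""
    fd = (func(x + h) - func(x - h)) / (2.0 * h)
    return abs(fd - claimed_derivative(x)) < tol * max(1.0, abs(fd))

# u(t) = sin(g(t)) with the nonlinear coordinate change g(t) = t**2.
# Correct second derivative: u''(t) = -sin(g(t)) * g'(t)**2 + cos(g(t)) * g''(t).
g = lambda t: t ** 2
du_dt = lambda t: np.cos(g(t)) * 2.0 * t                 # u'(t): the function we difference
wrong_d2u = lambda t: -np.sin(g(t)) * (2.0 * t) ** 2     # drops the cos(g(t)) * g''(t) term
right_d2u = lambda t: -np.sin(g(t)) * (2.0 * t) ** 2 + 2.0 * np.cos(g(t))

print(matches_finite_difference(du_dt, wrong_d2u, 1.3))   # False - the dropped term is caught
print(matches_finite_difference(du_dt, right_d2u, 1.3))   # True

The point of such a test is exactly what the commenter describes: you don't need the author's cooperation to know whose formula is right.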

@ Jake S: Thank you. While I focused on cognitive and developmental psych (and stat), I found myself managing a great deal of money c. 2000. Basically, by trying to figure out how to protect said money, I became fascinated with fraud and misrepresentation in pseudo-science and finance. Fraud, if it is to be successful, rests upon higher mental functioning and social cognition (person perception, recursive thought, taking the role of the other, self-evaluation, assessing demeanor), skills usually developed around the time of adolescence. A good fraudster needs to anticipate how the other will react, what they want, and what will make them suspicious.

@ Orac: I read those articles plus Jake's most recent one. Funny how he doesn't mention his little adventure @ RI and his tango *avec moi*. I wonder why? (Hint: his audience might read what commenters wrote in response to him.)
-btw- I am basking in the glory of being despised by an Age of Autism contributor.

By Denice Walter (not verified) on 05 Aug 2011 #permalink

This was a quite literal LOL moment for me. Clearly you've never dealt with interesting data volumes. In some fields it might possibly be vaguely credible. In others, this would require journals to accept and store hundreds of petabytes (even exabytes) of data for a single paper.

Well, yeah, but they'd store it in a box. I had a similar reaction, and I'm only from the publishing end. The delivery of "supplemental materials" by journals remains quite haphazard. Even to the extent that somebody is canonicalizing the stuff in the real editorial office, sanity-checking should not be expected.

Isn't there a rather large elephant in the room here? I'm talking about Mr. Deer going after the trickle of fraud in conventional research when compared to the veritable flood of deception in CAM, which seems to enjoy carte blanche when it comes to accountability. Why not go after the greater medical fraud, Mr. Deer, and then revisit real science, worthy as that may seem to you. My humble opinion.

I agree completely about CAM representing a flood of deception, but disagree that the reasonable response to this would be for all investigation of scientific fraud to leave conventional research alone and pursue fraud only in the CAM domain. If this isn't what you meant to suggest should be done, then I apologize for misunderstanding you, but you've left important parts of your argument out: why should investigation of conventional science be done, but not by Deer, who is already doing it and has a track record of doing it well?

By Antaeus Feldspar (not verified) on 05 Aug 2011 #permalink

The British media's penchant for sensationalism and scare mongering is what caused the damage to the public during the Wakefield affair. They ran with Wakefield's "research", and continued to do so even when subsequent studies questioning Wakefield were published. It was a rather comprehensive admission that the British media is both scientifically ignorant and prefers sensationalism to sell stories.

It's disappointing, then, to see Deer engage in the very same exaggeration and emotive sensationalism (Shrieking scientists! Science is like the secretive pedophile-loving Vatican!) that wrought this mess in the first place. Even Deer's defensiveness when he gets called out on this mirrors the British media's frequent aversion to admitting culpability.

If Deer has serious suggestions on how to deal with scientific fraud, or has data relating to the problem, he is welcome to contribute to the discussion. Anything else, like the sensationalism in Deer's article, is simply more noise.

Wow @46

"Elvis is still going strong."

The immediate response by two teenagers:

"Elvis Costello?"

"Elvis who?"

"We also recommend that all UK research institutions have a specific member of staff leading on research integrity."

Sounds like a bad thing. Not only is it a singular member of staff, but a specific one. I'm no expert, but that looks like relying on the opposite of public, independent criticism and replication.

Would there be value in that?

By Svlad Cjelli (not verified) on 05 Aug 2011 #permalink

@Denice#75: There's only one rule for protecting money under your care: Do your own due diligence.

The same rule applies to science, but scientists tend to be a lot better at it than banksters.

- Jake

@ Antaeus Feldspar:

We use a series of filtres to separate the wheat from the chaff:

our scammers often have spurious educational backgrounds - degrees from matchbook-cover schools (OK, so I'm slightly prejudiced) - never completing standard degrees or training.

However, many of our charlatans *are* doctors or have graduate degrees from reasonably respectable, accredited universities.

A finer filtre might be questioning how their work fits in with what we already understand (what first alerted me to Mr Wakefield re neuro-development early on).

What's Step 3? Wish I knew but somehow I suspect that institutional self-monitoring will not do the trick.

@ JakeS- You got it, Mister!

By Denice Walter (not verified) on 06 Aug 2011 #permalink

This entire subject reminds me of the John Darsee, MD case. Back in the early 80s he was a Harvard medical researcher. He was accused of fraud, and later found to have fabricated much of his research, not only while at Harvard, but also while at Emory (residency) and Notre Dame (undergraduate). While at Emory, he published 10 papers and submitted 45 abstracts, of which only 2 of each are still considered valid.

For those of you who have access to Science, try these reports:

1) Science. 1983 Apr 1;220(4592):31-5. Coping with fraud: the Darsee Case. Culliton BJ. PMID: 6828878

2) Science. 1983 May 27;220(4600):936. Emory reports on Darsee's fraud. Culliton BJ. PMID: 6844919

They're good reads.

From the second article:

In one instance of reputed fraud, Darsee coauthored a paper with cardiologist S.B. Heymsfield in The New England Journal of Medicine. The Moran committee report states that "there are no original records of this project at Emory University." Heymsfield does not remember the patients. The published paper does not identify any hospital or clinics as the place the patients came from. Furthermore, the acknowledgement at the end of the paper expresses "indebtedness" to three scientists who apparently do not exist.

Yesterday on NPR's Science Friday there was a discussion of Retraction Watch. They mentioned that, of the over 200 retractions they have covered in one year, over 80 were from one researcher.

I'll probably peruse it later today after doing some errands. Hoping I can get through them after spending the night listening to the alarm from my neighbor's car going off once an hour from 2am to 6am.

Correction: Darsee didn't publish 10 papers and 45 abstracts while at Emory. Rather, 10 papers and 45 abstracts were published, all of which were based in some part on Darsee's work.

I think this is more important than the way I mis-reported it in my earlier post at #84. It shows how one person's fabrication of data can impact many other people.

Oh, wow, there is a Turkish version of me in an alternate universe!

(yes, I know it is a spam-bot)

@ Brian Deer 29:

"With regard to UK institutions, very many have no policies whatsoever for such concerns. In the Wakefield case, for example, University College London - an elite academic institution - introduced procedures I think last September."

UCL has had a policy on academic/research misconduct for many years - for example, I have an old version with a 2001 date on it.

The current policy (http://www.ucl.ac.uk/ras/acs/resgov/research-misconduct-procedure.pdf ) does have a September 2010 date on it but that is the date of the latest edition.

In the unlikely event that Mr. Deer is still reading this thread, perhaps he would do us the favor of listing a few of these UK institutions that have "no policies whatsoever" covering research misconduct. I'm serious. I'd really like to know.

If he can't produce such a list (or even an example or two of such a UK institution), then I'll have no choice but to go beyond simply disagreeing with him over tactics and question his entire thesis, given that he has made the claim that "very many" UK institutions don't have a policy on research misconduct/fraud.

my 2p worth:

I worked in a lab in the UK where, because of the funding we were receiving (from the government Agriculture dept (DEFRA), maybe?), we had to keep meticulous lab books and document every procedure and every sample we took. In theory, we could have had a random inspection on all of this. I still have no idea if this was a story cooked up by the lab head to make us more careful, although we were allowed to be less document-heavy on other projects. Not being in the habit of fudging or lying, I can't say it made any difference to the way I worked, bar having to do lots more paperwork. So some of the funders in UK academe do have systems in place, but my feeling is that it would have prevented bad book-keeping rather than fraud.

I currently scrape funds together to get my research done and have to see 40% of it cut by the Uni for overheads. If we had to fund 'fraud squads' too, how much would we lose then? 50%? To put it in context, my research money this year is 10,000 pounds, of which I see 6,000. To put it in more context, my research is in an esoteric field which few give a crap about.

I'm with Orac, in that I think fraud is rare and hard to detect. A quick look at the Retraction Watch blog does show that peer review catches some of it, and that it is generally (and eventually) dealt with pretty harshly. Aside from each institution having a system in place to deal with fraud as it arises, I don't know how we deal with the rest.

As an aside, when do we get official regulation of journalists in the UK? Or is that off limits? Given there may be as many 'paedophile priest' journos as there are scientists (though my money is on there being far more), can we see heavy-handed and expensive spot checks there, too?

hibob: "That's often how fraud/error is identified: people try to replicate an experiment or technique and can't get it to work. The vast majority of experiments are replicable by someone else in the field, given the proper materials, strains, etc. "

But that's how science works now. It just doesn't bother with the "detect fraud" bit and just has people try to re-create the test of the theory.

So what is this fraudbusting office doing now?

"Also, what about Bill Nye?"

I would count that as a regional specialty. Still popular with hundreds of millions, but regional. I.e. I've heard of him and can google/youtube for him, but I've never seen him.

Professor Brian Cox is somewhat of a regional one too. At least up until Secrets of the Solar system.

Stripey: "that's two years of *your* data and publications that have to be thrown down the crapper along with theirs. That's a real setback to someone early in their career"

But by producing a paper that shows their senior's errors, you've not thrown out your work; you've just shown how your work is BETTER than theirs.

Insta-fame. In science circles.

Most fraud is caught. In the case of Wakefield, his work was already, as far as the science was concerned, of absolutely NO IMPORTANCE WHATSOEVER.

It was the MEDIA who pushed it and made the fraud.

Not science.

A credulous media who were looking for a good story and have carte blanche to make any old shit up "in the interests of balance".

It isn't science that needs a fraudbusting unit; it's the media that needs one, to make them culpable for crap journalism.

I'm seconding Wow@92 on fraud detection. The fact that Wakefield's results had not been reproduced was enough to question them, although without further evidence one could not be sure whether Wakefield had allowed his biases to color his procedures or whether he had engaged in deliberate fraud. Regardless of Wakefield's intentions, the end results were the same.

By Gray Falcon (not verified) on 08 Aug 2011 #permalink

decades of relentless anecdote

Decades, maybe. (Back to Piltdown Man and before.) But relentless? I'm just a layperson, and bow to the superior knowledge of those who're better informed, but I'm certainly not aware of anecdotes of scientific fraud to nearly the extent that, e.g., I'd been aware of anecdotes of priestly pedophilia before more actual data began coming out.

@70: "Basic science research, by contrast, has very few innocent bystanders".
Those who get away with more cheating are rewarded with money and high-visibility papers, while their more honest counterparts aren't. Review the Potti/Nevins/Duke story. Also: they actually had clinical trials running, though perhaps not leading to direct harm of the patients. Their inquisitors at NCI and MD Anderson and other places got to waste lots of time looking into the matter too. The institutional response at Duke was and is very interesting (and complicated), and it leads to the question of what role or accountability the larger institution has. All of that might have been avoided by reviewers or editors saying "I can't follow your methods, data or results". We often don't demand enough data and methods to be able to check the results even just in principle. When mathematicians review math, I don't think they return opinions like "well, it might be correct", but in the biomedical sciences that's just what we often do. We could do a better job, but I admit I can't answer the question of exactly how much effort and money should be spent to achieve that (and who pays).

@96

Re: Sloppy peer review: True, but tangential to the present discussion. What you describe would IMO be inexcusable even if everything about the paper was completely kosher and the conclusion happened to be true.

Re: Funding: Dedicated fraud detection costs money. If you spend more money on fraud detection than the frauds would have otherwise flushed down the drain, then fraud detection is in red ink.

Re: Prestige: I can live with getting upstaged by a crook, if the price of taking him down is that less real work gets done. (Of course, if fraudbusting saves more than it costs, go for it. I just don't think it will.)

Re: Institutional response: "Interesting" and "complicated" are not terms that I like to see in the same sentence as "institutional response to fraud" ;-)

Re: Clinical trials: That is why I'd focus on improving IRBs and similar review structures. Of course it is unethical to conduct clinical trials and then render them pointless by fiddling with the data.

But from a practical perspective, I believe you will catch more unethical clinical trials by strengthening the safeguards against trial designs that, e.g., are too weak to actually test the hypothesis advanced. Those are almost as useless as ones where the data is made up out of whole cloth, and likely to be more numerous (particularly if the oversight of clinical trials is otherwise adequately rigorous).

- Jake