There's an article in yesterday's New York Times about doubts the public is having about the trustworthiness of scientific publications as they learn more about what the peer-review system does, and does not, involve. It's worth a read, if only to illuminate what non-scientists seem to have assumed went on in peer review, and to contrast that with what actually happens. This raises the obvious question: Ought peer review to be what ordinary people assume it to be? My short answer, about which more below the fold: Not unless you're prepared to completely revamp the institutional structure in which science is practiced, and especially the system of rewards.
First, from the article:
Recent disclosures of fraudulent or flawed studies in medical and scientific journals have called into question as never before the merits of their peer-review system.
The system is based on journals inviting independent experts to critique submitted manuscripts. The stated aim is to weed out sloppy and bad research, ensuring the integrity of what it has published.
Because findings published in peer-reviewed journals affect patient care, public policy and the authors' academic promotions, journal editors contend that new scientific information should be published in a peer-reviewed journal before it is presented to doctors and the public.
That message, however, has created a widespread misimpression that passing peer review is the scientific equivalent of the Good Housekeeping seal of approval.
Virtually every major scientific and medical journal has been humbled recently by publishing findings that are later discredited. The flurry of episodes has led many people to ask why authors, editors and independent expert reviewers all failed to detect the problems before publication.
The publication process is complex. Many factors can allow error, even fraud, to slip through. They include economic pressures for journals to avoid investigating suspected errors; the desire to avoid displeasing the authors and the experts who review manuscripts; and the fear that angry scientists will withhold the manuscripts that are the lifeline of the journals, putting them out of business. By promoting the sanctity of peer review and using it to justify a number of their actions in recent years, journals have added to their enormous power. ...
Journal editors say publicity about corrections and retractions distorts and erodes confidence in science, which is an honorable business. Editors also say they are gatekeepers, not detectives, and that even though peer review is not intended to detect fraud, it catches flawed research and improves the quality of the thousands of published papers.
However, even the system's most ardent supporters acknowledge that peer review does not eliminate mediocre and inferior papers and has never passed the very test for which it is used. Studies have found that journals publish findings based on sloppy statistics. If peer review were a drug, it would never be marketed, say critics, including journal editors. ...
Fraud, flawed articles and corrections have haunted general interest news organizations. But such problems are far more embarrassing for scientific journals because of their claims for the superiority of their system of editing.
A widespread belief among nonscientists is that journal editors and their reviewers check authors' research firsthand and even repeat the research. In fact, journal editors do not routinely examine authors' scientific notebooks. Instead, they rely on peer reviewers' criticisms, which are based on the information submitted by the authors.
While editors and reviewers may ask authors for more information, journals and their invited experts examine raw data only under the most unusual circumstances.
In that respect, journal editors are like newspaper editors, who check the content of reporters' copy for facts and internal inconsistencies but generally not their notes. Still, journal editors have refused to call peer review what many others say it is -- a form of vetting or technical editing.
In spot checks, many scientists and nonscientists said they believed that editors decided what to publish by counting reviewers' votes. But journal editors say that they are not tally clerks and that decisions to publish are theirs, not the reviewers'.
Editors say they have accepted a number of papers that reviewers have harshly criticized as unworthy of publication and have rejected many that received high plaudits.
Many nonscientists perceive reviewers to be impartial. But the reviewers, called independent experts, in fact are often competitors of the authors of the papers they scrutinize, raising potential conflicts of interest. ...
Journals have rejected calls to make the process scientific by conducting random audits like those used to monitor quality control in medicine. The costs and the potential for creating distrust are the most commonly cited reasons for not auditing. ...
Journals seldom investigate frauds that they have published, contending that they are not investigative bodies and that they could not afford the costs. Instead, the journals say that the investigations are up to the accused authors' employers and agencies that financed the research.
Editors also insist that science corrects its errors. But corrections often require whistle-blowers or prodding by lawyers. Editors at The New England Journal of Medicine said they would not have learned about a problem that led them to publish two letters of concern about omission of data concerning the arthritis drug Vioxx unless lawyers for the drug's manufacturer, Merck, had asked them questions in depositions. Fraud has also slipped through in part because editors have long been loath to question the authors.
"A request from an editor for primary data to support the honesty of an author's findings in a manuscript under review would probably poison the air and make civil discourse between authors and editors even more difficult than it is now," Dr. Arnold S. Relman wrote in 1983. At the time, he was editor of The New England Journal of Medicine, and it had published a fraudulent paper.
Fraud is a substantial problem, and the attitude toward it has changed little over the years, other editors say. Some journals fail to retract known cases of fraud for fear of lawsuits. ...
When an author is found to have fabricated data in one paper, scientists rarely examine all of that author's publications, so the scientific literature may be more polluted than believed, Dr. Sox said.
Dr. [Harold C.] Sox [editor of the Annals of Internal Medicine] and other scientists have documented that invalid work is not effectively purged from the scientific literature because the authors of new papers continue to cite retracted ones.
When journals try to retract discredited papers, Dr. Sox said, the process is slow, and the system used to inform readers faulty. Authors often use euphemisms instead of the words "fabrication" or "research misconduct," and finding published retractions can be costly because some affected journals charge readers a fee to visit their Web sites to learn about them, Dr. Sox said.
Despite its flaws, scientists favor the system in part because they need to publish or perish. The institutions where the scientists work and the private and government agencies that pay for their grants seek publicity in their eagerness to show financial backers results for their efforts.
The public and many scientists tend to overlook the journals' economic benefits that stem from linking their embargo policies to peer review. Some journals are owned by private for-profit companies, while others are owned by professional societies that rely on income from the journals. The costs of running journals are low because authors and reviewers are generally not paid.
A few journals that not long ago measured profits in the tens of thousands of dollars a year now make millions, according to at least three editors who agreed to discuss finances only if granted anonymity, because they were not authorized to speak about finances.
Any influential system that profits from taxpayer-financed research should be held publicly accountable for how the revenues are spent. Journals generally decline to disclose such data. ...
Journals have devolved into information-laundering operations for the pharmaceutical industry, say Dr. Richard Smith, the former editor of BMJ, the British medical journal, and Dr. Richard Horton, the editor of The Lancet, also based in Britain.
The journals rely on revenues from industry advertisements. But because journals also profit handsomely by selling drug companies reprints of articles reporting findings from large clinical trials involving their products, editors may "face a frighteningly stark conflict of interest" in deciding whether to publish such a study, Dr. Smith said.
(Bold emphasis added.)
There's a lot going on here. For now, I'll just deal with some of the high points.
1. What is peer review supposed to accomplish?
The thing I find most striking in general-audience discussions of the peer-review system is that non-scientists have very different expectations from scientists about what peer review is supposed to accomplish. When instances of fraud are discovered in the scientific literature, people ask, "Why didn't peer review detect this fraud and keep this paper from being published?" Well, how would the reviewers know that the results published in the manuscript were fraudulent? Most likely, they would have to set up the experiments again themselves and see what results they got. This sounds good, but it takes a long time to get the hang of a new experimental system. (Think of how long it may have taken the research team that produced the manuscript now under review to work with that system, refine their technique, and get it to the point where they felt prepared to say they could collect reliable data.) Could a peer reviewer do this experimental verification and return a review of the manuscript to the journal in a timely fashion? Not bloody likely.
The other big obstacle to experimental verification as a routine part of peer review is that there is no reward whatsoever in replicating someone else's result. You don't get publications, or grants, or tenure, from demonstrating that some other guy really found what he said he did. And, the time it would take to verify someone else's data would cut into the time the reviewer ought to be spending doing his or her "original" research projects -- the ones that might help get publications, grants, or tenure. The most likely circumstance for an experimental verification of someone else's results is when the experimental system in question is being used as a starting point for one's own new research. However, usually one builds on published results. The manuscript being peer reviewed is privileged information -- the scientist who is reviewing it is not supposed to take advantage of his or her early (i.e., pre-publication) access to that information to further his or her own research program.
So, as things stand, with no provision in the scientific rat-race for time, funding, or career recognition from verifying someone else's results (especially prior to publication), it ain't gonna happen.
So what is a reviewer evaluating when a manuscript arrives for peer review? The reviewer, a scientist with at least moderate expertise in the area of science with which the manuscript engages, is evaluating the strength of the scientific argument, putting questions like these to the authors: Assuming you used the methods described to collect the data you present, how well-supported are your conclusions? How well do these conclusions mesh with what we know from other studies in this area? (If they don't mesh well with those other studies, do you address that and explain why?) Are the methods you describe reasonable ways to collect data relevant to the question you're trying to answer? Are there other sorts of measurements you ought to make to ensure that the data are reliable? Is your analysis of the data reasonable, or potentially misleading? What are the best possible objections to your reasoning here, and do you anticipate and address them?
While aspects of this process may include "technical editing" (and while more technical scrutiny, especially of statistical analyses, may be a very good idea), good peer reviewers are bringing more to the table. They are really evaluating the quality of the scientific arguments presented in the manuscript, and how well they fit with the existing knowledge or arguments in the relevant scientific field. They are asking the skeptical questions that good scientists try to ask of their own research before they write it up and send it out. They are trying to distinguish well-supported claims from wishful thinking.
It's just that, in the process of this evaluation, peer reviewers are taking the data presented in the manuscript as given. Until there's institutional support for replicators, that's how it is.
It's a separate issue, of course, what journal editors do with manuscripts once they have the referee reports in hand. In cases of a split decision among the reviewers, the editor might well be justified in breaking the tie. (Most of these editors are themselves scientists, so they have relevant scientific expertise they can bring to bear.) On the other hand, if journal editors were to completely ignore the referee reports, that would be a problem. It doesn't much matter to me what their reasons might be (e.g., please the advertisers, help a friend, exercise unbridled editorial power) -- if you're claiming to produce a peer reviewed journal, you have to take the input of the peer reviewers seriously.
As to the idea that journal editors need to tiptoe around so as not to offend the powerful scientist-authors or drive them to other journals: please! Can we remember that science is supposed to be about the knowledge the community produces, not about feeding monster egos? If certain scientists don't like being called out for making sloppy arguments or doing shoddy work, perhaps they ought not to make sloppy arguments or do shoddy work. The way the system is set up now, there are loads of scientists knocking themselves out to get their results published. Not only do they want to share the knowledge, but they want to build their record of publication to get tenure, grants, and the like. (Did you know that a year on your CV without publications can be a strike against you when you're applying for grants? It's true.) The journals will have plenty to publish, and the quality of what they publish will be better if decisions are based on the science rather than on interpersonal politics.
Given that peer review doesn't routinely include the replication that might detect fraud, journals will also have to grow more of a backbone about retractions. When fraud is established, the authors of the fraudulent papers should be flagged, in bold print, in perpetuity on the free home page of the electronic journals and on the first page of every issue of the print journal. If scientists do your journal dirt by publishing lies, you have an obligation to out them. The report of their wrong-doing should come up in every literature search that turns up a paper of which they were an author.
Yes, it's harsh. But screwing with the reliability of the scientific literature is not something that scientists should tolerate. Reviewers trust that the authors of the manuscripts have done their best to provide reliable data. You violate that trust, it's bound to get ugly.
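For what it's worth, the flagging I have in mind wouldn't be technically hard. Here is a minimal sketch (in Python, with every name and data source invented for illustration -- no journal or literature database actually exposes anything like this) of a search layer that checks each hit's authors against a registry of established misconduct findings and attaches the notice to the result:

    # A minimal sketch of misconduct flagging in literature-search results.
    # Everything here is hypothetical: the registry, the Paper record, and
    # the search layer are invented for illustration, not drawn from any
    # real system.

    from dataclasses import dataclass, field

    @dataclass
    class Paper:
        title: str
        authors: list[str]
        flags: list[str] = field(default_factory=list)

    # Registry of authors with established misconduct findings (hypothetical data).
    MISCONDUCT_REGISTRY = {
        "A. Fabricator": "Research misconduct established; see retraction notice.",
    }

    def annotate_results(results: list[Paper]) -> list[Paper]:
        """Attach a prominent notice to every hit co-authored by a flagged author."""
        for paper in results:
            for author in paper.authors:
                finding = MISCONDUCT_REGISTRY.get(author)
                if finding:
                    paper.flags.append(f"MISCONDUCT FINDING ({author}): {finding}")
        return results

    hits = [Paper("An exciting result", ["A. Fabricator", "B. Honest"])]
    for paper in annotate_results(hits):
        print(paper.title, paper.flags)

The point of keeping the registry tied to authors rather than to individual papers is that the notice follows the wrong-doer: it would surface on every search that returns any of their papers, not just the retracted one.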
You've picked up on something I've often wondered about when considering the concept of peer review - namely, that checking that Professor Random's paper on "the reaction rates for ruthenium-catalysed decarbonylation of racemic triaminoalkylones" is scientifically accurate in its findings, data and conclusions is not fun. Sealing a deal with big pharma to investigate a new athlete's foot treatment based on Dr Trump's paper on "tungsten-catalysed synthesis of dihydroxyacrylone" is significantly more fun, since it means the numbers on your bank statement go up rather than down.
Sounds like the solution is institutionalised peer review - paying people to look over other people's shoulders, criticise their experimental technique, question their decisions, and act as in situ editors.
TW:
It wouldn't be possible for reviewers to test anywhere near all the experiments that get published in the scientific literature. But there is no need to. Bad science eventually gets weeded out (though it may take time) because researchers have an even stronger motivation than money to do so: their reputations are enhanced if they shoot down an influential paper in an important area.
Of course bad science in unimportant areas is much less likely to be corrected.
Janet,
I fully agree with your approach of exposing the cheaters and fraudsters on the journal's website and in its printed version. I would go even further; the journal's editor(s) should sue the author(s) of the retracted paper for damages.
Janet,
Thanks for the illuminating discussion on peer review. It's far from a perfect system, but it's the best thing we've got. Something so important definitely needs to remain under constant surveillance, though, and there's nothing wrong with constructive self-criticism.
Although somewhat peripheral to the main point, I thought the closing was interesting:
"Journals have devolved into information-laundering operations for the pharmaceutical industry, say Dr. Richard Smith, the former editor of BMJ, the British medical journal, and Dr. Richard Horton, the editor of The Lancet, also based in Britain.
"The journals rely on revenues from industry advertisements. But because journals also profit handsomely by selling drug companies reprints of articles reporting findings from large clinical trials involving their products, editors may "face a frighteningly stark conflict of interest" in deciding whether to publish such a study, Dr. Smith said."
I think that this melds with an argument for a heavier reliance on open access systems, funded by author page fees. According to this account, that would reduce some of the corporate influence on the scientific literature.
Oh my! It all seems so simple.
If the goal is distribution of experimental results obtained by the scientific method, results are judged by reproducibility and utility to others, and the scientific approach is by definition self-correcting, then why have the peer review system at all?
Polly A.
[Despite everything, I believe that people are really good at heart--Anne Frank]
Who would pay for research verification?
Well, the cost of one library subscription to a top-line Elsevier medical journal is enough money to keep my lab running for over a week. Four subscriptions would fund a one-year post-doc. Journal publishers are not starving writers banging out reports on typewriters in a frozen basement in Rotterdam.
And as for publishing, I reckon there is more good science that goes unpublished than there is fraud in print. A more serious problem for the publishing system is that there is no incentive for non-professional-grant-applicants to publish. This is especially true for commercial labs, which have a competitive advantage in keeping their cards close to their vests.
But the most important, and frightening, bit of information from the NYTimes article is this: It appears that the public, and even science journalists, cannot distinguish between good science and correct science. The article states:
"None of the recent flawed studies have been as humiliating as an article in 1972 in the journal Pediatrics that labeled sudden infant death syndrome a hereditary disorder, when, in the case examined, the real cause was murder. "
An article is humiliating because it was wrong? I'm willing to bet that most papers published in 1972 have by now had at least some, if not most, of their conclusions disproved. The purpose of journal articles is not to state the "truth" (whatever that means). It is to report the results of well-done research.
Oh, my! Everything seems backwards, so redundant and so wasteful.
Why not let the global peer review system of reproducibility, utility, self-correction and evaluation of investigator reputation go to work after publication, not before?
As opposed to the pre-judgment and censorship by a few of limited expertise, knowledge and time, who arbitrarily decide what gets to be evaluated by the whole?
Polly A.
[Despite everything, I believe that people are really good at heart--Anne Frank]
I explain science and peer review to people using Wikipedia as an example. One person comes along and writes a little bit about a subject. Then, other people come along and edit the page - they add more information, argue about discrepancies, etc.
What science is at any point is just our best Wikipedia page. If someone with new and spectacular information comes along tomorrow, the whole page may get rewritten.
The point is that science is a process, not a static set of facts.