How hard is it to clean up the scientific literature?


Science is supposed to be a project centered on building a body of reliable knowledge about the universe and how various pieces of it work. This means that the researchers contributing to this body of knowledge -- for example, by submitting manuscripts to peer reviewed scientific journals -- are supposed to be honest and accurate in what they report. They are not supposed to make up their data, or adjust it to fit the conclusion they were hoping the data would support. Without this commitment, science turns into creative writing with more graphs and less character development.

Because the goal is supposed to be a body of reliable knowledge upon which the whole scientific community can draw to build more knowledge, it's especially problematic when particular pieces of the scientific literature turn out to be dishonest or misleading. Fabrication, falsification, and plagiarism are varieties of dishonesty that members of the scientific community look upon as high crimes. Indeed, they are activities that are defined as scientific misconduct and (at least in theory) prosecuted vigorously.

You would hope that one consequence of identifying scientists who have made dishonest contributions to the scientific literature would be that those dishonest contributions would be removed from that literature. But whether that hope is realized is an empirical question -- one taken up by Anne Victoria Neale, Justin Northrup, Rhonda Dailey, Ellen Marks, and Judith Abrams in an article titled "Correction and use of biomedical literature affected by scientific misconduct" published in 2007 in the journal Science and Engineering Ethics. Here's how Neale et al. frame their research:

Journals occasionally report on notorious research integrity violations, summarizing information from scientific misconduct investigations, and noting the affected publications. Many other lesser-known cases of fraudulent publications have been identified in official reports of scientific misconduct, yet there is only a small body of research on the nature and scope of the problem, and on the continued use of published articles affected by such misconduct.

The purpose of this study was to identify published research articles that were named in official findings of scientific misconduct that involved Public Health Services (PHS)-funded research or grant applications for PHS funding, and to investigate compliance with the administrative actions contained in these reports for corrections and retractions, as represented in PubMed. This research also explored the way in which such corrections are indicated to PubMed users, and determined the number of citations to the affected articles by subsequent authors. (6)

Worth noting here is that the research described in this paper focused on one particular part of the scientific literature, namely the published findings in biomedical sciences -- so the findings here may not tell us a whole lot about the situation in chemistry, or physics, or astronomy, or geology, or any other scientific field whose publications are not indexed in PubMed. It's also important to notice that this study is concerned with the fate of publications of authors who have actually been caught being dishonest to their scientific peers.

The standard for "being caught" the researchers apply here is having an official finding of misconduct (by the Office of Research Integrity, or ORI) against you. In part, this is because such a finding usually includes consequences connected to publications that may embody the dishonesty toward fellow scientists. Neale et al. write:

When the final report of an institutional inquiry into misconduct deems that the allegation of scientific misconduct has been substantiated, the ORI issues a "Finding of Scientific Misconduct" report, which is published in its Annual Report, and also in the NIH Guide to Grants and Contracts. These reports usually specify administrative actions against the respondents. Routine administrative actions include debarment from applying for PHS funding or participating in study sections for a period of time, and notifying editors of any published articles determined to be fraudulent, plagiarized and/or in need of some type of correction, or directing the respondents to make such notifications. (7)

One hears (sometimes with only a vague gesture to the empirical data) that the incidence of fabrication, falsification, and plagiarism is much higher in the biomedical sciences than in other scientific fields, especially the "hard sciences". (Neale et al. don't make that claim, as far as I can tell.) Whether or not that is so, the body of literature associated with the biomedical sciences is well-indexed and powerfully searchable through PubMed, a service of the U.S. National Library of Medicine and the National Institutes of Health.

But, as Neale et al. point out, there may be reason to worry about whether articles indexed in PubMed that have been retracted or corrected will actually be identifiable as such:

The National Library of Medicine (NLM) policy for tagging articles with corrections states that notices of errata and retractions will be linked to articles indexed and available on its online PubMed database only if the journal publishes the errata or retraction in a citable form. The citable form requirement stipulates that the errata or retraction is labeled as such, and is printed on a numbered page of the journal that published the original article. The NLM does not consider unbound or tipped-in error notices, and for online journals, only considers errata listed in the table of contents with identifiable pagination. (7)

They also note:

In a 2002 survey of journal retraction policies, Michel Atlas noted one participant who stated that his journal did not publish retractions. Some journals allow one author to retract an article, but other journals require that every coauthor consent to the retraction. Fear of litigation is behind the inaction in some cases. (7)

Of course, not every retraction is the result of a finding at the end of an inquiry into misconduct. But, in situations where there has been an inquiry into misconduct, and the finding is that there has been misconduct that requires correction of the literature via a correction or a retraction, you would hope that the coauthors of the paper would consent to the appropriate action.

You'd also hope that scientific journals would recognize their interest in serving their readers by ensuring the scientific quality of the articles they publish. The pre-publication screening (via peer review and editorial oversight) can do part of the job here, but even in situations where there is nothing like misconduct on the part of authors, occasionally honest mistakes are discovered after publication. Why on earth would a journal have a policy that would prevent authors who become aware of such mistakes from communicating the relevant information to their fellow scientists who have access to the published work now known to be mistaken, whether through a correction or a retraction?

In any case, the present study points to policies and facts on the ground that might make us worry about how completely errors in the scientific literature (whether honest mistakes or intentional deceptions) are corrected.

Neale et al. set out to quantify the extent to which papers found needing to be retracted on account of misconduct were actually identified as retracted in the literature that scientists draw upon in their scientific work. To do this, they looked at the NIH Guide for Grants and Contracts and the ORI Annual Reports for 1991-2001. From all the published "Findings of Scientific Misconduct" in these two sources, they collected the information on publications identified as affected by the misconduct, on what administrative actions were taken against the people found to have committed scientific misconduct, and on whether those found to have committed scientific misconduct accepted responsibility for the misconduct.

Next, Neale et al. searched PubMed to see if the publications in question had been retracted, or if errata for them had been published. (As I understand their methodology, they searched for the articles themselves, and were checking to see if the retractions or erratum notices came up in these search results.)
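
For readers who want to poke at this sort of thing themselves: PubMed records can also be queried programmatically through NCBI's public E-utilities, and the XML returned for an article lists any retraction, erratum, or comment notices the NLM has linked to it. Below is a minimal sketch in Python of how one might check a single article for such notices. This is only an illustration -- not the procedure Neale et al. describe -- and the PMID in the example is purely hypothetical.

import urllib.request
import xml.etree.ElementTree as ET

# NCBI E-utilities endpoint for fetching full PubMed records as XML.
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def correction_notices(pmid: str):
    """Return (RefType, RefSource) pairs for notices PubMed links to this article."""
    url = f"{EFETCH}?db=pubmed&id={pmid}&retmode=xml"
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    notices = []
    # In PubMed's XML, CommentsCorrections elements carry a RefType attribute
    # such as "RetractionIn", "ErratumIn", or "CommentIn", plus a RefSource
    # citation identifying the linked notice.
    for cc in tree.iter("CommentsCorrections"):
        ref_type = cc.get("RefType")
        ref_source = cc.findtext("RefSource", default="")
        if ref_type in ("RetractionIn", "ErratumIn", "CommentIn"):
            notices.append((ref_type, ref_source))
    return notices

if __name__ == "__main__":
    # Hypothetical PMID, used purely for illustration.
    for ref_type, source in correction_notices("12345678"):
        print(ref_type, "->", source)

(PubMed also tags retracted articles with the publication type "Retracted Publication", so checking an article's publication-type list is another way to catch at least the retractions.)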

Finally, they searched Web of Science to locate instances where other publications cited these articles which had been flagged as needing to be corrected or retracted. Remember that they were considering misconduct findings that were published through 2001 (and thus, one would assume, affected scientific work published prior to the findings of misconduct). Their examination of the citation history of these papers focused on a time interval after these findings:

Data collection from the ISI Web of Science was repeated two times during 2003, and once during 2004 to refine the data collection methodology. Because citations increase over time, a final citation analysis was conducted during the week of May 17, 2005 and the citations as of that week are reported here. This allowed for a minimum period of 3 years between the publication of an affected article and the cut-off date for the citation analysis.

Publishing may not always move quickly, but you might hope that three years is sufficient to communicate to the scientific community that draws on the literature whether a particular piece of that literature is not as reliable as it was first thought to be.

What did they find?

For the published findings of misconduct in the NIH Guide for Grants and Contracts and the ORI Annual Reports for 1991-2001, 102 articles were identified as needing retraction or correction. (There were 41 researchers whose misconduct was tied to these 102 articles, 19 of them identified as responsible for a single problematic paper and 22 of them responsible for two or more problematic papers. One of those 41 researchers was responsible for a whopping 10 articles in need of retraction or correction.)

Of those 102 articles, 79 reported results that were fabricated, falsified, or misrepresented; two contained plagiarism; 16 gave inaccurate reports of the methodology the researchers actually used; and five reported "results" from fabricated experimental subjects.

Just over half of the 41 researchers here (responsible for 53 of the flagged articles) accepted the findings of misconduct, while five were recorded as disagreeing with the findings or denying responsibility for the misconduct. (The other misconduct findings didn't record the respondents' response to the findings.)

By the time the findings of misconduct were published, corrigenda (corrections) had already been published for 32 of the flagged articles and were "in press" for 16 more. The administrative actions prescribed in the findings of misconduct called for retractions or corrigenda to be published for another 47 of these flagged articles.

For those doing the math, this leaves seven of the articles flagged (as reporting results that were fabricated, falsified, or misrepresented, or as containing plagiarism, or as giving inaccurate reports of the methodology the researchers actually used, or as reporting "results" from fabricated experimental subjects) for which the administrative actions did not specifically call for correction or retraction. However, it's not unreasonable to think that articles flawed in these ways ought to be corrected or retracted, in order to protect the reliability of the scientific literature and the trust scientists need to be able to place in the reports published by their fellow scientists if they don't want to have to do the whole damned scientific job themselves.

How many of those 102 flagged articles turned up with corrections or retractions in the PubMed searches Neale et al. conducted? By May 2005 (which was their data collection cut-off), 47 of the articles were indexed as having a retraction, 26 were indexed as having an erratum, and 12 others "had pertinent information in the PubMed 'Comment' field" (12). Ten more of the articles had no notice of corrigenda in PubMed but did have "an open access link to the NIH Guide 'Findings of Scientific Misconduct' that indicated the article was affected by misconduct" (13). Three had no such link that came up through PubMed.

This adds up to 98 articles -- which means that four of the 102 flagged articles didn't come up in PubMed searches at all.

These results show some variation in how the problematic articles were flagged as problematic to those searching PubMed, but the fact that only three of the articles that did turn up in PubMed carried no flag at all doesn't seem like a terrible level of compliance with the goal of correcting the scientific literature. (Of course, to scientists basing their work on the three problematic papers that were indexed without warning labels, this might not be terribly comforting.)

The next question, though, is whether this correction of the scientific literature was successfully communicated to its intended audience -- especially to other researchers who might be basing their own published work in part on these problematic papers. This is where the question of citations of the 102 problematic papers comes in.

By their data collection cut-off date (May 17, 2005), Neale et al. found listed in the Web of Science database nearly 6,000 citations to the 102 problematic papers. For the whole set of problematic papers, the median number of citations was 26, with a higher median number of citations (36) for the 13 problematic papers for which PubMed didn't have a linked corrigendum. The median number of citations of the papers with errata was 33, while the median number of citations of the retracted papers was 27. One of these problematic papers was actually listed as cited by 592 other journal articles.

Potentially, this is a problem.

Neale et al. note that some of these citations may be a result of the long turnaround time between when researchers submit manuscripts and when the manuscripts are published. If the manuscripts are published at about the same time that the retractions or corrections of the sources they cite are published, it's understandable that the authors citing the problematic papers could not have been in possession of the information that the papers they were citing were, in fact, problematic. (This might, however, take some of the wind out of the sails of those with the firm conviction that errors in the literature will be reliably detected by the other researchers who draw on and cite that literature in their own research.) However, given that Neale et al. were looking at a fairly long window of time after the last of these 102 papers had been flagged, in official findings of misconduct, as problematic, it seems clear that some significant number of these papers were still being cited after retractions or corrections were published (or after information in their comments field in PubMed indicated a problem, or after links in PubMed could have connected searchers to the official misconduct findings relevant to these papers).

Why didn't the authors citing these problematic papers know that they were problematic? (It's hard to imagine that they would cite them if they knew, for example, that they had been retracted.*) Neale et al. have this to say:

Most journals are not open access and on-line availability of corrections in the PubMed 'Comment' is determined by institutional subscriptions, making it difficult for some to learn more about the particular details related to the corrigenda. Researchers should be alert to 'Comments' linked to the open-access NIH Guide for Grants and Contracts, as its 'Findings of Scientific Misconduct' usually provide the most detail about the nature of the problem in the affected articles and are often more informative than the statements about the retraction or correction found in the journals (which do not always reveal that the article was affected by scientific misconduct).

How can the continued citation of research affected by scientific misconduct be reduced? More prominent labeling in the PubMed database is desirable to alert users to notices of retraction and errata. This could take the form of larger or bold fonts for these notices. In addition, a prominent placement of the word 'retraction' on the first page of such articles would be useful, because once a user downloads an article, these notices are left behind.

Some of the problem, in other words, may be due to the vigilance (or lack thereof) displayed by those using the scientific literature, but some of it may come down to the extent to which that scientific literature is accessible to the researchers. Yet another instance where Open Access journals could make life better! (There is an irony in this observation being reported in Science and Engineering Ethics, which is not an Open Access journal.)

Neale et al. note that weeding such problematic papers out of the pool of scientific literature that researchers cite may require journal editors, manuscript authors, and even journal readers to take on more responsibility -- for example, before they submit a manuscript for publication (either initially or after the last set of revisions), ensuring that none of the sources they cite have been retracted or corrected. Failing to exercise such vigilance could inadvertently render their own paper a problematic one (if it depends in part on another problematic paper).

However, until the scientific community is on board in recognizing such vigilance as a duty, it's unlikely that failing to exercise it could itself rise to the level of scientific misconduct.

_________________

*It's possible that at least some of the citations to the problematic papers were citations to identify now-discredited theories or claims. The methodology in the study doesn't seem to have involved tracking down every article that cited one of the problematic papers and characterizing each of those citations (e.g., as approving or disapproving). It might be interesting to see research in how frequently scientists cite papers as reliable prior work or independent support for their findings versus how frequently they cite papers to disagree with them.

- - - - -

Anne Victoria Neale, Justin Northrup, Rhonda Dailey, Ellen Marks, & Judith Abrams (2007). Correction and use of biomedical literature affected by scientific misconduct. Science and Engineering Ethics, 13, 5-24. DOI: 10.1007/s11948-006-0003-1


I doubt that any of this really matters. Fraudulent work that leads to false conclusions is going to be recognized as wrong -- sooner rather than later -- by those to whom it is relevant, if it is actually of any importance.

CPP @1, I reckon it might matter quite a lot to the researchers who end up wasting time, effort, and research funds trying to build something new on the basis of a reported result or technique that turns out (maybe later than sooner) to be fraudulent.

So, in the long run, maybe the community is just fine, but some honest individuals who are part of that community are going to get screwed.

Comrade PolyannaProf, you will be singing a different tune when your grant proposal keeps being dissed because some egregious BS persists unrebutted in the literature.

Prof. S., I think the clear path here is to get PubMed more proactive and assertive about linking to retractions, no matter how they are published.

By DrugMonkey (not verified) on 27 Mar 2010 #permalink

I think that is perhaps the most ill-considered thing CPP has ever stated on the internet. I know this might seem to be hyperbole.

A similarly problematic issue is when labs publish something that they later learn is not reproducible, or is compromised by some fatal flaw, but never bother to report it.

Note that Nature currently has a completely fraudulent crystal structure on their books and they are going to wait for the ORI finding to retract it I guess, even though what has been published goes against our physical understanding of proteins. It is pathetic.

One big problem with this approach is the assumption that people only cite papers for their major conclusion. That's far from the truth. In a typical paper of mine, about half the references are for reagents I use, or methods. If someone describes a plasmid in a paper, sends me the plasmid, and then the paper is retracted because a figure was falsified, I would still cite that paper. What else could you do? (Assuming that you tested the plasmid when you got it, which you should do anyway, because about half the plasmids people send me are not what they say they are.)

Interesting iayork,

that implies we should never retract, just correct? Presumably the retracted papers have at least one unfaked datum in them, after all...

By DrugMonkey (not verified) on 27 Mar 2010 #permalink

One reason for continued citations may be that people simply don't recheck all papers they refer to every time. If you have found and downloaded a highly relevant paper you're most likely going to refer to it over several years. And as you already have the paper, you don't really have any reason to look it up again and again in the online databases. If you miss the retraction notice (easy enough especially if the source is not a journal you normally follow closely) you may continue to cite it long after it's discredited.

It might be interesting to see research in how frequently scientists cite papers as reliable prior work or independent support for their findings versus how frequently they cite papers to disagree with them.

It would indeed. And is it possible that some of the later citations are made with the purpose of drawing attention to discredited findings? If something of this nature occurred in my areas of research interest, I would probably want to point out if a particular hypothesis or school of thought had been discredited (especially if it was at odds with my own conclusions, so it wouldn't be an entirely altruistic act).

I agree with iayork #5. Just because paper X is retracted does not mean all aspects of it are wrong. And similarly, just because paper X is retracted, this does not imply that paper Y, which cites paper X, is also wrong. Perhaps paper Y says "We were unable to replicate the results of paper X and instead find the opposite results." Obviously, paper Y would need to cite paper X.

There are two issues here, which we need to separate. First, is whether the scientific literature will correct itself (as CPP #1 points out). I think there is no doubt that it will. That's the strength of the scientific enterprise. Second, is whether we are wasting money, time, and effort because some doofus screwed up or lied. Again, I think there is no doubt that we will. But is that really so much worse than the other problems inherent in the grant-giving process (which have been discussed and will continue to be discussed ad nauseam elsewhere)?

My view is that it makes sense to flag a "retracted" paper, but that the scientific literature, much like raw data, should never be removed.

"Worth noting here is that the research described in this paper focused on one particular part of the scientific literature, namely the published findings in biomedical sciences -- so the findings here may not tell us a whole lot about the situation in chemistry, or physics, or astronomy, or geology, or any other scientific field whose publications are not indexed in PubMed."

The main reason for this focus is the fact that the publications coming out of this section of the scientific community are the outcome of a cut-throat funding system, which is a fertile ground for misconduct. Money is usually the number one reason for people to break the rules. I strongly believe that scientists in the PHS-funding system are more prone to commit scientific misconduct than geologists, astronomers or chemists due to the way their careers and survival depend on grant funding. Moreover, because PHS funding is public money, it has an investigating arm, the ORI, that doesn't exist in other fields of scientific research.

As for a better way for scientists -- who may cite problematic papers out of ignorance of their retraction or correction -- to keep track of them, ISI's Web of Science could publish a cumulative list of such papers monthly and annually.

Lastly, the approach of CPP (comment #1) is ostrich-like and wasteful.

By Anonymous (not verified) on 29 Mar 2010 #permalink

Fortunately, time will do the job. There is no incorrect scientific result, fabricated or not, that will stand the test of time. Science is self-correcting, a feature that is difficult to find in any other human activity.

My experience is that by the time a retraction actually appears in print, the community has already known for some time that the study could not be replicated or that something was wrong.

One tool to accelerate this process is to have more open journals (like PLoS) with the ability to post comments and notes to each paper. This will bring up doubts sooner and force the authors to respond to the criticisms.

By Dario Ringach (not verified) on 29 Mar 2010 #permalink

Dario,

It depends on whom you count as "the community" -- many of the shady papers in my field may be known only to the people who have the money and resources to go to and schmooze at the important meetings, which would be a small percentage of the people in the actual field.

"Fortunately, time will do the job. There is no incorrect scientific result, fabricated or not, that will stand the test of time. Science is self-correcting, a feature that is difficult to find in any other human activity."

This is both a naive and a dangerous assumption. It is naive because you assume that the self-correction somehow occurs for all fraudulent scientific research in a reasonable amount of time, which is most probably incorrect, and it is dangerous because you trust the self-correction as the only policing necessary for scientific misconduct. Where public money is concerned, the taxpayer cannot afford to wait for science to correct itself. Where science's good reputation and the public's trust in it are concerned, both will be greatly damaged while waiting for self-correction to occur.

By Anonymous (not verified) on 29 Mar 2010 #permalink

Anonymous #14 - Do you have any data on your statement that "the self-correction somehow occurs for all fraudulent scientific research in a reasonable amount of time, which is most probably incorrect"? In fact, most of the cases I have seen on Writedit or that have made the major news media have either been small and unimportant (think yet another bunny-hopping result), fraudulent but correct (think the stem cell result from Verfaillie, where the data was manipulated, but the results were later replicated by other groups), or well-known to be problematic if not known to be wrong in what I consider a very fast time frame (think the physicist boy-wonder Schon or that Korean stem cell researcher [Woo-Suk, according to wiki]).

Science works on a slow time-course. (Time from discovery to impact is consistently 20-40 years. By the way, this is why science is federally funded and not privately funded.) These errors are well-corrected in the scientific time-line.

The idea that somehow taxpayers need to push their idea of efficiency in public systems is the same hypocrisy that leads them to scrutinize every public school that spends five bucks on a new textbook but never to notice the inefficiency of a private business that spends millions on a private jet. The fact is that science does what it is supposed to do very, very well -- it prepares the world for 30 years from now.

Part of the problem is that science is often now sold to the taxpayer as if it is going to solve problems in six months (double the NIH budget and we'll cure cancer). But, as scientists, we need to address this not by complaining that science is inefficient. Rather, we need to correctly explain the 30-year science time-line.

As pointed out by several posters above (CPP, me, Dario Ringach), science is one of the few processes in our current society with a built-in error correction filter.

qaz,

I do not disagree with you and the others who have correctly indicated that science is self-correcting. Where we disagree is on the attitude that this feature of science is sufficient and that there is no reason to do a thing about the fraudsters who commit scientific misconduct, since their fraudulent work will eventually be exposed. The damage these fraudsters cause to science may be temporary, but letting them get away with it will encourage more and more of them to come out of the woodwork. Public funding of science, as we know it, will collapse just as our financial system did in 2008.

By Anonymous (not verified) on 30 Mar 2010 #permalink

Anonymous #16 - I definitely agree that something should be done about the fraudsters who commit scientific misconduct. My understanding is that they're discredited, lose their grant privileges, sometimes go to jail, and that their scientific careers are pretty much ruined. Seems fair to me.

The issue at the top of the post was what should be done with the scientific literature they leave behind. I say that those papers should be flagged as problematic but not purged from the literature.

I think science funding is in trouble for lots of other reasons, and that has very little to do with the (relatively rare) fraudster. Actually, I don't think the problem is that scientific funding is in trouble. I think the problem is in the distribution of that funding. But, again, I don't think the distribution is much affected by fraud. Does anyone know the actual numbers of public $ spent on research that turns out to be fraudulent relative to overall $?

And what happens when people aren't bothering to replicate a paper, just make medical decisions based on its results? A doctor may see a paper, think "hmmm, that makes sense," change their pattern of practice in some large or small way (whether to send people with a certain condition to physical therapy, or what painkillers to prescribe), and not notice a later paper that finds problems, or a retraction.

I'm thinking of the series of papers, all by the same person, claiming that NSAIDs were more effective for chronic pain than narcotics. The results were fabricated to "prove" the result the guy wanted. They have since been retracted, but how many people saw the retraction? Yes, it was in some newspapers, but does your doctor read the newspaper every day? There are almost certainly doctors who saw one or more of the original papers, and told patients that no, they couldn't have/didn't need morphine or codeine or fentanyl, here's some ibuprofen, but aren't aware of the retraction. And a patient who comes in now and says "doc, I read in the Globe that ibuprofen really isn't as good, so can you prescribe me some vicodin?" might get it, or might be dismissed as a drug-seeker (and told, more politely, that the Globe isn't a medical journal).

Vicki,

It is, unfortunately, the attitude that science, more than any other human activity, is self-correcting, self-policing, which allows misconduct to fester and hence to cause much greater damage than otherwise would occur.

A permanent, cumulative list of all retracted and corrected papers should be readily available to all of us on PubMed, the ISI Web of Science, etc.

By Anonymous (not verified) on 30 Mar 2010 #permalink

Vicki, I would say that anybody who makes an important medical decision on the basis of a single paper (or set of papers from the same group) should probably not practice medicine. Even perfectly valid, well-done studies are sometimes wrong, and anybody engaged in a field ought to know that. That's the whole reason there are so many follow-up studies and other checks, and that's partly why it takes many years for a discovery to filter into medical (or other) practice.

It seems now that even NRG agreed, for the first time, to retract from one of its journals an article that contained a plagiarized paragraph, though the editors, in explaining their action, chose to blunt the severity of the misconduct.

http://www.the-scientist.com/blog/display/57267/

Second, is whether we are wasting money, time, and effort because some doofus screwed up or lied. Again, I think there is no doubt that we will.

The amount of money, time, and effort that are wasted "because some doofus screwed up or lied" in a published paper is orders of magnitude smaller than the amount of money, time, and effort that are wasted for all sorts of other reasons: poorly designed experiments, incompetent scientists, inadvertent mistakes, misguided hypotheses, etc.

While there is surely reason to distinguish the former from the latter from an ethical standpoint, from a practical standpoint the former is quantitatively overwhelmed by the latter.

What's $100,000 in bribe money hidden in the freezer of a corrupt politician compared to the millions in no-bid contracts given to a company that the US Vice President used to run as its CEO? Let's ignore Mr. Jefferson and maybe even Mr. Cheney, since Wall Street cheated us all out of much more money than thousands of Jeffersons and Cheneys would ever do.

The tendency to ignore or minimize the negative effects of misconduct in any field, just because its measured monetary damage is supposedly small, completely misses all the other negative effects of misconduct on our society.

By Anonymous (not verified) on 04 Apr 2010 #permalink