When human subjects protection stifles innovation

The other day, I happened across an Op-Ed article in the New York Times that left me scratching my head at the seeming insanity of the incident it described. The article, written by Dr. Atul Gawande, author of Complications: A Surgeon's Notes on an Imperfect Science and Better: A Surgeon's Notes on Performance, described what seemed on the surface to be an unbelievable travesty:

In Bethesda, MD, in a squat building off a suburban parkway, sits a small federal agency called the Office for Human Research Protections. Its aim is to protect people. But lately you have to wonder. Consider this recent case.

A year ago, researchers at Johns Hopkins University published the results of a program that instituted in nearly every intensive care unit in Michigan a simple five-step checklist designed to prevent certain hospital infections. It reminds doctors to make sure, for example, that before putting large intravenous lines into patients, they actually wash their hands and don a sterile gown and gloves.

The results were stunning. Within three months, the rate of bloodstream infections from these I.V. lines fell by two-thirds. The average I.C.U. cut its infection rate from 4 percent to zero. Over 18 months, the program saved more than 1,500 lives and nearly $200 million.

Yet this past month, the Office for Human Research Protections shut the program down. The agency issued notice to the researchers and the Michigan Health and Hospital Association that, by introducing a checklist and tracking the results without written, informed consent from each patient and health-care provider, they had violated scientific ethics regulations. Johns Hopkins had to halt not only the program in Michigan but also its plans to extend it to hospitals in New Jersey and Rhode Island.

Having allowed myself, in my usual inimitable way, to be distracted from this article to blog about "natural cures" and some antivaccination lunacy, I found that fellow ScienceBloggers Revere and Mike the Mad Biologist had beaten me to the discussion, one doing a good job and one using the incident as nothing more than a convenient excuse to indulge his penchant for attacking the Bush Administration. Now don't get me wrong. Over the last seven years, I've come to detest the Bush Administration, which combines hubris and incompetence into a toxic brew unlike any that I've seen since I started following politics some 30 years ago. No one but our current President could have altered my politics so quickly. However, it's definitely going overboard to blame this incident on the Bush Administration. More than likely it would have happened no matter who was President. The reason is that, as Revere realizes, this incident is far more consistent with government bureaucracy run amok: risk aversion combined with the natural tendency of bureaucracies to widen the scope of their mission and the areas that they regulate. That tendency is largely independent of the executive or legislative branches of government and appears to be common to virtually all government agencies.

There is no doubt that the Office for Human Research Protections (OHRP) and the Institutional Review Boards (IRBs) that operate under its rules (or institutions very much like them) are absolutely essential to the protection of human subjects. In the decades following the horrific medical experiments performed on prisoners by scientists in Nazi Germany and physicians in Japan, as well as disturbing experiments performed in the United States itself, such as the Tuskegee syphilis experiment, it became clear that rules for the protection of human subjects needed to be codified into law and an office set up to enforce those protections. Thus was born in 1979 (unbelievably late, I know) a document entitled "Ethical Principles and Guidelines for the Protection of Human Subjects of Research" (otherwise known as the Belmont Report). Based on the Belmont Report, the Common Rule was codified in 1991 (less than 17 years ago!) and presently serves as the basis for all federal rules governing human subjects research. All federally funded research must abide by the Common Rule, and many states have laws requiring that even human subjects research not funded by the federal or state government abide by the Common Rule, which also regulates the makeup and function of IRBs.

So far, so good. Unfortunately, like all government bureaucracies, the OHRP has had a tendency in recent years to insert itself into areas that it formerly left alone. Indeed, over a year ago, in response to an article in Inside Higher Ed, I wrote about this very problem: the recent tendency of IRBs and the OHRP to expand their purview in ways that are increasingly bizarre and arguably do not further their mission of ensuring the protection of human subjects involved in clinical research. For example, IRBs were requiring researchers to get approval for projects where the chance of any sort of harm coming to the subjects was so vanishingly small that requiring IRB approval bordered on the ludicrous. I'm talking about examples like these:

  1. A linguist seeking to study language development in a preliterate tribe was instructed by the IRB to have the subjects read and sign a consent form before the study could proceed.
  2. A political scientist who had bought a list of appropriate names for a survey of voting behavior was required by the IRB to get written informed consent from the subjects before mailing them the survey.
  3. A Caucasian PhD student, seeking to study career expectations in relation to ethnicity, was told by the IRB that African American PhD students could not be interviewed because it might be traumatic for them to be interviewed by the student.
  4. An experimental economist seeking to do a study of betting choices in college seniors was held up for many months while the IRB considered and reconsidered the risks inherent in the study.
  5. An IRB attempted to block publication of an English professor's essay that drew on anecdotal information provided by students about their personal experiences with violence because the students, though not identified by name in the essay, might be distressed by reading the essay.
  6. A campus IRB attempted to deny an MA student her diploma because she did not obtain IRB approval for calling newspaper executives to ask for copies of printed material generally available to the public.

In light of examples such as the ones documented above in a report by the American Association of University Professors (whose link, sadly, appears to have expired), what happened at Johns Hopkins University over this infection control quality improvement (QI) initiative does not appear so beyond the pale or surprising. After all, there were actual patients involved here. As tempting as it is to label the behavior of the OHRP as idiotic, boneheaded, and incomprehensible, the cancellation of this research becomes somewhat more understandable if one keeps in mind the tendency of bureaucracies to interpret rules in the most conservative and expansive way possible, often with the input of lawyers who tell them to do everything possible (whether it makes sense or not) within the very letter of the law to minimize risk. Even so, I see this as a case where caution and increasingly hidebound thinking overruled even the most basic common sense, not to mention science. Indeed, if this ruling stands, I pity my poor colleagues trying to do outcomes or QI research, which often involves just this sort of thing: setting up guidelines based on what we already know to be best practices and then observing whether implementing those guidelines improves overall outcomes in the treatment of different diseases.

There are multiple reasons to conclude that the OHRP overreached in this case to a ridiculous extent, and much of the trouble comes from a failure to understand (or accept) the difference between tinkering, innovation, and research, where arguably guidelines could be viewed as "tinkering." In fact, it could be argued that these guidelines aren't even tinkering. First, let's look at the OHRP's rationale for its action:

The government's decision was bizarre and dangerous. But there was a certain blinkered logic to it, which went like this: A checklist is an alteration in medical care no less than an experimental drug is. Studying an experimental drug in people without federal monitoring and explicit written permission from each patient is unethical and illegal. Therefore it is no less unethical and illegal to do the same with a checklist. Indeed, a checklist may require even more stringent oversight, the administration ruled, because the data gathered in testing it could put not only the patients but also the doctors at risk -- by exposing how poorly some of them follow basic infection-prevention procedures.

Yes, I do have to admit that there was a certain warped logic to it, the sort that could only seem compelling from within a protective bubble, safely isolated from the real world. Here's the main problem. As MedInformaticsMD pointed out, nothing on this checklist was anything that couldn't be found in any basic introductory surgical or medical text. Making sure that the physician washes his hands and gowns up before doing an invasive procedure such as inserting a central venous catheter? How radical! How dangerous! Come on, this is Infection Control 101, the remedial session for morons. It's the sort of thing that was first described by Ignaz Semmelweis more than 150 years ago. There's nothing "experimental" about the intervention, at least not with respect to patients. Rather, it's just a test of whether altering the system to require physicians to do what they know they should do anyway would produce a measurable decrease in infectious complications. As a physician, I can understand the concern that such a study might provide ammunition to trial lawyers to go after physicians or hospitals that may not have been as vigilant as they should have been about infection control. I can also understand how some physicians might view such guidelines as intrusive or mindless. (I myself have said on occasion that once there's a protocol, common sense gets thrown out the window.) However, neither of these is reason enough for the OHRP to stop such a study. Moreover, from the perspective of protecting patients, this was nothing more than a chart review, which is usually considered very low risk research, especially when patients' identifying information is anonymized. As Dr. Roy Poses put it:

The decision to shut down this observational research project appeared to be extreme and based on, to be charitable, exceedingly narrow and nit-picking ground. The data collected did not appear to be sensitive; there was no question about protection of its confidentiality; the QI intervention could have been carried out without associated research and without patient informed consent (since the intervention affected physicians directly, not patients); and the study was apparently approved by local institutional review boards (IRBs).

Worst of all, the action of the OHRP, if it stands as a precedent, is likely to have a chilling effect on huge swaths of the discipline known as outcomes research. Here's where Mike, despite his gratuitous and (in this case, at least) probably unjustified Bush-bashing, made one good point. Since the OHRP's main objection seemed to be that investigators had not obtained the patients' informed consent to use information from their medical records, here is a potential outcome of the decision:

Not only is this an awful decision as it relates to this particular program, and the potential to prevent 30,000 people from dying annually, but as construed, almost any public health intervention to reduce contamination that is not disclosed to patients will be shut down. What happens if a patient decides to object to this hospital-wide study? There are a lot of patients out there, and some of them are fucking morons. Is a data collection pilot program subject to this (beyond the usual HIPAA and IRB concerns)?

Actually, what would happen is that that patient's data could not be included in the study, nor could that of any other patient who refused to sign informed consent. While this wouldn't shut down the study for an entire hospital, it would have the potential to introduce unknown biases that would be difficult, if not impossible, to control for. At a minimum, it would increase the cost and difficulty of such studies, meaning that fewer studies of this type would be done. Moreover, as Jim Sabin pointed out, there are ways of ethically doing such quality improvement or outcomes research that do not require obtaining informed consent from each patient, nurse, and doctor, as well as criteria for differentiating systems interventions from research. The OHRP's action is even more bizarre when you consider that the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) requires surveillance and tracking of nosocomial infections as the basis for implementing evidence-based infection control programs in hospitals. In other words, inadequate surveillance for nosocomial infections will result in loss of JCAHO accreditation. It would not be unreasonable for a hospital to justify such a program ethically as part of its ongoing efforts to decrease the rate of line infections and to evaluate whether its interventions are working.
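To come back to the bias issue for a moment: for the statistically inclined, here's a minimal toy simulation (in Python) of how consent-based exclusion could skew the numbers. Every figure in it is invented purely for illustration; the point is just that when willingness to consent correlates with how sick a patient is, the infection rate observed among consenting patients no longer reflects the true rate:

```python
import random

random.seed(0)

# Toy model: sicker patients have a higher infection risk AND are less
# likely to consent to having their charts reviewed. Every number here
# is invented purely for illustration.
N = 100_000
true_infections = 0
observed_patients = 0
observed_infections = 0

for _ in range(N):
    sicker = random.random() < 0.3          # 30% of patients are sicker
    p_infection = 0.08 if sicker else 0.02  # sicker patients: higher infection risk
    p_consent = 0.50 if sicker else 0.90    # sicker patients: less likely to consent
    infected = random.random() < p_infection
    true_infections += infected
    if random.random() < p_consent:         # only consenting patients' charts count
        observed_patients += 1
        observed_infections += infected

print(f"True infection rate:     {true_infections / N:.2%}")
print(f"Observed infection rate: {observed_infections / observed_patients:.2%}")
# The observed rate understates the true rate (~3.1% vs. ~3.8% with these
# numbers) because the highest-risk patients are underrepresented among
# those who consented.
```

With these made-up numbers, the observed rate understates the true rate by nearly a fifth, and in real life you rarely know how consent correlates with risk, which is exactly why such biases are so hard to control for.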

Although I do not discount the possibility that there may have been political interference or that, as Maggie Mahar speculated, someone "was worried that the checklist program would draw too much attention to just how prone to error our healthcare system is," barring further evidence, my take is that this is far more likely just another incident consistent with the institutional tendency of the OHRP and IRBs to expand their reach and interpret rules ever more conservatively, to the point where ridiculous decisions like this one come about. A less charitable way of putting it would be, "Never blame malice when incompetence will explain an action."

I would be the last person to say that human subjects research should be easy. Ethically performed research can never be entirely easy, and there will always need to be strong safeguards to prevent another Tuskegee experiment. However, making compliance with regulations ever harder and more onerous eventually reaches a point of diminishing returns, adding little to the protection of patient safety while making research so much more costly and time-consuming that it actually causes harm, usually by allowing more deaths or adverse outcomes to occur while research on interventions drags on interminably. It isn't always easy to tell where the proper balance lies, but in this case it was a no-brainer. Shutting down this infection control initiative was bureaucratic boneheadedness at its most egregious.


A campus IRB attempted to deny an MA student her diploma because she did not obtain IRB approval for calling newspaper executives to ask for copies of printed material generally available to the public.

Is this for real? Because, if so, oh my word. That's even beyond parody of bureaucracy run amok. If somebody tried to put that in a movie that parodied bureaucracy, critics would have written it off as not humorous, over-the-top, and implausible.

It was documented in the AAUP report. Unfortunately, the original link to the report, published in October 2006, appears to be dead, and the Wayback Machine couldn't resurrect it. I may have saved it; I'll check my hard drive.

Orac, have you volunteered for your institution's IRB? You would be a wise addition, I suspect!

The problem that these cases illustrate is not that IRBs are a bad idea, but that their members are sometimes people who enjoy saying "no" to good ideas.

In the case of the (great) checklist issue, it seems to be a bit of a grey area whether informed consent was required or not. Clearly it was a controlled experiment, but there was (in some sense) no treatment! The people getting treated with a checklist were getting exactly the same treatment as those without a checklist. The experiment was run on the medical practitioners, not on the patients. On the other hand, since the dependent measure was patient outcomes, it's pretty clear that confidentiality issues needed to be addressed somehow, although I'm not sure that an IRB would be the appropriate place, as normal outcome monitoring in the hospital does not require IRB approval.

The problem that these cases illustrate is not that IRBs are a bad idea, but that their members are sometimes people who enjoy saying "no" to good ideas.

The problem is that this kind of committee structure tends to be set up in a way that attracts those sorts of people, and it takes constant vigilance to keep committees like that from devolving into insane nitpicking micromanaging worshipers of absolute interpretation of rules (and committee power) over common sense.

@Rob: Yes, exactly.

Orac,

It sounds like you're focusing on the aspect of informed consent by the patients. I did the same thing at first.

After further thought, I realized it's probably more about informed consent by the physicians. They're the real subjects of this study.

I completely agree that such 'research' should not require informed consent. Otherwise, we reach the absurd conclusion that a federally funded hospital can arbitrarily institute whatever checklists they think will help, but cannot attempt to monitor whether the checklists do help (unless the MDs agree to let them).

OTOH, it's easy to frame this as a behavioral modification study on MDs, which makes it harder to justify exactly why informed consent shouldn't be required.

For example, suppose an obesity researcher wants to give overweight subjects a diet checklist to tape on their refrigerator, and then wants to monitor their weight and eating habits to see if they improve. Wouldn't the researcher be expected to get informed consent from the subjects? If so, what's the basis for requiring IC in that case, but not in the Johns Hopkins case?

Note - I'm not arguing that the Johns Hopkins case should require IC. But I can see a need for an objective criterion to distinguish it from seemingly similar situations where an IC is required. (Either that, or IC shouldn't be required in those other situations either.)

Can I just make sure I have understood this correctly?

A checklist was introduced into Michigan hospitals listing the steps medical staff should take to reduce the risk of infection in patients; the checklist covered no new procedures but rather listed those considered to be best practice. Records were kept of infection rates, presumably having been de-personalised. Of course infection rates should already be monitored.

How is this an ethical issue at all?

By Matt Penfold (not verified) on 03 Jan 2008

I had a run-in with an IRB about ten years ago, when I was working on my thesis for Library Science on methods to protect rare book collections and archives. As part of the project I wanted to interview the directors of several archives about the steps their facilities had taken in this area. To my surprise, I had to get IRB approval. I got this long list of questions I needed to answer, none of which applied to my project (after all, I wasn't studying humans per se, I was studying library buildings!). I was stumped--I wasted most of a month struggling with the IRB application, trying to figure out how to answer the questions.
Eventually my advisor told me to submit whatever I had, and the IRB handwaved it, but it was a frustrating experience nonetheless.

By Doug Hudson (not verified) on 03 Jan 2008

I look forward to the day when one needs informed consent to ask for informed consent. Not.

All these scenarios sound like they are actually deleted scenes from Brazil.

Just in case you want it, Orac, here's the current link to the AAUP report:
http://www.aaup.org/AAUP/comm/rep/A/humansubs.htm

As I noted in my comment at Effect Measure, the thing that really suggests the Hopkins case is a power grab on the part of OHRP is that JCAHO requires nosocomial infection surveillance and ongoing study of infection control measures as part of accreditation. Although JCAHO isn't a regulatory agency, maintaining accreditation is very important to a facility's community reputation and might well have implications in civil court - since JCAHO explicitly defines a standard of care, a facility that does not perform to that standard may be found negligent by the court. I am not sure whether a hospital administrator would consider it worse to be operating in contravention of federal regulation on human subjects or in contravention of "a reasonable standard of care."

The reason is that, as Revere realizes, this incident is far more consistent with government bureaucracy run amok: risk aversion combined with the natural tendency of bureaucracies to widen the scope of their mission and the areas that they regulate.

Is there any alternative to OHRP and the IRBs? Or will it just require endless diligence to prevent this inane bureaucratic bumbling from turning medical research into a nightmare?

I ran into a variation of this a couple of years ago: I got a letter from a researcher explaining the experiment they wanted to do, with a form attached requesting permission for Alberta Health Care (the socialized medicine we have) to release my address to the researcher. Not my medical records or anything like that, just my address--which the researcher obviously already had in order to send me the form in the first place. All I could do was shake my head.

"...records were kept of infection rates, presumably having been de-personalised. Of course infection rates should already be monitored.

How is this an ethical issue at all?"

This case highlights the distinction made between audit and research (at least it is a distinction we draw in the UK). The rules for studying outcomes within a hospital are much more lax if the study is not for publication but for in-house use.

Harlan and Rob are right; there is considerable inconsistency from board to board depending upon the composition of the committee at the time. Anthropologists often have difficulties with IRBs because their research involves people, but not in the same way that medical or psychological research necessarily does. Frequently, denial of a project has more to do with disapproval of the disciplinary outlook or methods than with the ethical issues at hand. To combat this problem, a great deal of time may be spent educating the members of the committee about, say, anthropological research, and then the membership changes, and everyone's back to square one.

I'm curious that the local IRBs approved the research but then it got stopped at a higher level. How much does this reflect the aforementioned inconsistency -- as well as other issues raised in the post and in comments?

While this is distressing and asinine, there may be some pragmatic work-around options. One is to institutionalize the checklist practices via NURSING policies and procedures. Nursing flies under the radar of IRBs and the OHRP; it is assumed to be integrated into hospital operations instead of being recognized for its professional status and practice independence. One might take advantage of that oppression and slide the checklists in as interdisciplinary standards of practice. Another might be to capitalize on the JCAHO/OHRP power struggle, as jen_m illustrated, and highlight obstructionism via IRB and OHRP decisions to the Joint. A third might be to appeal to CMS for intervention, as it is beginning to deny payments for the results of complications.

Thank you for writing about this, Orac. I was distressed at reading Gawande's column, and the blogosphere has been mostly silent about this issue.

The problem is that this kind of committee structure tends to be set up in a way that attracts those sorts of people, and it takes constant vigilance to keep committees like that from devolving into insane nitpicking micromanaging worshipers of absolute interpretation of rules (and committee power) over common sense.

I see the exact same thing in software engineering. There is a type of engineer who writes to the spec and delights in nitpicking conformance to that spec -- totally ignoring whether or not the damn thing actually works. Sitting in change control board meetings with these sorts of people ranges from entertaining to painful. I am not surprised to hear that such people exist in other bureaucracies as well.

By Calli Arcale (not verified) on 03 Jan 2008

FYI, I found that you can read several letters from OHRP to Johns Hopkins University (JHU) and to Michigan Health & Hospital Association (MHHA) at this site. See the letters dated July 19 & Nov 6. Letters from JHU or MHHA to OHRP are referenced, but don't seem to be posted.

The July 19 letter contains OHRP's justification for its ruling. Basically, it boils down to a determination that the study didn't meet any of the legal definitions of exempt research, and that it consisted of planned interventions in patients' healthcare and physicians' behavior.

The letter doesn't address the fact that the "interventions" consisted of implementing best practices recommended by the CDC, but it's not clear whether JHU raised that issue in its defense, either. The letter does state that "the subjects of the research were both the healthcare providers at the participating ICUs and their patients" (near the end of page 6).

You can also read the study authors' publication here.

Sadly, I can now see even better why OHRP reached their decision. IMO, the JHU study was indeed designed, conducted, and described very much as an interventional research study, and it doesn't obviously meet any of the legal categories for exempt research. I think there should be an exemption for such work, but there doesn't seem to be one in the current regs.

When I read the Op-Ed I scanned the blogs for discussion and found none. I concluded that it was an overblown article. It is pleasing to see it publicised and shown to be true. But what now amazes me is the discussion above. Most of it (not all) concentrates on the minutiae of interpreting informed consent. Talk about Nero fiddling while Rome burns. The study/trial/experiment SAVED 1,000 lives a year, to say nothing of the $130M. At this stage the OHRP should be finding ways to cut the red tape and implement this system.
If they think there is a rationale for allowing 1,000 people a year to die, they should publicly publish it.
It reminds me of a quote from an author, probably Robert Heinlein:
"There are two types of army paymaster: one who will work the system to get you what you should have, and the other who will work the system to stop you."

As a government employee (in my case the New Zealand government) I can't say I'm really surprised by this. Most government monitoring can degenerate into this unless you are really careful. The trouble is that government policy objectives are often nebulous, with no clearly defined method of measuring outcomes. This is especially true if a policy is simply created for soundbite purposes, rather than out of a sober consideration of a problem.

In the absence of a means of measuring outcomes, a monitoring group will often get hung up on process. If the process eclipses the outcome, then you get people being called out for failing to dot their i's and cross their t's.

The only way I can think to combat this is to stress the importance of the underlying objective--drill into your IRBs that at the end of the day the goal is to stop researchers abusing their subjects. If it is not clear that abuse would occur, then let it go. Other than this, all you can do is what you do for any case of government failure: somewhat lower your estimate of government's ability to solve policy problems.

Good article. That system is surely broken, but I suspect it does have a lot more to do with monitoring doctors than it does with consent from patients, as someone above has noted.

So stop tracking the results; then it's no longer a study, it's just enforcing current policy for quality of patient care. It's criminal that we even need to be reminded to wash our hands.

By PlanetaryGear (not verified) on 05 Jan 2008

On the flip side--a study that surprisingly got through several IRBs--I was wondering if you read this article in Science; the beginning of the abstract is pasted below.

In a randomized controlled trial, we compared abandoned children reared in institutions to abandoned children placed in institutions but then moved to foster care. Young children living in institutions were randomly assigned to continued institutional care or to placement in foster care, and their cognitive development was tracked through 54 months of age.

Rather horrifying! Can you imagine this experiment being performed in a first- (or second-)world country in the 21st century? But the title of the paper is:

Cognitive Recovery in Socially Deprived Young Children: The Bucharest Early Intervention Project

Is it now OK to perform this experimental intervention, since it's in Romania? The same issue of Science includes a Policy Forum article:

The Ethics of International Research with Abandoned Children.
Joseph Millum and Ezekiel J. Emanuel
Science 21 December 2007: 1874-1875.
Research with abandoned children does not necessarily involve exploitation.

Any thoughts?

The authors of the study, Nelson et al., do have a lengthy discussion of ethical issues within the paper (e.g., the secretary of state for child protection in Romania invited them to do the study, the IRBs at Minnesota, Tulane, and Maryland [the PIs' home institutions] approved the study, etc.). However, to me it seems to set off alarm bells in terms of ethics. I'm definitely not a developmental psychologist, but this statement seems odd:

Clinical equipoise is the notion that there must be uncertainty in the expert community about the relative merits of experimental and control interventions such that no subject should be randomized to an intervention known to be inferior to the standard of care (27). Because of the uncertainty in the results of prior research [??], it had not been established unequivocally that foster care was superior to institutionalized care across all domains of functioning... [Is the superiority of foster care really in doubt?]

Is it safe to assume that Gawande has accurately described what transpired? Has anybody checked to see what OHRP has to say about their reasoning in the case? Due diligence should precede high dudgeon.

By bob koepp (not verified) on 15 Jan 2008

I worked on a large university IRB for several years, and I am intimately acquainted with the problems you describe. As some said earlier, there is WIDE variation in how IRBs interpret 45 CFR Part 46, but many difficulties can be worked through by constantly thinking about the following questions:
1. What is the definition of "research"? Any activity designed to advance knowledge. Under this simple definition, their activities were definitely research. Why did they bother organizing the study? Because you don't know exactly what the outcome will be.

2. Who is a human subject? Any person from whom data is gathered to evaluate some hypothesis.

Now, this is where the OHRP goes wrong IMHO. Perhaps the researchers went wrong there as well.

I think the subjects weren't the patients. The subjects were in fact the doctors and nursing staff. They are the ones who needed to give consent. It is the infection rates associated with their activities that constitute the DV (dependent variable) of this study.

Now, you say, surely the patients were the subjects? No. As consenting patients in these places, they were assured that they would be given care meeting the standards of the field, and they were, in all cases. The data associated with their treatment, gathered as part of the normal activities of a hospital, are a normal and allowable part of the hospital's own record keeping and institutional research.

If the patient data are then used to answer an additional question about the outcomes of practices used by the doctors, you could consider the doctors the subjects, because it is their performance that is being evaluated.

If you still consider the patients the subjects, then treat this as a review of archived records, a category of research activity that is granted an exemption under 45 CFR 46.

Finally, my understanding of this study is that the control condition (no checklist) constituted the current standard of care, while the checklist condition merely created a mechanism for enforcement of a higher standard. Arguing that either condition constituted an unacceptable risk to the subjects is just silly. This last one is important, because it is the consideration of risk that is an overriding concern for IRB reviews.

By boojieboy (not verified) on 15 Jan 2008

The OHRP has responded. It seems someone complained about the research. I have a discussion about it at mousomer.wordpress.com, which I'd like to see your comments on.