Before I got into genomics, I spent some time in science and health policy. On a couple of occasions, I was invited to participate in a round table/white paper thingee where we were supposed to offer suggestions to NIH and other funding agencies. We would make recommendations, program officers would agree with those recommendations, and then reviewers would... fund the same old shit.
That's why I've advocated more specific RFAs that allow NIH to set targeted priorities:
My experience has been that with very targeted calls for proposals, there are far fewer proposals submitted, and it's much easier to flat out reject them because many proposals are not germane to the funding objectives. This means that NIH program officers have to be far more active in defining specific research objectives than they have been--to a considerable extent, NIH is placing this responsibility on reviewers who often lack knowledge of the larger institutional objectives. That needs to be changed.
That's why I was interested in NIH's new scoring system that would set up scores in such a way that program officers (and the council) would have an opportunity to use their discretion:
With less information--that is, fewer opportunities to discriminate on pretty meaningless stuff in terms of scientific merit and capabilities--I think there will be more grants that are very tightly bunched together, meaning that either program officers and the review council will select grants based on statistically ridiculous differences (although one could argue that's already happening), or else funding decisions will shift somewhat to NIH officials.
I'm not entirely sure that's a bad thing--if there's a downside to the R01 mechanism as currently construed, there's little accountability for panels that choose grantees stupidly.
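To see why tight bunching is the expected outcome, here's a toy simulation (my own illustration, not NIH data): it assumes a 1-9 integer reviewer scale averaged over three reviewers and scaled to 10-90, roughly like the new impact score, with a made-up noise level. Three averaged integers can only produce 25 distinct impact scores, so hundreds of applications inevitably pile up in ties.

```python
# Toy simulation (not NIH data): a coarse 1-9 integer scale, averaged
# over three reviewers and scaled to 10-90 (roughly like the new NIH
# impact score), bunches applications into ties. Noise level is made up.
import random
from collections import Counter

random.seed(1)

def impact_score(true_quality, n_reviewers=3, noise=1.0):
    """Average noisy integer 1-9 reviewer scores; lower is better."""
    scores = []
    for _ in range(n_reviewers):
        s = round(true_quality + random.gauss(0, noise))
        scores.append(min(9, max(1, s)))
    return round(10 * sum(scores) / n_reviewers)

# 500 applications whose "true" merit sits in the fundable range.
pool = [impact_score(random.uniform(2, 5)) for _ in range(500)]
ties = Counter(pool)
print("distinct scores:", len(ties))          # at most 25 are possible
print("largest tie group:", ties.most_common(1))
```

However the noise is tuned, 500 applications land on at most 25 distinct scores, so somebody--program officers, council, or a coin flip--has to break large tie groups at the payline.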
Well, ScienceBlogling DrugMonkey crunches the numbers and concludes that's exactly what's happening:
I have to say I'm in favor of this approach and the outcome. I feel that in the past POs were all too willing to act as if they believed, and likely did actually believe, that "small and likely meaningless mathematical differences" in score were producing bedrock quality distinctions. I felt that this allowed them to take the easy way out when it came to sorting through applications for funding. Easy to ignore bunny hopper bias which resulted in 10X of the same-ol, same-ol projects being funded. Easy to ignore career-stage bias. Easy to think that if HawtNewStuff really was all that great, of course it would get good scores. Etc.
I like that the POs are going to have to look at the tied applications and really think about them.
I agree completely. In her book ECONned: How Unenlightened Self Interest Undermined Democracy and Corrupted Capitalism (review coming soon!), Yves Smith makes the point that economics too often is what economists want it to be, not what the rest of us need it to be. Unlike many economists, however, we are funded by the public, and we need to be accountable to the public. Removing some decision making from study sections and returning it to the NIH is a very good step.
If you're worried that this will 'politicize' science, well then, you might just have to sully yourself with political advocacy. You know, like citizens do.
And now, extra bonus video (which is tangentially appropriate):
I agree with this as well. The old scoring system gave the illusion of discernible differences when it was just noise. Funding someone with a 9.9%ile but not a 10.0%ile solely because of the 0.1%ile difference in scores was statistically, ethically, and pragmatically bankrupt.
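The noise point is easy to demonstrate with a toy model (mine, with an arbitrary noise level): score the same pool of applications twice with independent reviewer noise and compare percentile ranks. The typical application drifts by whole percentile points between the two reviews, dwarfing a 0.1-point gap.

```python
# Toy demo (made-up noise model): score the same 1000 applications
# twice with independent reviewer noise and measure how far each
# application's percentile rank drifts between the two reviews.
import random

random.seed(2)
true_merit = [random.uniform(1, 9) for _ in range(1000)]

def percentile_ranks(merit, noise=0.8):
    """Percentile rank of each application after one noisy review."""
    noisy = sorted((m + random.gauss(0, noise), i) for i, m in enumerate(merit))
    ranks = [0.0] * len(merit)
    for pos, (_, i) in enumerate(noisy):
        ranks[i] = 100.0 * pos / len(merit)
    return ranks

first, second = percentile_ranks(true_merit), percentile_ranks(true_merit)
drift = sum(abs(x - y) for x, y in zip(first, second)) / len(first)
print(f"average percentile drift between reviews: {drift:.1f} points")
```

If the same proposal's percentile moves by points, not tenths of a point, from one noisy review to the next, a 9.9-versus-10.0 funding line is deciding on noise.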
I'm no expert on the grant process... but I can see this working well enough with some minor adjustments. It actually isn't all that different in principle from how the profs I've worked with decide grades.
The scoring system is a good *first pass*. From the distribution, a dividing line between "possible" and "not a chance" can normally be discerned easily. From there, the hard part of looking at each proposal and determining where the final divider goes has to be done. For grades, it is both easier and harder, since there isn't a hard limit on the number of As you can give out.
However, the process shouldn't stop there. Proposals near the border need to be reconsidered, with more subjective criteria included. Same old stuff should get knocked down a position or two, and especially relevant stuff (or new PIs, new ideas, or just coolness) should get upgraded a bit.
In practice this is really hard if it involves rejecting proposals that made the cut... so a better model is to hold back some percentage of the funding and just add in proposals which didn't make the (slightly more strict) cut but have less easily quantified merits. Yeah, it amounts to rejecting some proposals which would otherwise meet the cut to accept some which wouldn't... but it is a lot easier to implement.
And, yeah, it could very well be abused. It is putting more power into the hands of just a few people. However, the potential for abuse is limited to the marginal cases (and only a fraction of the funding), and the possible gain in funding more new and relevant proposals is worth it IMO.
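For concreteness, here's a minimal sketch of the holdback model this comment describes. Everything in it is hypothetical: the 15% holdback, the costs, and the "flagged" field standing in for subjective merit (new PI, new idea, coolness).

```python
# Sketch of the holdback model (hypothetical names and parameters).
# Most of the budget is spent strictly by score; the held-back
# remainder goes to flagged borderline proposals that missed the
# stricter first cut.
from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    score: int     # impact score; lower is better
    cost: float    # requested budget
    flagged: bool  # subjective merit: new PI, new idea, coolness

def select(proposals, budget, holdback=0.15):
    by_score = sorted(proposals, key=lambda p: p.score)
    funded, spent, remainder = [], 0.0, []
    strict_budget = budget * (1 - holdback)
    # First pass: fund strictly by score up to the stricter cut.
    for p in by_score:
        if spent + p.cost <= strict_budget:
            funded.append(p)
            spent += p.cost
        else:
            remainder.append(p)
    # Second pass: spend the held-back funds on flagged proposals
    # that missed the first cut, still in score order.
    for p in remainder:
        if p.flagged and spent + p.cost <= budget:
            funded.append(p)
            spent += p.cost
    return funded

pool = [Proposal("A", 20, 1.0, False), Proposal("B", 25, 1.0, True),
        Proposal("C", 30, 1.0, False), Proposal("D", 35, 1.0, True)]
print([p.name for p in select(pool, budget=3.0)])  # ['A', 'B', 'D']
```

In the toy run, D (a worse score, but flagged) displaces C, which is exactly the trade-off the commenter acknowledges: a few by-the-numbers winners lose out, but only within the held-back fraction of the budget.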