What the HRC should have done

By John Pickering 18/09/2013

The system is broken. It is no better than a lottery. The Health Research Council tacitly acknowledged this last year when it introduced a lottery to its grant funding round. The lottery was for three grants of $150,000 each. These “Explorer Grants” are available again this year. The process went thus: the HRC announced the grant and requested proposals; proposals were required to meet the simple criteria of being transformative, innovative, exploratory or unconventional, and having potential for major impact; proposals were examined by committees of senior scientists; and all that met the criteria were put in a hat and three winners were drawn out.

116 applications were received and 3 were funded (2.6%!!!). There were several committees of 4–5 senior scientists. Each committee assessed up to 30 applications. I’m told it was a couple of days’ work for each scientist. I’m also told that, not surprisingly given we’ve a damned good science workforce, most proposals met the criteria. WHAT A COLOSSAL WASTE OF TIME AND RESOURCES.

Here is what should have happened: all proposals should have gone immediately into the hat. Three should have been drawn out. Each of these three should have been assessed by a couple of scientists to make sure it met the criteria. If not, another should have been drawn and assessed. This would have taken about a tenth of the time and would have enabled results to be announced months earlier.

Given that the HRC Project grants have only about a 7% success rate, and that the experience of reviewers is that the vast majority of applications are worthy of funding, I think a similar process of randomly drawing and then reviewing would be much more efficient and no less fair. Indeed, here is the basis of a randomised controlled trial which I may well put to the HRC as a project proposal.

Null Hypothesis:  Projects assessed after random selection perform no differently to those assessed using the current methodology.

Method: randomly divide all incoming project applications into two groups. Group 1 (current assessment methodology): assess as per normal, aiming to assign half the allocated budget. Group 2 (random assessment methodology): randomly draw 7% of the Group 2 applications; assess them; draw more to cover any which fail to meet the fundability (only) criteria; fund all which meet these criteria, in the order they were drawn, until half the allocated budget is used.
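The Group 2 process above can be sketched in a few lines of Python. This is purely illustrative: the function name, the application records, the criteria check and the budget figure are all my own placeholders, not anything specified by the HRC.

```python
import random

def draw_and_assess(applications, meets_criteria, budget):
    """Sketch of the proposed Group 2 process: draw proposals at random,
    assess each one only after it is drawn, and fund the fundable ones
    until the budget is spent.

    `applications`, `meets_criteria` and `budget` are hypothetical
    placeholders; each application is a dict with a "cost" field.
    """
    pool = list(applications)
    random.shuffle(pool)                # the "hat"
    funded, spent = [], 0
    for app in pool:
        if spent >= budget:             # e.g. half the allocated budget
            break
        if meets_criteria(app):         # assessment happens only now
            funded.append(app)
            spent += app["cost"]
    return funded
```

The key point the sketch makes is that assessment effort scales with the number of proposals funded, not the number submitted.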

Outcome measures: I need to do a power calculation and think about the most appropriate measure, but it could be either a blinded assessment of final reports or a metric such as the difference in number of publications.

Let’s hope that lessons are learnt when it comes to the processes used to allocate National Science Challenges funds.

Tagged: Explorer Grants, funding, grants, Health research council, HRC, Lottery, National Science Challenges, Project grants, Random, Randomised controlled trial

Responses to “What the HRC should have done”

  • Hi John

    I wrote one of those 116 applications, which were supposed to be more like a Marsden than a normal HRC proposal. Alas, I wasn’t one of the 3 funded, but as an applicant I did receive a very strongly worded letter saying that, far from all the applications meeting the criteria, only 16% did. I was one of those 16%. The tone of the letter was one of exasperation that so many people had just submitted something very much like their usual proposals for what was clearly not business as usual for the HRC.

    I still like your idea though, but wouldn’t hold your breath!

  • Interesting, Siouxsie. The info I had was that assessors thought many more met the criteria (two-thirds was talked about), but some assessors were “looking for reasons” to exclude. Certainly, I don’t buy that 84% of applicants *really* didn’t meet the criteria – I don’t think that (collectively) we’re that bad at writing applications.
    Anyway, there’s got to be a better way!

  • John

    Let’s take that a step further…
    Get all the candidates to submit their business cards to the lottery; then, if their card is drawn, they get to write the application. A lot less effort all round.

    It’s better than “Sorry your application wasn’t one of those drawn…actually nobody bothered to read it”

  • Siouxsie is correct; the vast majority of the 116 proposals did not meet either one or both of the two criteria required of Explorer Grant proposals – being potentially transformative, and being exploratory but viable. While John is broadly correct in saying that NZ has a “damn good science workforce”, in this particular case many did not perform. Many ideas were good, but just not appropriate for the Explorer Grant scheme.
    In one sense the first round of Explorer Grants was disappointing – too many applicants submitted business-as-usual ideas. Our experience at the HRC, however, is that it often takes a year or two for any new scheme to settle in – time for applicants to calibrate their proposals to what is required, and time for the HRC to clearly communicate expectations.
    Finally, to John’s thoughts about a randomised trial of grant review processes – there should, of course, be an evidence base on which processes are run. The HRC regularly reviews and evaluates various aspects of its granting processes, although we have not carried out a trial of the type John suggests. A research proposal to the HRC that sought to investigate grant review processes would fail, of course; we must only fund research that delivers health outcomes.

  • @Robin Olds. Another thought: assuming that you do improve HRC communication and applications better fit the criteria, then even if only 50% of the applications fit, there is a 95% probability of finding 3 valid applications in a random draw of 10. That would mean a single assessing committee assessing far fewer applications. Please do think seriously about my “draw first, ask questions later” method – we can’t afford to waste our senior scientists’ time.
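    The arithmetic behind that 95% figure can be checked with a short binomial calculation (treating each draw as independent, which is a close approximation to drawing 10 without replacement from 116 applications):

    ```python
    from math import comb

    def prob_at_least(k, n, p):
        """P(at least k successes in n independent draws, success prob p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # If only half the applications meet the criteria (p = 0.5), the chance
    # that a random draw of 10 contains at least 3 valid ones:
    print(round(prob_at_least(3, 10, 0.5), 3))  # 0.945, i.e. about 95%
    ```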