Research Highlights Article
March 29, 2017
How can we make sure the best medical science gets funded?
Experts know best, but expertise has its downsides too
A microscope image of lab-grown muscle produced with funding from the National Institute of Arthritis and Musculoskeletal and Skin Diseases.
Nenad Bursac/NIH Flickr
Exploratory life science research happening now might lead to the treatment that will save your life down the road – but only if it remains funded long enough to reach a breakthrough.
Sponsors like the National Institutes of Health can’t afford to fund every promising project, so tough decisions have to be made. Last year, in the process of doling out over $2 billion in funding for life science research, NIH reviewers rejected 22,000 proposals for R01 grants.
Behind each rejection were years of preliminary research and months spent perfecting a grant application. Any one of these projects could have led to discoveries about halting brain decay or to a promising new cancer-fighting technique, but many will now struggle to find any funding.
At a time when government funding for life science research is on the chopping block, making sure we are getting the most for our money is taking on a new importance. But the time-honored process for deciding which projects receive grants – peer review of project proposals by other researchers and experts in the field – has some detractors.
It’s not obvious that reviewers, even ones with significant subject-area expertise, are able to identify the best projects at an early stage. The proposal that will lead to the next front-page medical breakthrough might be hard to distinguish from dozens of other avenues of research that will never bear fruit. The research process is so unpredictable that some have even argued for funding to be awarded by lottery once any obviously deficient projects are excluded.
This could save the thousands of hours that researchers spend applying for grants and justifying their work rather than actually doing it. It would also eliminate the bias against underrepresented groups that can creep into the process when decisions are made by a group of insiders.
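To make the lottery idea concrete, here is a minimal sketch in Python. The screening threshold, budget figures, and application fields are all invented for illustration; they are not drawn from any actual NIH process.

    import random

    def lottery_fund(applications, budget, min_score=2.0, seed=None):
        """Randomly fund screened applications until the budget runs out.

        applications: list of dicts with "id", "screen_score", and "cost" keys
        budget: total dollars available
        min_score: hypothetical cutoff excluding obviously deficient proposals
        """
        rng = random.Random(seed)
        eligible = [a for a in applications if a["screen_score"] >= min_score]
        rng.shuffle(eligible)  # the lottery: ordering is purely random

        funded, spent = [], 0
        for app in eligible:
            if spent + app["cost"] <= budget:
                funded.append(app)
                spent += app["cost"]
        return funded

    # Three hypothetical proposals compete for a $500,000 budget.
    apps = [
        {"id": "R01-A", "screen_score": 3.1, "cost": 250_000},
        {"id": "R01-B", "screen_score": 1.4, "cost": 200_000},  # screened out
        {"id": "R01-C", "screen_score": 2.7, "cost": 300_000},
    ]
    print([a["id"] for a in lottery_fund(apps, budget=500_000, seed=42)])

Past the initial screen, expertise plays no role at all in who gets funded.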
But can we do better than the proverbial monkeys throwing darts at a list of grant applications? Danielle Li, an economist at Harvard Business School, set out to study this question.
“There is a sense among some scientists that there really is no correlation between the scores and outcomes at all, that there just wasn’t expertise” coming to bear in the NIH grant review process, said Li in an interview with the AEA. She wanted to see if that was true.
In a new study appearing in the April issue of the American Economic Journal: Applied Economics, Li finds evidence that experts who are more familiar with the topic at hand really can do better – their funding decisions were somewhat better at predicting which projects would go on to earn more citations in later years.
Li looks at cases where a reviewer sitting on an NIH review committee had cited an applicant’s work in his or her own research. That citation is a sign that the reviewer is doing research closely related to the applicant’s and may know the ins and outs of the field better than the other reviewers on the panel. In those cases, presumably, the reviewer is better able to judge the merit of a project proposal.
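As a rough illustration of how such a linkage measure can be built (the data structures and names here are hypothetical, not the paper’s actual dataset), the test boils down to a citation lookup:

    # A reviewer counts as "closely linked" to an applicant if the
    # reviewer has cited the applicant's work in their own research.
    # citations maps each reviewer to the set of authors they have cited.
    citations = {
        "reviewer_1": {"applicant_A", "author_X"},
        "reviewer_2": {"author_Y"},
    }

    def closely_linked(reviewer, applicant):
        """True if the reviewer has cited the applicant's work."""
        return applicant in citations.get(reviewer, set())

    # A committee counts as linked to an application if any member qualifies.
    committee = ["reviewer_1", "reviewer_2"]
    print(any(closely_linked(r, "applicant_A") for r in committee))  # True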
The catch is that this familiarity might also make the reviewer more inclined to fund the project even when it is less worthy. Researchers might be rooting for their own narrow subfields to get more attention, buzz, and money, and might bend their evaluations accordingly. Or they may subconsciously value similar research more, simply because it is the sort of research that personally interests them.
The trouble, Li explains, is that “if you want to come back and say ‘we want all your information but we don’t want your conflicts of interest’ it becomes a much harder problem.” It’s not obvious how to disentangle the very real knowledge that subject-matter experts bring from the professional biases they carry in favor of their own fields.
One option would be to prohibit review by closely linked researchers in favor of more neutral evaluators. But Li’s analysis of several years of NIH funding decisions reveals that the benefit of having topic-area experts weigh in with their informed opinions probably outweighs the costs.
A figure in the paper shows that applications that panned out with more future citations (a measure of project quality that isn’t settled until years after the funding decision is made) were more likely to be accepted by committees that included closely linked reviewers. Apparently, first-hand knowledge of an applicant’s work allowed those reviewers to be more discerning than panelists who weren’t experts in the subfield.
Li estimates that if closely linked researchers were excluded from the process, the average quality of funded projects would decline by around 3%. There would be less bias in favor of particular subfields, but the loss of close expertise would make it harder for committees to identify the most promising subset of projects.
Still, Li emphasizes that this does not mean reviewers’ bias in favor of their own subfields is harmless, especially when it comes to welcoming new people and new perspectives into the life sciences.
“Let’s say there’s an ‘old boys club’ the way some law firms and industries used to be,” says Li. “The problem is if you’re a college student who’s interested in science and you look at this field which is very clubby, and you’re a minority or at a rural institution” you might not decide to enter the field at all. Evidence that research networks are gendered, for instance, suggests that reviewer bias could entrench historical disparities.
Professional bias on the part of NIH reviewers can be off-putting and exclusionary, even if that bias is an unfortunate byproduct of wanting the most relevant experts to review applications. Major funders like the NIH need to think creatively about how to harness the expertise of researchers without letting their biases get in the way.
◊
“Expertise versus Bias in Evaluation: Evidence from the NIH” appears in the April 2017 issue of the American Economic Journal: Applied Economics.