
Tools from finance could identify the risky research worth funding

Considering the spread of reviewers' judgements as well as the average would separate risky proposals from mediocre ones, says Jonathan Linton.

Traditionally, research funders’ decisions are based on the average scores returned by their grant panels. This ensures that proposals the entire panel favours receive support. It also means, however, that controversial projects are usually not funded, because one or more committee members feel the project lacks merit. In a competitive review process, a single pessimistic reviewer’s low score is enough to push a project’s ranking below the level at which projects get funded.

No one is arguing that poor projects should be funded. But large disagreements in a committee generally reflect fundamental differences in opinion. There is a difference between mediocre research and risky research, and, at a time of tight budgets and fierce competition, there is a widespread feeling that funders are becoming too risk averse.

Projects that spark tremendous disagreement are rarely funded. This disagreement can be a result of competing schools of thought in one field, or a result of a multidisciplinary review team in which different disciplines assess risk in very different ways.

For example, in cancer research, there are two schools of thought on how tumours are supplied with the blood that they require for rapid growth. Some cancer researchers believe that tumours stimulate the development of vasculature, while others suggest that tumours form their own pseudo-vasculature. Following the wrong model could result in missing out on a method for destroying tumours. In fact, it has been suggested that taking the incorrect approach is likely to make tumours more resistant to treatment.

Assume that a peer-review committee is evenly balanced between the two opposing schools of thought. We can expect proposals favoured by one group to score low with the other, and vice versa.

In such a situation, neither set of proposals would be funded, as their average peer-review scores would be too low compared with less controversial proposals. Research with potentially large impacts would be avoided. If, however, funders treated variation in peer reviews as identifying a disagreement worthy of resolution, research aimed at illuminating and resolving the tension between the two opposing schools would be more likely to be funded.

In other words, by basing funding decisions on reviewers’ average scores, we are not extracting as much information from the review process as we might. In the case of decisions based on peer review and expert assessment, considering the variation in scores as well as the mean could enhance the system and help clarify the nature of disagreement.

In effect, grant panels are being asked to judge the ultimate value of a research proposal, and their scores represent a proxy for this value. How can decision-making best take account of the variation in such measures of value? This is, not surprisingly, an issue that the financial world has already confronted and developed theoretical tools for handling.

An investment’s current value is a function of its market price and its volatility. Highly volatile, risky investments are more likely to dwindle into worthlessness, but also more likely to yield spectacular payoffs.

In finance, the Black-Scholes model, devised in the 1970s and recognised with the 1997 Nobel prize in economics, is used to factor volatility into pricing decisions by treating changing value as a random walk through time. The model has proved particularly useful in pricing a class of derivatives known as call options, which give the holder the right to buy a stock at a future time for a particular price.
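To make the mechanism concrete, the standard Black-Scholes price for a European call can be computed in a few lines. The figures below (prices, rate, volatilities) are arbitrary examples chosen for illustration, not numbers from this article:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option.

    S: current price of the underlying
    K: strike price
    r: annualised risk-free interest rate
    sigma: annualised volatility of the underlying
    T: time to expiry, in years
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Two otherwise identical options; only the volatility differs:
calm = black_scholes_call(S=100, K=100, r=0.02, sigma=0.10, T=1.0)
risky = black_scholes_call(S=100, K=100, r=0.02, sigma=0.40, T=1.0)
```

With these assumed inputs, the higher-volatility option is worth several times the calmer one, even though both start at the same price: the upside of a volatile asset counts for more than its downside, because losses are capped at the price paid.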

Applying the Black-Scholes model to calculate an “option price” for research projects, based on their reviewer scores, would help funders to identify and support both projects that are consistently rated very valuable and, more importantly, projects on clearly important topics that score a little lower on average but provoke significant disagreement. Funders would have a robust method for including high-risk, high-return projects in their portfolios, and maximising the impact of their spending.

R&D funding needs ways of considering the trade-offs between risk and return. Until our assessment systems support the consideration of research that involves tremendous potential outcomes—even though there is also tremendous disagreement—we will miss out on opportunities for obtaining tremendous economic and societal returns.

Taking a leaf out of finance’s book would allow us to manage varying levels of risk better, whether science, technology or innovation is under consideration. The alternative is that investment flows to low-risk projects that are easier to justify but ultimately less valuable.

Jonathan Linton is chair of operations and technology management at Sheffield University Management School. See also Research Policy, vol. 45, pp. 1936–1938, 2016.


This article also appeared in Research Fortnight