As far as risk is concerned, there seems to be a consensus on how different funders position themselves: government funding agencies tend to fund low-risk proposals, while private foundations, at least in principle, are expected to fund more of the so-called high-risk, high-reward projects. Unfortunately, in actual practice these high-risk, high-reward funding programs often apply the same review measures used for mainstream, low-risk projects. Here I tentatively propose an improved review procedure for such programs, one that is easy to implement and yet properly biased toward high-risk, potentially paradigm-shifting proposals.
Federal funding agencies are indeed risk-averse, though not entirely by intention. In their review process, they typically solicit opinions from several external experts. For a typical funding call, the competition is so intense that a proposal has almost no chance of being funded unless it receives excellent reviews from all consulted experts. Unfortunately, requiring such unanimous approval inevitably results in funding only low-risk projects. In recent years, these agencies seem to have recognized this shortcoming and have started separate programs aimed at supporting high-risk projects, for example, EAGER from NSF and USP from DOE. However, their scale and practices have not changed the situation much.
Meanwhile, philanthropic organizations are stepping in, hopefully to alleviate the situation, with many of them declaring support for high-risk, high-return projects in their mission statements. However, it seems to me that the measures needed to ensure that high risk is actually considered have not been properly implemented, though I do not know in detail how private foundations operate. In my humble opinion, neither program officers nor experts from the same field can solely, or even jointly, decide which high-risk, high-reward projects to fund without issues of bias and fairness.
By analogy with the business world, we may find more clues. High-risk, high-reward proposals in basic science are, in many ways, similar to start-up companies, and private foundations, by comparison, are like venture capitalists and angel investors. Identifying a good startup worth investing in is not much different from finding a worthy high-risk, high-reward project to support. There are really two correlated factors to evaluate, risk and reward, and both point to the same underlying question: how innovative, disruptive, or even paradigm-shifting a given proposal is.
As Kuhn elaborated in "The Structure of Scientific Revolutions", experts fully immersed in the old paradigm are typically the fiercest critics of an emerging new paradigm. For any potentially paradigm-shifting proposal, therefore, one true indicator of the high-risk factor is that it is supported by very few experts from the same field. On the other hand, minimum scientific standards must be applied in order to exclude pseudo-science from the competition. One possible way to ensure such minimum standards, similar to the practice of Consultative Review newly introduced by the esteemed journal PNAS, is to ask for positive reviews from experts whom the applicants recommend, who are presumably in favor of the proposal and have no conflict of interest. The threshold number of such positive reviews is debatable, but it could be set to one, two, or even three, depending on how much risk a given foundation is willing to take. Another possible minimum standard is a requirement of peer-reviewed publications, in reputable academic journals, that are directly related to the proposal.
The biggest pitfall to avoid is inadvertently supporting mainly low-risk projects. For example, in the rare feedback I received for a funding call aimed at high-risk, high-reward projects, the funder stressed the importance of strong community support and citations, which are precisely the indicators of mainstream, in other words low-risk, work. To ensure the high-risk factor is in place, the funder should consider the opposite. If a proposal enjoys the support of most randomly selected experts, near-consensus backing from the community of the specialized field, and/or a large number of citations in relevant publications, then it should not be considered by any funding program aiming to support high-risk efforts.
As for evaluating the high-reward factor, funders cannot rely, at least not fully, on the positive reviews from applicant-selected experts. However, funders cannot rely on the opinions of randomly selected experts either, as they are most likely biased as well, in favor of the old paradigm. As one can imagine, the least biased reviews of the reward factor are likely to come from non-expert scientists in other fields. The best option would be scientists from immediately adjacent (sub)fields, where the proposed new paradigm would not change much. These non-expert scientists may not be able to evaluate the proposal's feasibility or technical details, but they can likely tell how impactful it would be if successful.
In summary, a possible review structure or procedure for high-risk, high-reward funding programs could look like the following:
* positive reviews from 1-3 applicant-selected experts in the same field and/or 1-3 relevant peer-reviewed publications
* reviews from funder-selected experts in the same field (these cannot be all or mostly positive if the proposal is to count as high-risk)
* reviews of the proposal's reward/impact factor by non-expert scientists from adjacent fields
* [optional] reviews of the proposal's reward/impact factor by non-expert scientists from very different fields or disciplines (perhaps important for the most disruptive proposals, those with the potential for the largest paradigm shift)
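To make the screening logic of the procedure above concrete, here is a minimal sketch in Python. All thresholds and parameter names are illustrative assumptions of mine, not part of any funder's actual process; a real program would tune them to its own risk appetite.

```python
# Hypothetical screen for a high-risk, high-reward funding program.
# All thresholds are illustrative defaults, not prescriptions.

def screen_proposal(
    applicant_expert_positives: int,  # positive reviews from applicant-selected experts
    relevant_publications: int,       # directly related peer-reviewed publications
    funder_expert_positives: int,     # positive reviews from funder-selected experts
    funder_expert_total: int,         # total reviews from funder-selected experts
    adjacent_field_impact: float,     # mean impact score (0-1) from adjacent-field scientists
    min_support: int = 1,             # minimum-standard threshold (1-3, per the text)
    max_consensus: float = 0.5,       # above this fraction of positives => too mainstream
    min_impact: float = 0.7,          # required reward/impact score
) -> bool:
    """Return True if the proposal passes the high-risk, high-reward screen."""
    # Minimum scientific standard: some combination of supportive
    # applicant-selected reviews and relevant publications, to exclude
    # pseudo-science without demanding community consensus.
    meets_minimum = (applicant_expert_positives + relevant_publications) >= min_support

    # High-risk check (inverted from the usual practice): the proposal must
    # NOT enjoy consensus support from funder-selected same-field experts.
    consensus = (
        funder_expert_positives / funder_expert_total if funder_expert_total else 0.0
    )
    is_high_risk = consensus <= max_consensus

    # High-reward check: impact as judged by non-expert scientists
    # from adjacent (sub)fields.
    is_high_reward = adjacent_field_impact >= min_impact

    return meets_minimum and is_high_risk and is_high_reward
```

For example, under these illustrative defaults, a proposal with two supportive applicant-selected reviews and one relevant publication, only one positive review out of five from funder-selected experts, and a strong adjacent-field impact score would pass, whereas the same proposal with unanimous same-field support would be rejected as too mainstream.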