Piloting and Evaluating NSF Science Lottery Grants: A Roadmap to Improving Research Funding Efficiencies and Proposal Diversity
This memo was jointly produced by the Federation of American Scientists & the Institute for Progress
Summary
The United States no longer leads the world in basic science. There is growing recognition of a gap in translational activities: the fruits of American research do not convert into economic benefits. As policymakers consider a slew of proposals that aim to restore American competitiveness with once-in-a-generation investments in the National Science Foundation (NSF), less discussion has been devoted to improving our research productivity, which has been declining for generations. Cross-agency data indicates that this decline is not the result of a decline in proposal merit, nor of a shift in proposer demographics, nor of an increase (beyond inflation) in the average requested funding per proposal, nor of an increase in the number of proposals per investigator in any one year. As the Senate's U.S. Innovation and Competition Act (USICA) and the House's America COMPETES Act propose billions of dollars for NSF R&D activities, there is an opportunity to bolster research productivity, but doing so will require exploring new, more efficient ways of funding research.
The NSF's rigorous merit review process has long been regarded as the gold standard for vetting and funding research. However, since its inception in the 1950s, emergent circumstances, such as the significant growth in the overall population of principal investigators (PIs), have introduced a slew of challenges and inefficiencies to the traditional peer-review grantmaking process: a tax on research productivity, as PIs submit about 2.3 proposals for every award they receive and spend an average of 116 hours of grant-writing per NSF proposal (i.e., "grantsmanship"), corresponding to a staggering loss of nearly 45% of researcher time; the orientation of grantsmanship toward incremental research with the highest likelihood of surviving highly competitive, consensus-driven, points-based review (versus riskier, novel, or investigator-driven research); rating bias against interdisciplinary research and previously unfunded researchers; and reviewer fatigue. The result of such inefficiencies is unsettling: as fewer applicants are funded as a percentage of the growing pool, some economic analysis suggests that the value of the science that researchers forgo for grantsmanship may exceed the value of the science that the funding program supports.
Our nation's methods of supporting new ideas should evolve alongside our knowledge base. Science lotteries, when deployed as a complement to the traditional peer-review grant process, could improve the system's overall cost-efficiency by randomly selecting for funding a small percentage of high-quality proposals that have already been written and reviewed yet went unfunded, extracting value from work already performed. Tested with majority-positive feedback from participants in New Zealand, Germany, and Switzerland, science lotteries would introduce an element of randomness that could unlock innovative, disruptive scholarship across underrepresented demographics and geographies.
This paper proposes an experimental NSF pilot of science lotteries, and the Appendix provides illustrative draft legislative text. In particular, the House and Senate Science Committees should consider adding tight language to the U.S. Innovation and Competition Act (Senate) and the America COMPETES Act (House) that authorizes the use of "grant lotteries" across all NSF directorates, including the Directorate for Technology and Innovation. This language should carry the spirit of expanding the geography of innovation and of evidence-based reviews that test what works.
Challenge and Opportunity
A recent NSF report pegged the United States as behind China in key scientific metrics, including the overall number of papers published and patents awarded. The numbers are sobering, but they reflect the growing understanding that America must pick which frontiers of knowledge it seeks to lead. One of these fields should be the science of science: not just deciding which science and technology innovations we hope to pursue, but discovering new, more efficient ways to pursue them.
Since its inception in 1950, the NSF has played a critical role in advancing the United States' academic research enterprise and has strengthened our global leadership in scientific research. In particular, the NSF's rigorous merit review process has been described as the gold standard for vetting and funding research. However, growing evidence indicates that, while praiseworthy, the peer review process has been stretched to its limits. Most notably, the growing overall population of researchers has introduced a series of burdens on the system.
One NSF report rated nearly 70% of proposals as equally meritorious, while only one-third received funding. With a surplus of competitive proposals, reviewing committees often face tough close calls. In fact, empirical evidence has found that award decisions change nearly a quarter of the time when re-reviewed by a new set of peer experts. In response, PIs spend upwards of 116 hours on each NSF proposal to conform to grant expectations and must submit an average of 2.3 proposals to receive an award — a process known as "grantsmanship" that survey data suggests occupies nearly 45% of top researchers' time. Even worse, this grantsmanship is oriented toward writing proposals on incremental research topics (versus riskier, novel, or investigator-driven research), which have a higher likelihood of surviving a consensus-driven, points-based review. On the reviewer side, data supports a clear rating bias against interdisciplinary research and previously unfunded PIs, while experts increasingly decline invitations to review proposals in the interest of protecting their dwindling time (i.e., reviewer fatigue).
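To make the scale of this tax concrete, the figures above imply a simple back-of-the-envelope calculation. The short sketch below uses only the survey averages cited in this memo; the resulting figure is illustrative, not NSF program data.

```python
# Back-of-the-envelope estimate of the grantsmanship tax per funded award,
# using only the survey averages cited above (illustrative, not NSF data).
HOURS_PER_PROPOSAL = 116    # average grant-writing hours per NSF proposal
PROPOSALS_PER_AWARD = 2.3   # average proposals submitted per award received

hours_per_award = HOURS_PER_PROPOSAL * PROPOSALS_PER_AWARD
print(f"Grantsmanship hours per funded award: {hours_per_award:.0f}")  # ~267
# Roughly six and a half 40-hour weeks of proposal writing for every award.
```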
These tradeoffs in the current system appear quite troubling and merit further investigation of alternative and complementary funding models. At least one economic analysis suggests that, as fewer applicants are funded as a percentage of the increasing pool, the value of the science that researchers forgo because of grantsmanship often exceeds the value of the science that the funding program supports. In fact, despite dramatic increases in research effort, America has for generations faced dramatic declines in research productivity. And empirical analysis suggests this is not necessarily the result of a decline in proposal merit, nor of a shift in proposer demographics, nor of an increase (beyond inflation) in the average requested funding per proposal, nor of an increase in the number of proposals per investigator in any one year.
As the Senate's U.S. Innovation and Competition Act (USICA) and the House's America COMPETES Act propose billions of dollars for NSF R&D activities, about 96% of which would be distributed via the meritocratic peer-review awards process, now is the time to apply the scientific method to ourselves by experimenting with alternative and complementary mechanisms for funding scientific research.
Science lotteries, an approach tested in New Zealand, Switzerland, and Germany, represent one innovation particularly suited to reducing the overall tax on research productivity while uncovering worthwhile new initiatives for funding that might otherwise slip through the cracks. In particular, modified science lotteries, such as those proposed here, select a small percentage of well-qualified grant applications at random for funding. By selecting only from a pool of high-value projects, the lottery supports additional quality research at minimal comparative cost to researchers or reviewers. In a lottery, the value to the investigator of being admitted scales directly with the number of awards available.
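The scaling claim admits a one-line formalization: in a uniform lottery that draws k awards from n eligible proposals, each entrant's chance of winning is k/n, so the expected value of admission grows linearly with the number of awards. A minimal sketch, with hypothetical figures of our own choosing:

```python
def expected_lottery_value(num_awards: int, pool_size: int, grant_value: float) -> float:
    """Expected value of entering a uniform lottery: P(win) * grant value.

    Drawing k awards uniformly from n eligible proposals gives each
    entrant a k/n chance of funding, so the expectation is linear in k.
    """
    return (num_awards / pool_size) * grant_value

# Hypothetical figures: doubling the number of awards doubles the expected value.
print(expected_lottery_value(10, 500, 300_000))  # 6000.0
print(expected_lottery_value(20, 500, 300_000))  # 12000.0
```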
These benefits translate to favorable survey data from PIs who have gone through science lottery processes. In New Zealand, for example, a majority of scientists supported randomly allocating 2% of total research expenditures. Sunny Collings, chief executive of New Zealand's Health Research Council, recounted:
“Applications often have statistically indistinguishable scores, and there is a degree of randomness in peer review selection anyway. So why not formalize that and try to get the best of both approaches?”
By establishing conditions for entrance into the lottery — such as selecting for historically less-funded or underrepresented regions — NSF could also over-index for applicants less practiced in "grantsmanship".
What we propose, specifically, is a modified "second chance" lottery, whereby proposals that are deemed meritorious by the traditional peer-review process, yet are not selected for funding, are entered into a lottery as a second stage in the funding process. This modified format ensures a high level of quality in the projects the lottery selects for funding while still creating a randomized baseline against which the current system can be compared.
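In algorithmic terms, the proposal is a filter-then-randomize procedure. The sketch below is a minimal illustration of that two-stage logic, not NSF's actual review pipeline; the `Proposal` record, merit threshold, and field names are our own assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class Proposal:
    pid: str
    review_score: float  # overall merit score from traditional peer review
    funded: bool         # True if selected in the traditional first stage

def second_chance_lottery(proposals, merit_threshold, num_awards, seed=None):
    """Stage two of a 'second chance' lottery: draw uniformly at random
    from proposals rated meritorious but left unfunded in stage one."""
    eligible = [p for p in proposals
                if not p.funded and p.review_score >= merit_threshold]
    rng = random.Random(seed)  # a fixed, published seed keeps the draw auditable
    return rng.sample(eligible, min(num_awards, len(eligible)))
```

Because stage one already screens for merit, the only new degree of freedom the lottery introduces is the random draw itself, which is what makes the funded cohort usable as an experimental baseline.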
The use of science lotteries in the United States as a complement to the traditional peer-review process is likely to improve the overall system. However, it is possible that selecting among well-qualified grants at random could introduce unexpected outcomes. Unfortunately, direct, empirical comparisons between the NSF’s peer review process and partial lotteries do not exist. Through a pilot, the NSF has the opportunity to evaluate to what extent the mechanism could supplement the NSF’s traditional merit review process.
By formalizing a randomized selection process to use as a baseline for comparison, we may discover surprising things about the makeup of, and the processes that lead to, successful or high-leverage research, at reduced cost to researchers and reviewers. For instance, it may be the case that younger scholars from non-traditional backgrounds have as much or more success in terms of research outcomes through the lottery program as through the typical NSF grant, yet are selected at higher rates by the lottery than by the traditional NSF grantmaking process. If so, there will be some evidence that something in the selection process is unfairly penalizing non-traditional candidates.
Alternatively, we may discover that the average grant selected through the lottery is mostly indistinguishable from the average grant selected through traditional meritorious selection, which would provide some evidence that the existing administrative burdens of selecting candidates are too stringent. Or perhaps we will discover that randomly selected winners in fact produce fewer noteworthy results than candidates selected through traditional means, which would be evidence that the existing process provides tangible value in filtering funding proposals.

By providing a baseline for comparison, a lottery would offer an evidence-based means of assessing the efficacy of the current peer-review system. Any pilot program should therefore make full use of a menu of selection criteria to toggle outcomes, while also undergoing evaluation by internal and external scientific communities.
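Because the lottery cohort is randomized, the statistical comparison can stay simple. The sketch below shows one way an evaluator might test for an outcome difference between cohorts, assuming some scalar outcome metric per award (e.g., citations or follow-on funding) has already been collected; the permutation test is our illustrative choice, not a method prescribed by NSF.

```python
import random

def permutation_test(lottery, traditional, n_perms=10_000, seed=0):
    """Two-sample permutation test on the difference in mean outcomes
    between lottery-selected and traditionally selected awards."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = mean(lottery) - mean(traditional)
    pooled, n = list(lottery) + list(traditional), len(lottery)
    hits = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)  # re-split the pooled outcomes at random
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= abs(observed):
            hits += 1
    return observed, hits / n_perms  # effect size and two-sided p-value
```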
Plan of Action
Recommendation 1: Congress should direct the NSF to pilot experimental lotteries through America COMPETES and the U.S. Innovation and Competition Act, among other vehicles.
In reconciling the differing House America COMPETES and Senate USICA, Congress should add language that authorizes a pilot program for “lotteries.”
We recommend opting for signaling language and follow-on legislation that adds textual specificity. For example, in the latest text of the COMPETES Act, the responsibilities of the Assistant Director of the Directorate for Science and Engineering Solutions could be amended to include "lotteries":
Sec. 1308(d)(4)(E). developing and testing diverse merit-review models and mechanisms, including lotteries, for selecting and providing awards for use-inspired and translational research and development at different scales, from individual investigator awards to large multi-institution collaborations;
Specifying language should then require the NSF to employ evidence-based evaluation criteria and grant it the flexibility to determine the timeline of the lottery intake and award mechanisms, with the broader goals of timeliness and of supporting equitable distribution among regional innovation contenders.
The appendix contains one example structure of a science lottery in bill text (incorporated into the new NSF Directorate established by the Senate-passed United States Innovation and Competition Act), which includes the following key policy choices that Congress should consider:
- Limiting eligibility to meritorious proposals;
- Ensuring that proposals are timely;
- Limiting the grant proposal size to provide the maximum number of awards and create a large enough sample to fairly evaluate the success of a lottery program (see the sketch after this list);
- Rigorous mechanisms for stakeholder feedback from the scientific research community;
- Fast-tracking award distribution following a lottery; and
- Regular reports to Congress in accordance with the NSF's Open Science Policy to ensure transparency, accountability, and rigorous evaluation.
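To illustrate the arithmetic behind the proposal-size cap in the list above: for a fixed pilot budget, capping each request at 200% of the directorate's median request guarantees a minimum number of awards, and therefore a minimum evaluation sample. All figures in the sketch are invented for illustration.

```python
# Illustrative arithmetic only: how a cap on grant size bounds the sample size.
PILOT_BUDGET = 20_000_000   # hypothetical lottery budget for one directorate
MEDIAN_REQUEST = 500_000    # hypothetical median grant request
CAP = 2.0 * MEDIAN_REQUEST  # eligibility cap: 200% of the median request

min_awards = int(PILOT_BUDGET // CAP)  # worst case: every awardee requests the cap
print(f"Guaranteed minimum number of awards: {min_awards}")  # 20
# Without the cap, a few very large requests could exhaust the budget
# and leave too few awards to evaluate the lottery statistically.
```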
Recommendation 2: Create a “Translational Science of Science” Program within the new NSF Technology, Innovation and Partnerships Directorate that pilots the use of lotteries with evidence-based testing:
First, the NSF Office of Integrative Activities (OIA) should convene a workshop with relevant stakeholders, including representatives from each directorate, members of the research community (both NSF grant recipients and non-recipients), and subject-matter experts (SMEs) on programmatic implementation from New Zealand, Germany, and Switzerland, in order to temperature- and pressure-test key criteria for implementing piloted science lotteries across directorates.
- The initial goal of the workshop should be to gather feedback and gauge interest from the PI community on this topic. To this end, it would be wise to explore varying elements of science lottery construction to learn which are most supported by the PI community. The community, for example, should be involved in developing baseline parameters for proposal quality and for a timely, equitable process, despite varying directorate application deadlines. This might include applicants' consented sign-off before entrance into the lottery, upfront and consistent communication of timelines, and randomization and selection from a pool with scores of at least [excellent/very good/good] during the peer evaluation process described in the NSF's "Proposal and Award Policies and Procedures Guide".
- Another goal of this workshop would be to scope the process of an OIA inter-directorate competition, in which directorates submit applications to receive an award from the Division of Grants and Agreements to pursue a pilot science lottery. The workshop should therefore develop a clear sense of opportunities with respect to budget sizing for each directorate and could consider making recommendations about the placement of science lottery pilots across directorates based on willingness to devote experimental resources. To maximize the number of lottery recipients, proposals must not exceed 200% of the median grant request to a given directorate.
- Finally, a third goal of the workshop should be to explore standards and timeframes for evidence-based evaluation mechanisms as described above and in the bill text below, including stakeholder feedback mechanisms, regular reports to Congress, and transparency requirements. Additional mechanisms might include detailed reports on grants and awardees, such as demographic and geographic information on awardees, comparisons of outcomes between traditional awardees and lottery awardees, and a statistical picture of the entire pool of grant proposals entered into the lottery. If the workshop is based on competitive directorate applications, the General Services Administration's Office of Evaluation Sciences (OES) should be invited to later-stage workshop convenings to provide technical assistance in designing evaluation criteria. Some unifying criteria include meeting the requirements of the NSF's Open Science Policy and Public Access Policy, and making grant information public as soon as feasible to facilitate rapid evaluation by external stakeholders — a potential metric by which to judge directorate applications.
Appendix: Bill Text
Note: Please view attached PDF for the formatted bill text
H. ______
To establish a pilot program for National Science Foundation grant lotteries.
In the House of Representatives of the United States
February 2, 2022
______________________________
A BILL
Title: To establish a pilot program for National Science Foundation grant lotteries.
Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,
SEC. _____. Pilot Program to Establish National Science Foundation Grant Lotteries
- Findings.— Congress makes the following findings:
- Over the past seven decades, the National Science Foundation has played a critical role in advancing the United States' academic research enterprise by supporting fundamental research and education across all scientific disciplines;
- The National Science Foundation has made remarkable contributions to scientific advancement, economic growth, human health, and national security, and its peer review and merit review processes have identified and funded scientifically and societally relevant basic research;
- Every year, thousands of meritorious grant proposals do not receive National Science Foundation grants, threatening the United States’ leadership in science and technology and harming our efforts to lead translation and development of scientific advances in key technology areas; and
- While Congress reaffirms its belief that the National Science Foundation's merit-review system is appropriate for evaluating grant proposals, Congress should establish efforts to explore alternative mechanisms for distributing grants and to evaluate, objectively, whether such mechanisms can supplement the merit-review system by funding worthwhile projects that otherwise go unawarded.
- Definitions.—In this section:
- Directorate.— The term “Directorate” refers to the Directorate for Technology and Innovation established in Sec. 2102 of this Act.
- Assistant Director.— The term “Assistant Director” refers to the Assistant Director for the Directorate described in Sec. 2102(d) of this Act.
- Foundation.—The term “Foundation” refers to the National Science Foundation.
- PAPPG.—The term “PAPPG” refers to the document entitled “OMB Control Number 3145-0058,” also known as the Proposal and Award Policies and Procedures Guide, published by the National Science Foundation, also published as NSF 22-1.
- Program.—The term “program” refers to the program established in subparagraph (d) of this section.
- Grant request.—The term “grant request” refers to the amount of funding requested in an individual grant proposal to the National Science Foundation.
- Lottery awardee.—The term “lottery awardee” refers to a grant proposal selected for award during a lottery established by this section.
- Lottery year.—The term “lottery year” refers to the calendar year of eligibility for proposals, as determined by the Assistant Director, for a lottery established under this program.
- Purpose.—It is the purpose of this section to establish a pilot program for merit-based lotteries to award scientific research grants in order to:
- Provide grants to meritorious but unawarded grant proposals;
- Explore “second-look” mechanisms to distribute grants to meritorious but overlooked grant proposals; and
- Evaluate whether alternative mechanisms can supplement the Foundation's merit-review system.
- Establishment.— No later than 180 days after establishment of the Directorate, the Assistant Director shall establish a lottery program to provide second-look grants for meritorious grant proposals that were declined funding by the Foundation.
- Requirements.
- Eligibility.—A grant proposal shall be eligible for a lottery if:
- It did not receive funding from the Foundation;
- The grant proposal received an overall evaluation score deemed meritorious during the peer review process;
- Meritorious.—The Assistant Director shall determine a minimum score that a proposal must receive during the peer evaluation process described in Chapter III of the PAPPG to be deemed meritorious.
- The grant request does not exceed 200 percent of the median grant request to a given directorate in the calendar year with the most recently available data;
- The grant was proposed to one of the following directorates within the Foundation:
- Biological Sciences;
- Computer and Information Science and Engineering;
- Engineering;
- Geosciences;
- Mathematical and Physical Sciences;
- Social, Behavioral, and Economic Sciences;
- Education and Human Resources;
- Environmental Research and Education; or
- International Science and Engineering;
- The grant has been deemed timely by a Foundation Program Officer; and
- Any other criteria deemed necessary by the Assistant Director.
- Exemptions.—If deemed necessary or worthwhile to further the mission and goals of the Directorate or the Foundation, the Assistant Director may:
- Exempt grant proposals from the requirement in subparagraph (e)(1)(D); and
- Determine an appropriate method to include such exempted proposals in a lottery.
- Stakeholder Feedback.—Prior to finalizing eligibility requirements, the Assistant Director shall, to the extent practicable, ensure that the requirements take into consideration advice and feedback from the scientific research community. The Federal Advisory Committee Act (5 U.S.C. App.) shall not apply whenever such advice or feedback is sought in accordance with this subsection.
- Implementation.
- Policies and Procedures.—The Assistant Director shall:
- Develop procedures and policies to ensure that each grant lottery:
- Is randomized and affords equal opportunity to all participants; and
- Is not susceptible to fraud;
- Ensure that grant amounts are distributed equitably among the directorates described in subparagraph (e)(1)(D);
- Ensure that relevant external parties have due notice of their obligations with respect to participation in a lottery;
- Ensure that relevant staff and officers of the Foundation are aware of their duties and responsibilities with respect to implementation of the program;
- Ensure that ranked alternative awardees are selected for each lottery in the event that:
- a lottery awardee withdraws their application;
- a lottery awardee receives Foundation funding following an appeals process; or
- a lottery awardee is otherwise deemed ineligible for a Foundation grant.
- Grant Approval.—Once a proposal has been selected for an award:
- It shall be submitted to the Division of Grants and Agreements for a review of business, financial, and policy implications and award finalization thereafter, as described in PAPPG Chapter III; and
- It shall not be declined funding by the Division of Grants and Agreements unless granting the award would result in fraud, abuse, or other outcomes deemed egregious and antithetical to the mission of the Foundation.
- Lottery timeline.—For each directorate specified in subparagraph (e)(1)(D), the Assistant Director shall administer a lottery for each of the calendar years [2022, 2023, and 2024].
- Stakeholder Feedback.—Prior to finalizing lottery implementation, and subsequent to conducting each lottery, the Assistant Director shall, to the extent practicable, ensure that lottery implementation takes into consideration advice and feedback from the scientific research community. The Federal Advisory Committee Act (5 U.S.C. App.) shall not apply whenever such advice or feedback is sought in accordance with this subsection.
- Deadline of Submission of Grants to the Directorate.—No later than [90 days] following a given lottery year, Foundation Program Officers shall submit to the Directorate all grant proposals that meet the criteria described in subparagraphs (e)(1)(A) through (e)(1)(F) of this section.
- Authorization of Appropriation.— There is authorized to be appropriated to the Foundation [$—,000,000] to carry out this Section.
- Evaluation and Oversight and Public Access.
- Evaluation.—The Assistant Director shall:
- Ensure that awards are evaluated using the same methods and procedures as other grant programs of the Foundation, including as set forth by the Foundation’s Evaluation and Assessment Capability and the Foundation’s values of learning, excellence, inclusion, collaboration, integrity, and transparency; and
- Establish a rapid, empirically-based evaluation program to determine the effectiveness of the lottery program.
- Reports to Congress.—
- Periodic.— No later than 180 days following completion of a lottery, the Assistant Director shall submit a summary report to Congress including:
- A list of all grants awarded;
- Demographic information of the grant awardees;
- Geographic information of the grant awardees;
- Information regarding the institutions receiving grants;
- An assessment comparing lottery grant awardees with those awarded grants through the Foundation’s traditional review process;
- Information and data describing the entire pool of grant proposals deemed eligible for the lottery; and
- Any other information deemed necessary or valuable by the Assistant Director.
- Yearly.—Not later than [two years] following the first lottery, the Assistant Director shall submit comprehensive reports on a yearly basis, for a period of five years after the report submission, evaluating awards using the Foundation’s Evaluation and Assessment Capability or other assessment methods used to evaluate grants awarded through the traditional grant process;
- Final report.—Within [3 years] of completion of the final lottery, the Assistant Director shall submit a final report to Congress evaluating the success of the program and assessing whether Congress should make the program permanent.
- Public Access.—The Assistant Director shall:
- Ensure that the program meets the requirements of the Foundation’s:
- Open Science Policy;
- Public Access Policy; and
- General values of learning, transparency, and integrity.
- Make grant information available to the public as soon as is feasible to facilitate rapid, empirically-based evaluation by external stakeholders;
- Duties, Conditions, Restrictions, and Prohibitions.—
- Right to Review.—Nothing in this section shall affect an applicant's right to review, appeal, or contest an award decision.