Investing in Digital Agriculture Innovation to Secure Food, Yields, and Livelihoods
Summary
Smallholder farmers and their households account for more than 2 billion people—almost one-third of humanity and more than two-thirds of the world’s poor. They are the economic engine of local livelihoods and a critical source of local nutrition and food security, yet their persistently low agricultural productivity is a major driver of global poverty and food insecurity. Many known agricultural practices and technologies could improve farmers’ yields and incomes, but systemic barriers and information gaps hamper their adoption. Today, with the rapid growth of mobile phone penetration throughout the developing world, we are in a unique moment to deploy new digital technologies and innovations to improve food security, yields, and livelihoods for 100 million smallholder farmers by 2030.
To spearhead USAID’s leadership in digital agriculture and create a global pipeline from tested innovation to scaled impact, USAID should launch a Digital Agriculture for Food Security Challenge, establish a Digital Agriculture Innovation Fund, and convene a Digital Agriculture Summit to jump-start the process.
Challenge and Opportunity
Two-thirds of the world’s ultra-poor depend on agriculture for their livelihood. Low productivity growth in this sector is the biggest obstacle to poverty reduction and sustainable food security. The Food and Agriculture Organization’s 2022 report on The State of Food Security and Nutrition in the World estimates that around 2.3 billion people—nearly 30% of the global population—were moderately or severely food insecure in 2021, and as many as 828 million were affected by hunger. Improving smallholder farmer incomes and local food security is critical to achieving the United Nations Sustainable Development Goals by 2030, particularly ending poverty (SDG 1) and eliminating hunger (SDG 2). Yet smallholder farmers typically harvest only 30%–50% of what they could produce. Smallholder farmers are particularly at risk from climate-driven shocks, and fundamental changes to growing conditions make climate adaptation a key challenge to improving and securing their yields.
More than $540 billion is spent in the agricultural sector each year through public budgets, mostly subsidies on farm inputs and outputs. Of USAID’s over $1 billion annual budget for agricultural aid, much attention is given to direct nutrition and economic assistance as well as institution and market-shaping programs. By contrast, efforts in climate adaptation and food security innovation like the Feed the Future Innovation Labs and Agriculture Innovation Mission for Climate (AIM for Climate) rely on traditional, centralized models of R&D funding that limit the entry and growth of new stakeholders and innovators. Not enough investment or attention is paid to productivity-enhancing, climate-adaptation-focused innovations and to translating R&D investment into sustainable interventions and scaled products to better serve smallholder farmers.
USAID recognizes both the challenge for global food security and the opportunity to advance economic security through evidence-driven, food-system-level investments that are climate-driven and COVID-conscious. As directed by the Global Food Security Act of 2016, the U.S. Government Global Food Security Strategy (GFSS) 2022–2026 and its counterpart Global Food Security Research Strategy (GFSRS) highlight the potential for digital technologies to play a pivotal role in the U.S. government’s food system investments around the world. The GFSS describes “an ecosystem approach” that prioritizes the “financial viability of digital products and services, rather than one that is driven predominantly by individualized project needs without longer-term planning.” A core part of achieving this strategy is Feed the Future (FTF), the U.S. government’s multi-agency initiative focused on global hunger and food security. Administrator Samantha Power has committed $5 billion over five years to expand FTF, creating an opportunity to catalyze and crowd in capital to build a thriving, sustainable global agriculture economy—including innovation in digital agriculture—that creates more resilient and efficient food systems.
However, USAID stakeholders are siloed and do not coordinate to deliver results and invest in proven solutions that can have scaled, sustainable impact. This lack of coordination means that digitally powered, impactful, and sustainable solutions are not fostered or grown to better serve USAID’s beneficiaries globally. USAID’s Bureau for Resilience and Food Security (RFS) works with partners to advance inclusive agriculture-led growth, resilience, nutrition, water security, sanitation, and hygiene in priority countries to help them accelerate and protect development progress. USAID’s FY 2023 budget request also highlights RFS’s continued focus on supporting “partner countries to scale up their adaptation capacity and enhance the overall climate resilience of development programming.” The FTF Innovation Labs focus on advanced agricultural R&D at U.S. universities but do not engage directly in scaling promising innovations or investing in non-academic innovators and entrepreneurs to test and refine user-centered solutions that fall within FTF’s mandate. USAID’s emerging Digital Strategy and Digital Development Team includes specific implementation initiatives, such as a Digital Ecosystem Fund and an upcoming Digital Vision for each sector, including agriculture. USAID is also planning to hire Digital Development Advisors, whose scope aligns closely with this initiative but will require intentional integration with existing efforts. Furthermore, USAID country missions, where many of these programs are funded, often do not have enough input in designing agriculture RFPs to incorporate the latest proven solutions and digital technologies, making it harder to implement and innovate within contract obligations.
This renewed strategic focus on food security through improved local agricultural yields and climate-resilient smallholder farmer livelihoods, along with an integration of digital best practices, presents an opportunity for USAID and Feed the Future. By using innovative approaches to digital agriculture, FTF can expand its impact and meet the efficiency and resilience standards currently proposed in the 2022 reauthorization of the Global Food Security Act. While many known agricultural practices, inputs, and technologies could improve smallholder farmers’ yields and incomes, adoption remains low due to structural barriers, farmers’ lack of information, and limitations of existing agriculture development aid practices that prioritize programs over sustainable agricultural productivity growth. Today, with the rapid pace of mobile phone penetration (ranging between 50% and 95% throughout the developing world), we are in a unique moment to deploy novel digital technologies and innovations to improve food security, yields, and livelihoods for 100 million smallholder farmers by 2030.
There are many digital agriculture innovations, such as digital agricultural advisory services (DAAS, detailed below), in various stages of development that require additional investment in R&D. These innovations could be implemented either together with DAAS or as stand-alone interventions. For example, smallholder farmers need access to accurate, reliable weather forecasts. Weather forecasts are available in low- and middle-income countries (LMICs), but additional work is needed to customize and localize them to farmers’ needs and to communicate probabilistic forecasts so farmers can easily understand, interpret, and incorporate them in their decision-making.
Similarly, digital innovations are in development to improve farmers’ linkages to input markets, output markets, and financial services—for example, by facilitating e-subsidies and mobile ordering and payment for agricultural inputs, helping farmers aggregate into farmer producer organizations and negotiate prices from crop offtakers, and linking farmers with providers of loans and other financial services to increase their investment in productive assets.
Digital technologies can also be leveraged to mobilize smallholder farmers to contribute to climate mitigation by using remote sensing technology to monitor climate-related outcomes such as soil organic carbon sequestration and digitally enrolling farmers in carbon credit payment schemes to help them earn compensation for the climate impact of their sustainable farming practices.
Digital agricultural advisory services (DAAS) leverage the rapid proliferation of mobile phones, behavioral science, and human-centered design to build public extension system capacity and empower smallholder farmers with cutting-edge, productivity-enhancing agricultural knowledge that improves their food security and climate resilience through behavior change. DAAS is a proven, cost-effective, and shovel-ready innovation that can improve the resilience of food systems and increase farmer yields and incomes by modernizing the agricultural extension system, at a fraction of the cost and an order of magnitude higher reach than traditional extension approaches. It gives smallholder farmers access to on-demand, customized, and evidence-based agricultural information via mobile phones at a cost of only $1–$2 per farmer per year. DAAS can be rapidly scaled up to reach more than 100 million users by 2030, leading to an estimated $1 billion increase in additional farmer income per year. USAID currently spends over $1 billion on agricultural aid annually, and only a small fraction of this is directed to agricultural extension and training. Funding is often program-specific, without a consistent strategy that can be replicated or scaled beyond the original geography and timeframe. Reallocating a share of this funding to DAAS would help the agency achieve its strategic climate and equity global food security goals. Scaling up DAAS could improve productivity and transform the role of LMIC government agricultural extension agents by freeing up resources and providing rapid feedback and data collection. Agents could refocus on enrolling farmers, providing specialized advice, and improving the relevance of the advice farmers receive. DAAS could also be integrated into broader agricultural development programs, such as FAO’s input e-subsidy programs in Zambia and Kenya.
Plan of Action
To spearhead USAID’s leadership in digital agriculture and create a global pipeline from tested innovation to scaled impact, USAID, Feed the Future, and their U.S. government partners should launch a Digital Agriculture for Food Security Challenge. With an international call to action, USAID can galvanize R&D and investment for the next generation of digitally enabled technologies and solutions to secure yields and livelihoods for 100 million smallholder farmers by 2030. This digital agriculture moonshot would consist of the following short- and long-term actions:
Recommendation 1: Allocate $150 million over five years to kickstart the Digital Agriculture Innovations Fund (DAI Fund) to fund, support, and scale novel solutions that use technology to equitably secure yields, food security, and livelihoods for smallholder farmers.
The fund’s activities should target the following:
- Digital Agriculture Pilot and Research Fund (DAPR Fund) ($35 million): Provide funding for research, user design, and pilot testing to industry, NGO, and university innovators to create and verify digital innovations like customized weather forecasts, digital extension, microinsurance, microcredit, and local input dealer directories. This could employ the Small Business Innovation Research model and use technical assistance from within the agency and in partner organizations to support the development of promising new ventures or products/services from existing players.
- Digital Agriculture Scaling and Commercialization Fund ($100 million): Invest in grants or, in collaboration with the U.S. International Development Finance Corporation, in equity funding for proven digital agriculture solutions as bridge capital to help them scale to new markets or products. Funding should be directed not only to FTF Innovation Labs solutions but also to those outside the FTF network, with a focus on LMIC-founded ventures, digital and technology-enabled startups, and ventures with existing footprints in FTF target countries to ensure broader impact. Selected solutions should have demonstrated outcomes in proof of concept and moved into the “demonstrated uptake” phase of the product life cycle. Annual investments should be up to $10 million across a small portfolio of ventures to crowd in private capital and foster competitive, sustainable enterprises. Contract authority should be flexible and mission-oriented.
- Market-Shaping and Public-Private Partnerships ($15 million): Create an Advanced Research Projects Agency-Energy (ARPA-E) style Tech-to-Market Team, a separate group of staffers working full-time to find marketing opportunities for novel technologies in the innovation pipeline. This group could coordinate new public-private partnerships, like the Nutritious Foods Financing Facility (N3F), which can support the digital agriculture ecosystem for smallholder farmers. This funding would also allow for the hiring of a cadre of dedicated digital development advisors at USAID to spearhead this work in the digital agriculture sector and collaborate with agency country missions in planning and executing RFPs and other agricultural aid programs.
The fund’s investment priorities should align with stated GFSS and GFSRS objectives, including solutions focused on climate-smart agricultural innovation, enhanced nutrition and food systems, genetic innovation, and poverty reduction. Program activities and funding should be coordinated with FTF implementation in strategic priority countries with large agricultural sectors and mature, low-cost mobile networks, such as Ethiopia, India, Kenya, Nigeria, and Pakistan. The fund should also collaborate with the FTF Innovation Labs and the AIM for Climate Initiative networks.
Recommendation 2: Convene the Digital Agriculture Summit to create an all-hands-on-deck approach to facilitate and accelerate integrated digital agriculture products and services that increase yields and resilience.
USAID will announce the dedicated DAI Fund, convening its interagency partners, including the U.S. Department of Agriculture (USDA), the U.S. International Development Finance Corporation (DFC), the Millennium Challenge Corporation (MCC), and the U.S. African Development Foundation (USADF), as well as philanthropy, private sector capital, and partner country officials and leaders, to chart these pathways and create opportunities for collaboration between sectors. The Summit can foster a community of expertise and solidify commitments for funding, in-kind resources, and FTF country partnerships that will enable DAI Fund solutions to demonstrate impact and scale. The Summit could occur on the sidelines of the United Nations General Assembly to allow for greater participation and collaboration with FTF country representatives and innovators. Follow-up activities should include:
- Partner Country Commitments: Secure commitments from FTF partner countries to direct annual funding toward digital infrastructure and the development of a local digital agriculture economy, whether in the form of R&D, implementation, or infrastructure funding.
- Philanthropic and Private Sector Commitments: Following the Grand Challenges model, the Digital Agriculture for Food Security Challenge should seek commitments from philanthropy and private sector funders to expand the funding pool and finance pipelines for startups. Invitation to the Summit would be contingent on commitments of financial support and in-kind resources for digital agriculture innovation.
- SXSAg for Digital Agriculture: Hold annual gatherings of innovators, investors, and stakeholders to share knowledge and results and to attract more private capital.
- Innovator Community of Practice: Create a Community of Practice of innovators and experts inside and outside the agency to advise DAI Fund staff and USAID on current challenges in the digital agriculture space for non-established entrants and opportunities for future fund investments.
- Webinar Series: As a follow-up to the Summit, a webinar series could disseminate knowledge and build institutional buy-in and support for DAAS with key stakeholders within the agency. Subject matter experts from PxD and other service providers can share evidence, use cases, and lessons learned in developing and delivering these services and provide recommendations on how USAID can better incorporate digital agriculture into its operations.
Conclusion
With the exponential adoption of mobile phones among smallholder farmers in the past decade, digital agriculture innovations are emerging as catalytic tools for impact at an unprecedented scale and social return on investment. Devoting a small percentage (~2%–5%) of USAID’s agricultural aid budget to DAAS and other digital agriculture innovations could catalyze $1 billion worth of increased yields among 100 million smallholder farmers every year, at a fraction of the cost and an order of magnitude higher reach than traditional extension approaches.
Achieving this progress requires a shift in strategy and an openness to experimentation. We recommend establishing a Digital Agriculture Innovation Fund to catalyze investment from USAID and other stakeholders and convening a global Digital Agriculture Summit to bring together subject matter experts, USAID, funders, and LMIC governments to secure commitments. From our experience at PxD, one of the world’s leading innovators in the digital agriculture sector, we see this as a prime opportunity for USAID to invest in sustainable agricultural production systems to feed the world and power local economic development for marginalized, food-insecure smallholder farmers around the world.
More from Jonathan Lehe, Gautam Bastian, and Nick Milne can be found at Precision Development.
Using the reach and convening power of the U.S. government and its leaders, multi-sector stakeholders can be brought together to outline a common agenda, align on specific targets, and seek commitments from the private sector and other anchor institutions to spur collective, transformational change on a wide range of issues aligned with the goals of the federal agency and the Administration’s priorities. External organizations respond to these calls to action, often leading to formal and informal partnerships, grand challenges, and new coalitions that make financial and in-kind commitments aligned with the federal government’s goals. Commitments could be modeled on those made when the State Department convened the Global Alliance for Clean Cookstoves:
- Financial contribution: The U.S. pledged nearly $51 million to ensure that the Global Alliance for Clean Cookstoves reached its “100 by ’20” goal, which called for 100 million homes to adopt clean and efficient stoves and fuels by 2020.
- Shared expertise: The Alliance mobilizes experts on a variety of issues, including gender, health, security, economics, and climate change, to address significant risk factors. The U.S. also offered assistance in implementing cookstove programs.
- Research and development: The U.S. committed to an applied research and development effort to serve as the backbone of future work in the field, including analyzing the health and environmental benefits of clean stoves, developing sustainable technologies, and conducting monitoring to ensure the Alliance meets its goals.
USAID is a leader within the U.S. government in running open innovation challenges and prizes. Other U.S. government agencies, foreign government aid agencies, and philanthropies have also validated the potential of open innovation models, particularly for technology-enabled solutions. USAID’s Grand Challenges for Development (GCDs) are effective programmatic frameworks that focus global attention and resources on specific, well-defined international development problems and promote innovative approaches, processes, and solutions for solving them.
Conceived, launched, and implemented in coordination with public and private sector partners, Grand Challenges for Development (see list below) emphasize the engagement of non-traditional solvers around critical development problems. The Grand Challenges for Development approach complements USAID’s current programming methods, with each GCD led by experts at the bureau level. These experts work directly with partners to implement the day-to-day activities of the program. The Grand Challenges for Development programs show how the power of the framework can be leveraged through a variety of modalities, including partnerships, prizes, challenge grant funding, crowdsourcing, hack-a-thons, ideation, and commitments. The Digital Agriculture for Food Security Challenge could follow a GCD program like Saving Lives at Birth by providing consistent funding, resources, and energy toward meaningful, cost-effective breakthroughs that improve lives where solutions are most needed.
Information provision, including DAAS, is a difficult product for private sector entities to deliver with a sustainable business model, particularly for smallholder farmers. The ability and willingness to pay for such services is often low among resource-poor smallholder farmers, and information is easily shareable, so it is hard to monetize. National or local governments, on the other hand, have an interest in implementing digital solutions to complement in-person agricultural extension programs and subsidies but tend to lack the technical capacity and experience to develop and deliver digital tools at scale.
USAID has the technical and institutional capacity to provide digital agriculture services across its programs. It has invested hundreds of millions of dollars in agricultural extension services over the past 60 years and has gained a strong working knowledge of what works (and what doesn’t). Digital tools can also achieve economies of scale for cost relative to traditional in-person agriculture solutions. For instance, in-person extension requires many expenses that do not decrease with scale, including fuel, transportation, training, and most importantly the paid time of extension agents.
One estimate is that extension agents in low-income countries cost $4,000 to $6,000 per year and can each reach between 1,000 and 2,000 farmers (well above the World Bank’s recommended threshold of 500 farmers per agent), bringing annual costs to $2–$6 per farmer. In other contexts, per-farmer costs have been estimated as high as $115. We estimate a cost-effectiveness of $10 in increased farmer income for every $1 invested in programs like DAAS, an effective return on American foreign development assistance.
Digital solutions require not only up-front investment in development and testing but also ongoing maintenance and upkeep to remain effective. Scaling these solutions and sustaining their impact requires engaged public-private partnerships to reduce costs for smallholder farmers while still delivering positive impact. Scaling also requires private capital, particularly for new technologies to support diffusion and adaptation, but that capital is unlocked only when development aid is used to de-risk investments.
As an example, PxD engages directly with national governments to encourage adoption of DAAS, focusing on building capacity, training government staff, and turning over systems to governments to finance the operation and maintenance of systems into perpetuity (or with continued donor support if necessary). For instance, the State Government of Odisha in India built a DAAS platform with co-financing from the government and a private foundation, scaled the platform to 3 million farmers, and transitioned it to the government in early 2022. A similar approach could support scale across other geographies—especially given USAID’s long-standing relationships with governments and ministries of agriculture.
A growing body of evidence shows that DAAS can have a significant impact on farmers’ yields and incomes. Precision Development (PxD) currently reaches more than 7 million smallholder farming households with DAAS in nine countries across Africa, Asia, and Latin America, and a well-established market of other providers offers similar services. This research, including several randomized controlled trials conducted by PxD researchers in multiple contexts as well as additional research by other organizations, shows that DAAS can improve farmer yields by 4% on average in a single year, with benefit-cost ratios of 10:1 and the potential for these impacts to grow over time.
There is also evidence of a larger impact in certain geographies and for certain crops and livestock value chains, as well as a larger impact for the subset of farmers who use DAAS the most and adopt its recommendations.
Unlocking the U.S. Bioeconomy with the Plant Genome Project
Summary
Plants are an important yet often overlooked national asset. We propose creating a Plant Genome Project (PGP), a robust Human Genome Project-style initiative to build a comprehensive dataset of genetic information on all plant species, starting with the 7,000 plant species that have historically been cultivated for food and prioritizing plants that are endangered by climate change and habitat loss. In parallel, we recommend expanding the National Plant Germplasm System (NPGS) to include genomic-standard repositories that connect plant genetic information to physical seed/plant material. The PGP will mobilize a whole-of-government approach to advance genomic science, lower costs, and increase access to plant genomic information. By creating a fully sequenced national germplasm repository and leveraging modern software and data science tools, we will unlock the U.S. bioeconomy, promote crop innovation, and help enable a diversified, localized, and climate-resilient food system.
Challenge and Opportunity
Plants provide our food, animal feed, medicinal compounds, and the fiber and fuel required for economic development. Plants contribute to biodiversity and are critical for the existence of all other living creatures. Plants also sequester atmospheric carbon, thereby combating climate change and sustaining the health of our planet.
However, as a result of climate change and human practices, we have been losing plants at an alarming rate. Nearly 40% of the world’s 435,000 unique land plant species are extremely rare and at risk of extinction due to climate change. More than 90% of crop varieties have disappeared from fields worldwide as farmers have abandoned diverse local crop varieties in favor of genetically uniform, commercial varieties.
We currently depend on just 15 plants to provide almost all of the world’s food, making our global food supply extremely vulnerable to climate change, new diseases, and geopolitical upheaval—problems that will be exacerbated as the world’s population rises to 10 billion by 2050.
We are in a race against time to stop the loss of plant biodiversity—and at the same time, we desperately need to increase the diversity of our cultivated crops. To do this, we must catalog, decode, and preserve valuable data on all existing plants. Yet more than two decades after the first plant genome was sequenced, genome sequence information exists for only 798 plant species—a small fraction of all plant diversity.
Although large agriculture companies have made substantial investments in plant genome sequencing, this genetic information covers only a small number of crops and is not publicly available. What little information exists is siloed within large corporations rather than openly available to researchers, farmers, or policymakers. This is especially true for nations in the Global South, which are rarely included in genome sequencing projects. Furthermore, data held in existing germplasm repositories, State Agricultural Experiment Stations, and land-grant universities is not easily accessible online, making it nearly impossible for researchers in both public and private settings to explore. These U.S. government collections and resources of germplasm and herbaria, documented by the Interagency Working Group on Scientific Collections, have untapped potential to catalyze the bioeconomy and mobilize investment in the next generation of plant genetic advancements and, as a result, in food security and new economic opportunities.
Twenty years ago, the United States launched the Human Genome Project (HGP), a shared knowledge-mapping initiative funded by the federal government. We continue to benefit from this initiative, which has identified the cause of many human diseases and enabled the development of new medicines and diagnostics. The HGP had a $5.4 billion price tag ($2.7 billion from U.S. contributions) but resulted in more than $14.5 billion in follow-on genomics investments that enabled the field to rapidly develop and deploy cutting-edge sequencing and other technologies, leading to a drop in genomic sequencing cost from $300 million per genome to less than $1,000.
Today, we need a Human Genome Project for plants—a unified Plant Genome Project that will create a global database of genetic information on all plants to increase food security and unlock plant innovation for generations to come. Collecting, sequencing, decoding, and cataloging the nation’s plant species will fill a key gap in our national natural capital accounting strategy. The PGP will complement existing conservation initiatives led by the Office of Science and Technology Policy (OSTP) and other agencies, by deepening our understanding of America’s unique biodiversity and its potential benefits to society. Such research and innovation investment would also benefit government initiatives like USAID’s Feed the Future (FTF) Initiative, particularly the Global Food Security Research Strategy, around climate-smart agriculture and genetic diversity of crops.
PGP-driven advancements in genomic technology and information about U.S. plant genetic diversity will create opportunities to grow the U.S. bioeconomy, create new jobs, and incentivize industry investment. The PGP will also create opportunities to make our food system more climate-resilient and improve national health and well-being. By extending this effort internationally, and ensuring that the Global South is empowered to contribute to and take advantage of these genetic advancements, we can help mitigate climate change, enhance global food security, and promote equitable plant science innovation.
Plan of Action
The Biden Administration should launch a Plant Genome Project to support and enable a whole-of-government approach to advancing plant genomics and the bioeconomy. The PGP will build a comprehensive, open-access dataset of genetic and biological information on all plant species, starting with the 7,000 plant species that have historically been cultivated for food and prioritizing plants that are endangered by climate change and habitat loss. The PGP will convene key stakeholders and technical talent in a novel coalition of partnerships across public and private sectors. We anticipate that the PGP, like the Human Genome Project, will jump-start new technologies that will further drive down the cost of sequencing and advance a new era for plant science innovation and the U.S. bioeconomy. Our plan envisions two Phases and seven Key Actions.
Phase 1: PGP Planning and Formation
Action 1: Create the Plant Genomics and U.S. Bioeconomy Interagency Working Group
The White House OSTP should convene a Plant Genomics and U.S. Bioeconomy Interagency Working Group to coordinate the creation of a Plant Genome Project and initiate consultations with industry, academic, philanthropic, and social-sector partners. The Working Group should include representatives from OSTP, the U.S. Department of Agriculture (USDA) and its Agricultural Research Service (ARS), the National Plant Germplasm System (NPGS), the Department of Commerce, the Department of the Interior, the National Science Foundation (NSF), the National Institutes of Health (NIH), the Smithsonian Institution, the Environmental Protection Agency, the State Department’s Office of Science and Technology Adviser, and USAID’s Feed the Future Initiative. The Working Group should:
- Identify experts and resources to enable the PGP and work with multi-sector entities, including within USDA, to identify sources of seeds/plants in the United States.
- Conduct a kickoff meeting with OSTP and identify a team that includes NPGS representatives to inventory existing resources, coordinate seed collection efforts, and create connectivity with the PGP.
- Provide recommendations on working with international institutes in the Global North (e.g., Global Biodiversity Information Facility and Earth BioGenome Project) and the Global South (e.g., The African BioGenome Project, the International Potato Center and others). The Earth BioGenome Project’s work on green plants and initial genomic quality standards offers potential starting points for collaboration.
- Create recommendations for the nation’s first Plant Genome Research Institute to drive initial and future efforts in obtaining plant genome information and accelerating innovative research in plant genomics.
Action 2: Launch a White House Summit on Plant Genomics Innovation and Food Security
The Biden Administration should bring together multi-sector (agriculture industry, farmers, academics, and philanthropy) and agency partners with the expertise, resource access, and interest in increasing domestic food security and climate resilience. The Summit will secure commitments for the PGP’s initial activities and identify ways to harmonize existing data and advances in plant genomics. The Summit and follow-up activities should outline the steps that the Working Group will take to identify, combine, and encourage the distribution and access of existing plant genome data. Since public-private partnerships play a core enabling role in the strategy, the Summit should also determine opportunities for potential partners, novel financing through philanthropy, and international cooperation.
Action 3: Convene Potential International Collaborators and Partners
International cooperation should be explored from the start (beginning with the Working Group and the White House Summit) to ensure that sequencing is conducted not just at a handful of institutions in the Global North but that countries in the Global South are included and all information is made publicly available.
- During the annual UN General Assembly Summit, OSTP should convene a forum of key leaders across multiple countries, international NGOs, Fortune 1000 companies, and academia.
- This forum will drive international public-private commitments to action that support the launch of the PGP.
- This forum should produce a yearly report (the first of its kind) on progress at the intersection of technological and data-driven advances in plant and crop innovation, preserving plant biodiversity, ending hunger, achieving food security, and improving nutrition.
- This work could culminate in a flagship announcement of new commitments tied to the UN’s 2030 Agenda for Sustainable Development. Various champions and experts on The Nagoya Protocol, Convention on Biological Diversity (CBD), and others working with The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) should be included.
We envision at least one comprehensive germplasm seed bank in each country or geographical region, similar to the Svalbard seed vault or the Royal Botanic Gardens, Kew, with sequencing contributions from multiple international organizations such as the Beijing Genomics Institute and the Sanger Institute.
Phase 2: PGP Formalization and Launch
Action 4: Launch the Plant Genome Research Institute to centralize and coordinate plant genome sequencing
Congress should create a Plant Genome Research Institute (PGRI) that will drive plant genomics research and be the central owner of U.S. government activities. The PGRI would centralize funding and U.S. government ownership over the PGP. We anticipate the PGP would require $2.5 billion over 10 years, with investment frontloaded and funding raised through matched commitments from philanthropy, public, and private sources. The PGRI could be a virtual institute structured as a distributed collaboration between multiple universities and research centers with centralized project management. PGRI funding could also incorporate novel funding mechanisms akin to the BRAIN Initiative through U.S. philanthropy and private sector collaboration (e.g., Science Philanthropy Alliance). The PGRI would:
- Identify key strategic public and private partners to join the coalition that prioritizes and undertakes the sequencing projects.
- Coordinate sequencing that will be conducted at sequencing centers funded by the PGRI while ensuring that all current consortia and initiatives for plant genome sequencing are included and connected.
- Define metrics for gene sequencing (e.g., accuracy, capacity, and cost of finished sequence), genome assembly, and genetic/physical map creation.
- Engage with industry providers of novel sequencing technology to bring down costs.
- Develop the final operational plan with timelines and funders outside the U.S. government in philanthropy and the private sector.
- Promote the development of novel bioinformatic and computational tools to facilitate gene assembly in polyploid plant genomes and view reference genomes of various plant species and varieties.
- Based on recommendations from the Working Group, select the agency or offices best positioned to house and maintain the data.
- Implement FAIR standards for data storage and dissemination and ensure an open-access, user-friendly interface for the final software platform. This could be achieved through a current database such as GenBank.
- Run an open challenge to share existing genome sequence data that is currently not publicly available and ensure that receiving centers undertake appropriate validation and quality control of all imported data.
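The validation and quality-control step above can be made concrete. The sketch below is a minimal, illustrative Python check for imported sequence records; the field names, identifier scheme, and length threshold are assumptions for illustration, not PGP specifications:

```python
# Illustrative quality-control check for imported plant sequence records.
# Field names and thresholds are hypothetical, not PGP specifications.

VALID_BASES = set("ACGTNacgtn")

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes QC."""
    problems = []
    # Require minimal FAIR-style metadata alongside the raw sequence.
    for field in ("accession", "species", "sequence", "source"):
        if not record.get(field):
            problems.append(f"missing field: {field}")
    seq = record.get("sequence", "")
    if seq and not set(seq) <= VALID_BASES:
        problems.append("sequence contains non-nucleotide characters")
    if seq and len(seq) < 1000:  # hypothetical minimum contig length
        problems.append("sequence shorter than minimum length")
    return problems

record = {
    "accession": "PGP-000001",   # hypothetical identifier scheme
    "species": "Zea mays",
    "sequence": "ACGT" * 300,    # 1,200 bp toy sequence
    "source": "partner-lab-upload",
}
print(validate_record(record))  # → [] (record passes)
```

In practice, receiving centers would layer far richer checks (assembly quality metrics, contamination screens, taxonomy verification) on top of this kind of gatekeeping.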
Action 5: Expand and Strengthen NPGS-Managed Seed Repositories
We recommend strengthening the distributed seed repository managed by the U.S. National Plant Germplasm System and building a comprehensive and open-source catalog of plant genetic information tied to physical samples. The NPGS already stores seed collections at state land-grant universities in a collaborative effort to safeguard the genetic diversity of agriculturally important plants and may need additional funding to expand its work and increase visibility and access.
- Bring in new partnerships, funding, and technical expertise from the private sector, a major user of the NPGS collections and the primary means by which new and improved plants are commercialized.
- Provide funding to create structured and highly annotated datasets of seed profiles with taxonomic data and other criteria such as phenotypic/physical attributes, local usage, commercial characteristics, and rarity.
- Automatically feed data from all new and existing germplasm repositories into the PGP, linking existing physical germplasm data to novel genetic data and connecting genomic and genetic data with taxonomic information.
- Invite computer scientists to develop novel data-driven algorithms and machine-learning models incorporating newly collected genomic data to identify plant varieties that might be especially climate resilient. (This potential innovation has been demonstrated in machine-vision research involving digitized herbarium specimens).
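As a toy illustration of the kind of model such a collaboration might start from, the sketch below classifies candidate varieties by their distance to class centroids in a two-trait space. The trait scores, labels, and nearest-centroid rule are illustrative assumptions; a real effort would use genomic features at far larger scale:

```python
# Toy illustration of data-driven screening for climate resilience.
# Trait vectors, labels, and the nearest-centroid rule are purely
# illustrative assumptions, not a proposed PGP methodology.
import math

# Hypothetical (drought_tolerance, heat_tolerance) scores for known varieties.
training = {
    "resilient":     [(0.9, 0.8), (0.8, 0.9), (0.85, 0.7)],
    "not_resilient": [(0.2, 0.3), (0.3, 0.1), (0.1, 0.2)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {label: centroid(pts) for label, pts in training.items()}

def classify(traits):
    """Assign a candidate variety to the class with the nearest centroid."""
    return min(CENTROIDS, key=lambda label: math.dist(traits, CENTROIDS[label]))

print(classify((0.75, 0.8)))  # → resilient
print(classify((0.25, 0.2)))  # → not_resilient
```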
Action 6: Create a Plant Innovation Fund within AgARDA
The Agriculture Advanced Research and Development Authority (AgARDA) is a USDA-based advanced research projects agency modeled on DARPA but focused on agricultural research. The 2018 Farm Bill authorized AgARDA’s creation to tackle highly ambitious projects likely to have an outsize impact on agricultural and environmental challenges, such as the PGP. The existing AgARDA Roadmap could guide program setup.
- The Administration should launch BioDRIVe, a $100 million public natural assets fund targeted at plant innovation and biodiversity gains inspired by the DRIVe program focused on preventing future pandemics. If traditional federal funding mechanisms are inadequate, such data-driven investment vehicles/flexible funding tools could also include philanthropic funding mechanisms and be integrated within the PGRI. This would promote public-private partnerships and de-risk promising technologies that drive innovation for food security and biodiversity conservation.
- Through its RFP process, AgARDA could help drive key agricultural innovations arising from the PGP, such as creating stronger, more climate-resilient, disease-resistant, or nutritionally superior plants and identifying plants that can break down pollutants or produce novel enzymes and products. AgARDA is ready to go as soon as it receives funding through the annual congressional appropriations process.
Phase 3: Long-Term, Tandem Bioeconomy Investments
Action 7: Bioeconomy Workforce Development and Plant Science Education
Invest in plant science and technical workforce development to build a sustainable foundation for global plant innovation and enable long-term growth in the U.S. bioeconomy.
- Use the PGRI as a platform for a renewed focus on training world-class botanists, plant breeders, horticulturists, and agronomists. A networked effort to train the next generation of plant scientists, similar to the 100Kin10 initiative that successfully trained 100,000 new STEM teachers in 10 years, could be very useful. Funding could be targeted to scholarships in these areas at universities, community colleges, and scientific associations such as the American Society of Agronomy. We also recommend increasing emphasis on plant science in K-12 education.
- Build stronger ties between the plant science industry and the engineering workforce to support the growing data and technology needs for plant science research. This could include bringing in science and engineering fellows from existing fellowship programs to help build new software and data science tools for plant science.
- Launch a fellowship program in partnership with NSF, USDA, and scientific associations such as the American Association for the Advancement of Science or the American Society of Plant Biologists for talented plant biologists, agricultural researchers, and data and software engineers to serve a yearlong “tour of duty” in public service, where they would work internationally to collect, maintain, and expand the plant genome database. Existing and new repositories could benefit from this talent pool, and these cohorts of fellows would disseminate knowledge of the database throughout their careers, helping to achieve adoption at scale.
Conclusion
We are in a race against time to identify, decode, catalog, preserve, and cultivate the critical biodiversity of the world’s plant species before they are lost forever. By creating the world’s first comprehensive, open-access catalog of plant genetic information tied to physical samples, the Plant Genome Project will unlock plant innovation for food security, help preserve plant biodiversity in a changing climate, and advance the bioeconomy. The PGP’s whole-of-government approach will accelerate a global effort to secure our food systems and the health of the planet while catalyzing a new era of plant science, agricultural innovation, and cooperation.
Frequently Asked Questions
We estimate that it would cost ~$2.5 billion to sequence the genomes of all plant species. (For reference, the Human Genome Project cost $5.4 billion, in 2017 dollars, to sequence just one species.)
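As a back-of-envelope check on this estimate, assuming roughly 350,000 described plant species (an approximation used here for illustration; counts vary by source), the implied sequencing budget per genome is:

```python
# Back-of-envelope cost per genome. The species count is an assumption
# for illustration; the true number of described plant species is debated.
total_budget = 2.5e9       # ~$2.5 billion, per the PGP estimate
species_count = 350_000    # assumed number of described plant species

cost_per_genome = total_budget / species_count
print(f"${cost_per_genome:,.0f} per genome")  # → $7,143 per genome
```

A per-genome budget in the low thousands of dollars is consistent with the memo’s expectation that the PGP would drive sequencing costs down further.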
We recommend active solicitation of existing sequence information from all entities. This data should be validated and quality-controlled before being integrated into the PGP.
The newly created Plant Genome Research Institute (PGRI) will coordinate the PGP. The structure and operations of the PGRI will follow recommendations from the OSTP-commissioned Stakeholder Working Group. All work will be conducted in partnership with agencies such as the U.S. Department of Agriculture, the National Institutes of Health, and the National Science Foundation, as well as private companies and public academic institutions.
Existing sequencing efforts and seed banks will be included within the framework of the PGP.
The PGP will start as a national initiative, but to have the greatest impact it must be an international effort like the Human Genome Project. The White House Summit and Stakeholder Working Group will help influence scope and staging. The extinction crisis is a global problem, so the PGP should be a global effort in which the United States plays a strong leadership role.
In Phase 1, emphasis might be placed on native “lost crops” that can be grown in areas that are suffering from drought or are affected by climate change. Collection and selection would complement and incorporate active Biden Administration initiatives that center Indigenous science and environmental justice and equity.
In Phase 2, efforts could focus on sequencing all plants in regions or ecosystems within the U.S. that are vulnerable to adverse climate events in collaboration with existing state-level and university programs. An example is the California Conservation Genomics Project, which aims to sequence all the threatened, endangered and commercially exploited flora and fauna of California. Edible and endangered plants will be prioritized, followed by other plants in these ecosystems.
In Phase 3, all remaining plant species will be sequenced.
All collected seeds will be added to secure, distributed physical repositories, with priority given to collecting physical samples and genetic data from endangered species.
The PGP will work to address and even correct some long-standing inequalities, ensuring that the rights and interests of all nations and Indigenous people are respected in multiple areas from specimen collection to benefit sharing while ensuring open access to genomic information. The foundational work being done by the Earth BioGenome Project’s Ethical, Legal and Social Committee will be critically important.
Invitees could include but would not be limited to the following entities with corresponding initial commitments to support the PGP’s launch:
- Genome sequencing companies, such as Illumina, PacBio, Oxford Nanopore Technologies, and others, who would draft a white paper on the current landscape for sequencing technologies and innovation that would be needed to enable a PGP.
- Academic institutions with active sequencing core facilities such as the University of California, Davis and Washington University in St. Louis, among others, who would communicate existing capacity for PGP efforts and forecast additional capacity-building needs, summarize strengths of each entity and past contributions, and identify key thought leaders in the space.
- Large ag companies, such as Bayer Crop Science, Syngenta, Corteva, and others, who are willing to share proprietary sequence information, communicate industry perspectives, identify obstacles to data sharing and potential solutions, and actively participate in the PGP and potentially provide resources.
- Government agencies and public institutions such as NIH/NCBI, NSF, USDA, Foundation for Food and Agriculture Research, CGIAR, Missouri Botanical Garden, would draft white papers communicating existing efforts and funding, identify funding gaps, and assess current and future collaborations.
- Current sequencing groups/consortiums, such as the Wheat Genome Sequencing Consortium, Earth BioGenome Project, Open Green Genomes Project, HudsonAlpha, and others, would draft white papers communicating existing efforts and funding needs, identify gaps, and plan for data connectivity.
- Tech companies, such as Google and Microsoft, could communicate existing efforts and technologies, assess the potential for new technologies and tools to accelerate PGP, curate data, and provide support such as talent in the fields of data science and software engineering.
The STEMpathy Task Force: Creating a Generation of Culturally Competent STEM Professionals
Summary
Science, technology, engineering, and mathematics (STEM) are powerful levers for improving the quality of life for everyone in the United States. Closing the gap between STEM’s transformative potential and its current impact on structural societal problems starts in the high school classroom.
Teachers play a critical role in fostering student cultural awareness and competency. Research demonstrates that teachers and students alike are eager to effect progress on issues related to diversity, equity, inclusion, and accessibility (DEIA). Educational research also demonstrates that DEIA and empathy enhance students’ sense of belonging and persistence in professional STEM pathways. However, formal STEM learning experiences lack opportunities for students to practice cultural competency and explore applications of STEM to social justice issues.
Cultural Competency is the ability to understand, empathize, and communicate with others as part of a diverse community.
The Biden-Harris Administration should establish the STEMpathy Task Force to aid high school STEM teachers in establishing cultural competency as an overarching learning goal. Through this action, the Administration would signal the prioritization of STEM equity—reflected in both the classroom and the broader community—across the United States. The program would address two pertinent issues in the STEM pipeline: the lack of momentum in STEM workforce diversification and STEM’s unfulfilled promise to relieve our society of systems of oppression and bias. Students need to be taught not only the scientific method and scientific discourse, but also how to approach their science in a manner that best uplifts all people.
Challenge & Opportunity
In a 2017 survey, over 1,900 U.S. companies listed the ability to work effectively with customers, clients, and businesses from a range of different countries and cultures as a critical skill. Since then, the importance of cultural competency in the U.S. workforce has become increasingly apparent.
Culturally competent workers are more creative and better equipped to solve complex problems. For example, foresters have managed wildfires by following the instruction and guidance of tribal nations and traditional ecological knowledge. Engineers have designed infrastructure that lowers the water bills of farmers in drought-stricken areas. Public health representatives have assuaged concerns about COVID-19 vaccines in under-served communities. STEM professionals who improve Americans’ quality of life do so by collaborating and communicating with people from diverse backgrounds. When students can see these intersections between STEM and social change, they understand that STEM is not limited to a classroom, lab, or field activity but is also a tool for community building and societal progress.
Today’s middle and high school students are increasingly concerned about issues around race/ethnicity, gender, and equity. Recent college graduates also share these interests, and many demonstrate a growing desire to participate in meaningful work and to pursue social careers. When students realize that STEM fields are compatible with their passion for topics related to identity and social inequities, they are more likely to pursue STEM careers—and stick with them. This is the way to create a generation of professionals who act with STEMpathy.
To unite STEM subjects with themes of social progress, cultural competency must become a critical component of STEM education. Under this framework, teachers would use curricula to address systemic social inequities and augment learning by drawing from students’ personal experiences (Box 1). This focus would align with ongoing efforts to promote project-based learning, social-emotional learning, and career and technical education in classrooms across the United States.
Box 1. American high school STEM students will demonstrate an understanding of and empathy for how people from varied backgrounds are affected by environmental and social issues.
- An environmental sciences student in California understands the risks posed by solar farms to agricultural production in the Midwest. They seek to design solar panels that do not disrupt soil drainage systems and that financially benefit farmers.
- An astronomy student in Florida empathizes with Indigenous Hawaiians who are fighting against the construction of a massive telescope on their land. The student signs petitions to prevent the telescope from being built.
- A chemistry student in Texas learns that many immigrants struggle to understand healthcare professionals. They volunteer as a translator in their local clinic.
- A computer science student in Georgia discovers that many fellow residents do not know when or where to vote. They develop a chatbot that reminds their neighbors of polling place information.
With such changes to the STEM lessons, the average U.S. high school graduate would have both a stronger sense of community within STEM classrooms and the capacity to operate at a professional level in intercultural contexts. STEM classroom culture would shift accordingly to empower and amplify diverse perspectives and redefine STEM as a common good in the service of advancing society.
Plan of Action
Through an executive order, the Biden-Harris Administration should create a STEMpathy Task Force committed to building values of inclusion and public service into the United States’ STEM workforce. The task force would assist U.S. high schools in producing college- and career-ready, culturally competent STEM students. The intended outcomes are a 20 percent increase in the likelihood that students of color and female- and nonbinary-identifying students pursue a college degree in a STEM field, and for at least 40 percent of surveyed U.S. high school students to demonstrate awareness and understanding of cultural competence skills. Both outcomes should be measured using National Center for Education Research data 5–10 years after the task force is established.
The STEMpathy Task Force would be coordinated by the Subcommittee on Federal Coordination in STEM Education (FC-STEM) of the White House Office of Science and Technology Policy (OSTP). The interagency working group would partner with education-focused organizations, research institutions, and philanthropic foundations to achieve its goals (FAQ #6). These partnerships would allow the White House to draw upon expertise within the STEM education sphere to address the following priorities:
- Publish guides on cultural-competency-oriented learning goals for STEM students that comply with STEM curricula and standards frameworks, as well as on suggested assessments for measuring student achievement in cultural competency skills.
- Issue nonregulatory guidance on federal funding streams for paid professional development opportunities that improve teachers’ ability to teach students to apply STEM concepts to public service projects.
- Consider adding cultural competency assessments and measures into federally funded programs such as the What Works Clearinghouse, Blue Ribbon Schools Program, and the National Assessment of Educational Progress science questionnaire.
- Highlight and reward educators and schools that demonstrate high student achievement in science and cultural competence skills.
Working toward these priorities will equip the next generation of STEM professionals with cultural competence skills. The task force will identify and promote effective STEM teaching methods that yield measurable improvements in the diversity of STEM majors and in career readiness.
This approach meets the objectives of existing federal STEM education efforts without imposing classroom standards on U.S. educators. In the Federal STEM Education Strategic Plan, the Committee on Science, Technology, Engineering, and Math Education (Co-STEM) aims to (1) increase work-based learning and training, (2) lend successful practices from across the learning landscape, and (3) encourage transdisciplinary learning. The Department of Education also prioritizes the professional development of educators to strengthen student learning, as well as meet students’ social, emotional, and academic needs. In these ways, the STEMpathy Task Force furthers the Administration’s education goals.
Conclusion
Current national frameworks for high school STEM learning do not provide students with a strong sense of belonging or an awareness of how STEM can be leveraged to alleviate social inequities. The STEMpathy Task Force would establish a rigorous, adaptable framework to address these challenges head-on and ensure that the United States provides high school students with inclusive, hands-on science classrooms that prepare them to serve the diverse communities of their country. Following the implementation of the STEMpathy Task Force, the Biden-Harris Administration can expect to see (1) an increase in the number and diversity of students pursuing STEM degrees, (2) a reduction in race/ethnicity- and gender-based gaps in the STEM workforce, and (3) an increase in STEM innovations that solve critical challenges for communities across the United States.
Frequently Asked Questions
In any team setting, students will function effectively and with empathy. They will interact respectfully with people from varied cultural backgrounds. To achieve these behavioral goals, students will learn three key skills, as outlined by the Nebraska Extension NebGuide:
- Increasing cultural and global knowledge. Students understand the historical background of current events, including relevant cultural practices, values, and beliefs. They know how to ask open-minded, open-ended questions to learn more information.
- Self-assessment. Students reflect critically on their biases to engage with others. They understand how their life experience may differ from others based on their identity.
- Active Listening. Students listen for the total meaning of a person’s message. They avoid mental chatter about how they will respond to a person or question, and they do not jump directly to giving advice or offering solutions.
No, states will not be required to adopt new standards or curricula. Although the task force will conduct research on STEM- and cultural-competency-related learning standards and lesson plans, OSTP will not create incentives or regulations to force states to adopt them. The task force will work within existing, approved educational systems to advance the goals of the Department of Education and the Committee on Science, Technology, Engineering, and Math Education (Co-STEM).
As observed during recent efforts to teach American students about structural racism and systemic inequality, some parents may find topics pertaining to diversity, equity, inclusion, and accessibility sensitive. The STEMpathy Task Force’s cultural competency-focused efforts, however, are primarily related to empathy and public service. These values are upheld by constituents and their representatives regardless of political leaning. As such, the STEMpathy Task Force may be understood as a bipartisan effort to advance innovation and the economic competitiveness of U.S. graduates.
Another associated risk is the burden created for teachers to incorporate new material into their already-packed schedules and lesson plans. Many teachers are leaving their jobs due to the stressful post-pandemic classroom environment, as well as the imbalance between their paychecks and the strain and value of their work. These concerns may be addressed through the STEMpathy Task Force’s objectives of paid training and rewards systems for educators who model effective teaching methods for others. In these ways, teachers may receive just compensation for their efforts in supporting both their students and the country’s STEM workforce.
In its first two years, the STEMpathy Task Force would complete the following:
- Revise FC-STEM’s “Best Practices for Diversity and Inclusion in STEM Education and Research” guide to include information on evidence-based or emerging practices that promote cultural competence skills in the STEM classroom.
- Train 500+ teachers across the nation to employ teaching strategies and curricula that improve the cultural competence skills of STEM students.
In the next two years, further progress would be made on the following:
- Measure the efficacy of the teacher training program by assessing ~10,000 students’ cultural competence skill development, STEM interest retention and performance, and classroom sense of belonging.
- Reward/recognize 100 schools for high achievement in cultural competency development.
STEM subjects and professionals have the greatest potential to mitigate inequities in American society. Consider the following examples wherein marginalized communities would benefit from STEM professionals who act with cultural competency while working alongside or separate from decision-makers:
- Native Hawaiians aim to protect their land from a telescope that may be built elsewhere
- Women and non-binary people who require precision medicine face built-in biases from biomedical technologies
- Defendants of color are more likely to be wrongly labeled as “high-risk” than white defendants at bail hearings
- Low-income neighborhoods aim to promote healthy eating and skill building by designing an urban farm
- Transgender individuals require specialized, destigmatized healthcare
Furthermore, although the number of STEM jobs in the United States has grown by 7.6 million since 1990, the STEM workforce has been very slow to diversify. Over the past 30 years, the proportion of Black STEM workers increased by only 2 percent and that of Latinx STEM workers by only 3 percent. Women hold only 15 percent of direct science and engineering jobs. LGBTQ male students are 17 percent more likely to leave STEM fields than their heterosexual counterparts.
Hundreds of professional networks, after-school programs, and nonprofit organizations have attempted to tackle these issues by targeting students of color and female-identifying students within STEM. While these commendable efforts have had a profound impact on many individuals’ lives, they are not providing the sweeping, transformative change that could promote not only diversity in the STEM workforce but a generation of STEM professionals who actively participate in helping diverse communities across the United States.
Based on the president’s budget for ongoing STEM-related programming, we estimate that the interagency task force would require approximately $100 million. This amount would be divided across participating agencies for STEMpathy Task Force programming.
The STEMpathy Task Force must combine interagency expertise with that of nongovernmental organizations such as educational nonprofits, research institutions, and philanthropic foundations.
Pandemic Readiness Requires Bold Federal Financing for Vaccines
Summary
Most people will experience a severe pandemic within their lifetime, and the world remains dangerously unprepared. In fact, scientists predict a nearly 50% chance––the same probability as flipping heads or tails on a coin––that we will endure another COVID-19-level pandemic within the next 25 years. Shifting America’s pandemic response capability from reactive to proactive is, therefore, urgent. Failure to do so risks the country’s welfare.
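The cited 50%-in-25-years figure can be translated into an implied constant annual probability, a useful framing for annual preparedness budgeting. The calculation below assumes the risk is constant and independent across years:

```python
# A ~50% chance of a COVID-19-scale pandemic within 25 years implies a
# constant annual probability p satisfying 1 - (1 - p)**25 = 0.5.
# The independence-across-years assumption is a simplification.
horizon_years = 25
cumulative_prob = 0.5

annual_prob = 1 - (1 - cumulative_prob) ** (1 / horizon_years)
print(f"implied annual probability: {annual_prob:.1%}")  # → about 2.7% per year
```

Framed this way, every budget year carries a non-trivial chance of catastrophic pandemic onset, which strengthens the case for proactive rather than reactive financing.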
Getting ahead of the next pandemic is impossible without government financing. Vaccine production is costly, and these expenses deter manufacturers from preemptively developing the tools needed to halt disease transmission. For example, the total expected revenues over a 20-year vaccine patent lifecycle would cover just half of the upfront research and development (R&D) costs.
However, research suggests that a portfolio-based approach to vaccine development — especially now with new, broadly applicable mRNA technology — dramatically increases the returns on investment while also guarding against an estimated 31 of the next 45 epidemic outbreaks. With lessons learned from Operation Warp Speed, Congress can deploy this approach by (i) authorizing and appropriating $10 billion to the Biomedical Advanced Research and Development Authority (BARDA), (ii) developing a vaccine portfolio for 10 emerging infectious diseases (EIDs), and (iii) launching a White House Office of Science and Technology Policy (OSTP)-led interagency effort focused on scaling up production of priority vaccines.
Challenge & Opportunity
The COVID-19 pandemic continues to wreak havoc across the world, with an ongoing total cost of $16 trillion and more than 6 million dead. Three conditions increase the likelihood that we will experience another pandemic that is just as disastrous:
- New outbreaks of infectious diseases are emerging due to population growth, increased zoonotic transmission from animals, habitat loss, climate change, and more. Over 1.6 million yet-to-be-discovered, human-infecting viral species are thought to exist in mammals and birds.
- More laboratories are handling dangerous pathogens around the world, which increases the likelihood of an accidental contagion release.
- It is easier than ever to purchase biotechnologies once reserved only for scientists. Consequently, malign actors now have more resources to develop a human-engineered bioweapon.
The United States and the rest of the world are still woefully unprepared for future pandemic or epidemic threats. The lack of progress is largely due to little to no vaccine development for these six EIDs, all of which have pandemic potential:
- Middle East respiratory syndrome coronavirus (MERS-CoV)
- Lassa fever virus
- Nipah virus
- Rift Valley fever virus
- Chikungunya virus
- Ebola virus
Failure to produce and supply vaccine doses to Americans could undermine the U.S. government’s response to a crisis, as illustrated by the recent monkeypox response. The federal government invested in a new monkeypox vaccine with a significantly longer shelf life. While focused on this effort, it failed to replace its existing vaccine stockpile as it expired, leaving the American population woefully unprepared when the outbreak arrived.
An immediate national strategy is needed to course correct, the beginnings of which are articulated in the recent plan for American Pandemic Preparedness: Transforming our Capabilities. These overarching concerns were also echoed in a bipartisan letter from the Senate Health, Education, Labor, and Pensions and Armed Services Committees, urging the Biden Administration to re-establish a “2.0” version of Operation Warp Speed (OWS)––the government’s prior effort to accelerate COVID-19 vaccine production.
The President’s recent FY23 Budget advocates for a historic pandemic preparedness investment. The plan allocates nearly $40 billion to the Department of Health and Human Services Assistant Secretary for Preparedness and Response to “invest in advanced development and manufacturing of countermeasures for high priority threats and viral families, including vaccines, therapeutics, diagnostics, and personal protective equipment.” BARDA also declared the need to prepare prototype vaccines for virus families with pandemic potential and has included such investments in its most recent strategic plan. And the recent executive order on biomanufacturing calls for increased “piloting and prototyping efforts in biotechnology and biomanufacturing to accelerate the translation of basic research results into practice.”
Robust federal investment in America’s vaccine industry is especially needed since––as demonstrated by COVID-19––industries garner minimal profit from vaccine development before or during a widespread outbreak. A recent study predicted that in the unlikely scenario where 10 million vaccines are manufactured during a crisis response, pharmaceutical companies can expect to recoup only half of the upfront R&D costs. The same research states that “new drug development has become slower, more expensive, and less likely to succeed” because:
- The probability of developing a successful vaccine candidate is low.
- A lengthy investment time (i.e., a long investment horizon) is required before selling for profit is possible.
- Clinical trials are very expensive.
- To justify and overcome all costs, a high financial return is needed (i.e., there is a high cost of capital).
With clinical costs accounting for 96% of total investment, companies have a weak financial justification for investing in risky vaccine research.
To minimize these uncertainties and improve investment returns for vaccine and therapeutic production, the federal government should embrace two key lessons from OWS:
- Guaranteed government demand enables the pursuit of innovative, speedy, and effective vaccine R&D. OWS selected companies pursuing different scientific methods to develop a vaccine, each of which possessed breakthrough potential. Moderna and Pfizer/BioNTech utilized mRNA, AstraZeneca and Janssen worked with replication-defective live vectors, and Novavax and Sanofi/GSK utilized a recombinant protein. Merck is working on a live attenuated virus that may be given orally. By frequently evaluating vaccine candidates, scientists ensured that only the most promising contenders continued to subsequent regulatory phases. This workflow dramatically expedited vaccine development. Relatedly, companies were able to invest in large-scale vaccine manufacturing during clinical trials thanks to government financial support. They not only received guaranteed investment installments, but also advanced commitments to purchase vaccines. This significantly decreased the financial risk and saved tremendous amounts of time and resources.
- Public-private partnerships utilize incentives and rewards to foster highly effective and dynamic teams. OWS created a “unique distribution of responsibilities … based upon core competencies rather than on political or financial considerations.” The interests of eight pharmaceutical companies were aligned based on the potential to receive an upfront commitment from the federal government to bulk purchase vaccines. Such approaches are critical to ensuring vaccine R&D not only happens in an efficient, coordinated manner but also that such R&D yields production at scale. Moreover, it enabled a suite of approaches to vaccine development rather than one method, raising the overall probability of developing a successful vaccine.
Repeating these lessons in subsequent EID vaccine developments would generate both significant returns on investment and benefits to society.
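The portfolio logic above can be sketched with a simple probability calculation: if each candidate’s chance of success is roughly independent, the chance that at least one vaccine succeeds grows quickly with the number of distinct approaches funded. A minimal illustration in Python, using a hypothetical 10% per-candidate success rate (an assumption for illustration, not a figure from the memo):

```python
# Illustrative sketch: funding several independent scientific approaches raises
# the chance that at least one vaccine succeeds. The 10% per-candidate success
# probability below is a hypothetical assumption, not a figure from the memo.

def p_at_least_one_success(p_single: float, n_candidates: int) -> float:
    """Probability that at least one of n independent candidates succeeds."""
    return 1 - (1 - p_single) ** n_candidates

p = 0.10  # assumed chance that any single vaccine candidate reaches approval
for n in (1, 4, 8):
    print(f"{n} candidate(s) -> {p_at_least_one_success(p, n):.0%} chance of at least one success")
```

Under this assumption, moving from one candidate to eight raises the portfolio’s success probability from 10% to well over 50%, which is why a suite of approaches beats betting on a single method.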
Plan of Action
By incentivizing vaccine development for priority EIDs, the federal government can preemptively solve market failures without picking winners or losers.
First, Congress should authorize and appropriate $10 billion to BARDA over 10 years to create a Dynamic Vaccine Development Fund. This fund would build on BARDA’s unique competencies as an engagement platform with the private sector, leaving room for new vaccine developments to emerge.
It would also enact the following strategies, gleaned from OWS, all of which were proven effective:
- Making advanced market commitments to purchase large quantities of vaccines in the event of an outbreak.
- Ensuring steady incremental progress in combatting the most dangerous EIDs.
- Supporting manufacturing and distribution facilities.
- Providing limited government guarantees, equities, and securities to investors funding vaccine programs for a pre-specified list of priority diseases.
As illustrated by its successful history, BARDA is well-positioned to manage a large-scale vaccine initiative. Last year, BARDA announced the first venture capital partnership with the Global Health Investment Corporation to “allow direct linkage with the investment community and establish sustained and long-term efforts to identify, nurture, and commercialize technologies that aid the U.S. in responding effectively to future health security threats.” During the COVID-19 pandemic, BARDA and Janssen shared the R&D costs to help move Janssen’s investigational novel coronavirus vaccine into clinical evaluation—a collaboration supported by their previous successes on the Ebola vaccine. The Government Accountability Office reported that BARDA had also supported scaled production by identifying additional manufacturing partners. This partnership record shows that BARDA not only knows how to manage global health projects to completion but also is particularly adept at interfacing with the private sector. As such, it stands out as an ideal manager for the Dynamic Vaccine Development Fund.
With $10 billion, this Fund could not only support the vaccine economy, but also save millions of lives and trillions of dollars. Although the price tag is admittedly hefty, it is reasonable. After all, OWS had a price tag of $12+ billion––a small investment compared to the $16+ trillion cost of COVID-19. As seen in OWS, the long-term benefits of upfront, robust financing are even more impactful. One back-of-the-envelope calculation suggests immense economic returns for the Fund:
- With a 50% chance of another $16 trillion COVID-like pandemic in the next 25 years, the expected cost over this timeframe is $8 trillion globally.
- The Fund is projected to guard against 31 of the next 45 epidemic outbreaks, implying a nearly 69% chance of preventing any given outbreak.
- A 69% chance of preventing an $8 trillion cost over the next 25 years would yield an expected value of roughly $5.5 trillion globally.
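The back-of-the-envelope figures above can be checked in a few lines of Python; all inputs are the memo’s own estimates (strictly, 31/45 ≈ 68.9% of the $8 trillion expected cost, or about $5.5 trillion):

```python
# Back-of-the-envelope expected-value check using only the memo's own figures.

pandemic_cost = 16e12   # estimated global cost of a COVID-19-scale pandemic (USD)
p_pandemic = 0.50       # chance of such a pandemic within the next 25 years

# Expected cost over the next 25 years: 50% of $16 trillion.
expected_cost = p_pandemic * pandemic_cost

# The portfolio is projected to guard against 31 of the next 45 outbreaks.
p_prevent = 31 / 45

# Expected value of prevention: ~69% of the $8 trillion expected cost.
expected_value = p_prevent * expected_cost

print(f"Expected pandemic cost: ${expected_cost / 1e12:.1f} trillion")
print(f"Prevention probability: {p_prevent:.1%}")
print(f"Expected value of Fund: ${expected_value / 1e12:.1f} trillion")
```

Even under far more conservative assumptions, the expected value dwarfs the $10 billion cost of the Fund by several orders of magnitude.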
A $10 billion down payment would allow the Fund to excel in its normal operations (see bulleted list above) and support up to 120 vaccine candidates. OWS spawned more than breakthrough R&D in the use of mRNA vaccine models; it also led to a health and biotechnology innovation windfall:
“Now that we know that mRNA vaccines work, there is no reason we could not start the process of developing those for the top 20 most likely pandemic pathogen prototypes.”
––Dr. Francis Collins, former director of the National Institutes of Health
Ten billion dollars would ensure the Fund’s impact could be similarly force-multiplied by private sector partnerships, with more time and opportunity for creative collaboration. The Fund’s purpose is to lower financial risks and attract large amounts of capital from the bond market, whose size exceeds that of the venture capital, public equity, and private equity markets. Indeed, there has been growing interest in the application of social bonds to pandemic preparedness as a unique instrument for rapidly frontloading resources from capital markets. Though this Fund will assume a different form, the International Finance Facility for Immunisation represents a proof of concept for coordinating philanthropic foundations, governments, and supranational organizations for the purpose of “raising money more quickly.” With seed capital, this Fund could provide a strong signal — and perhaps an anchor for coordination — to debt capital markets to make issuances for vaccines. To this end, the targeted critical mass of $10 billion is estimated to generate tremendous societal value by preventing future epidemic outbreaks as well as positive returns for investors.
Second, in executing Fund activities, BARDA should leverage investment strategies––such as milestone-based payments––to incentivize maximum vaccine innovation. When combatting EIDs, the U.S. will need as many vaccine options as possible. To facilitate this outcome, vaccine manufacturers should be rewarded for producing multiple kinds of vaccines at the same time. For example, BARDA might support the development of vaccines for a given EID by funding progress for four novel methods (e.g., mRNA, recombinant protein, gene-therapy, and live attenuated, orally-administered vaccines).
Furthermore, these rewards should come regularly during major events––or “milestones”––during development. Initial-stage milestones include vaccine candidates that protect an animal model against disease; later-stage milestones include human clinical trials. This financing model would provide companies with clear, short-term targets, reducing uncertainty and rewarding progress dynamically. Additionally, it would support the recent executive order, which calls for “increasing piloting and prototyping efforts in biotechnology and biomanufacturing to accelerate the translation of basic research results into practice.”
BARDA could expand the milestone-based financing mechanism further by employing early-stage challenges. In this scenario, it would fund only the first two of three candidates that successfully complete small-scale clinical trials. The final milestone stage––which should be offered to only a limited number of candidates––should provide an advanced market commitment to house completed vaccines within U.S. storage facilities, based on the interagency effort (described in the paragraph below). This selection process would retain sufficient competition throughout development while ensuring a sustainable method for scaling up certain vaccines based on mission priorities.
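One way to picture the milestone-and-down-selection mechanism is as a simple payout schedule. The milestone names, dollar amounts, and candidates below are hypothetical illustrations, not BARDA figures; the sketch assumes only the first two candidates clearing small-scale trials remain eligible for the final advanced market commitment:

```python
# Hypothetical sketch of milestone-based payouts with early-stage down-selection.
# Milestone names and dollar amounts are illustrative assumptions, not BARDA figures.

MILESTONES = [
    ("animal-model protection", 20e6),        # initial-stage milestone
    ("small-scale clinical trial", 50e6),     # later-stage milestone
    ("advanced market commitment", 200e6),    # final stage, finalists only
]

def disburse(candidates_passed: dict[str, int]) -> dict[str, float]:
    """Compute total payout per candidate from the number of milestones passed.

    Only the first two candidates to clear the clinical-trial stage remain
    eligible for the final advanced market commitment.
    """
    finalists = [c for c, n in candidates_passed.items() if n >= 2][:2]
    payouts = {}
    for candidate, n_passed in candidates_passed.items():
        if n_passed >= len(MILESTONES) and candidate not in finalists:
            n_passed = len(MILESTONES) - 1  # non-finalists are capped before the final stage
        payouts[candidate] = sum(amount for _, amount in MILESTONES[:n_passed])
    return payouts

# Example: three hypothetical candidates at different stages of development.
print(disburse({"mRNA": 3, "recombinant-protein": 2, "live-attenuated": 1}))
```

The design choice mirrors the memo’s logic: every candidate earns rewards for verifiable progress, but the largest commitment is reserved for a down-selected few, preserving competition without paying to scale every approach.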
Third, to support Fund activities towards late-stage clinical trials, the White House Office of Science and Technology Policy (OSTP) should coordinate a larger-scale interagency effort leveraging advanced market commitments, prize challenges, and other innovative procurement techniques. OSTP should be a coordinator across federal agencies that address pandemic preparedness, which might include: the Department of Defense, BARDA, the U.S. Agency for International Development, the National Institute of Allergy and Infectious Diseases, the Federal Emergency Management Agency, and the Development Finance Corporation. In doing so, the OSTP can (i) consolidate investments for particular vaccine candidates, and (ii) utilize networks and incentive strategies across the U.S. government to secure vaccines. Separately––and based on urgent priorities shared by agencies––OSTP should work closely with the Food and Drug Administration (FDA) to explore opportunities for pre-approval of vaccines as they develop through the trial phase.
Conclusion
Vaccines are among the most powerful tools for fighting pandemics. Unfortunately, bringing vaccines to market at scale is challenging. However, Operation Warp Speed (OWS) established a new precedent for tackling vaccine innovation market failures, laying the groundwork for a new era of industrial strategy. Congress should take advantage and supercharge U.S. pandemic preparedness by enabling the Biomedical Advanced Research and Development Authority (BARDA) to build a Dynamic Vaccine Development Fund. Embracing lessons learned from OWS, the Fund would incentivize companies to create vaccines for the six emerging infectious diseases most likely to cause the next pandemic.
The regulatory process for approving vaccines is even more reason to develop them ahead of time—before they are needed, rather than after an outbreak. Having access to an effective vaccine even days sooner can save thousands of lives, given the exponential spread of infectious disease. Moreover, the FDA approval process—especially its Emergency Use Authorization program—is highly efficient and is not the bottleneck for vaccine development. The main delay is the time it takes to conduct randomized clinical trials. Unfortunately, there are no shortcuts to this process if we want to ensure that vaccines are safe and effective. That is why we need to develop vaccines before pandemics occur. The idea is simply to develop a minimum viable product for each priority EID vaccine that positions it to rapidly scale in the event of a pandemic.
Several existing vaccine initiatives already use this portfolio strategy. To list a few:
- The Coalition for Epidemic Preparedness Innovations (CEPI) has a “megafund” vaccine portfolio (i.e., they have 32 vaccine candidates as of April 2022). This portfolio spans 13 different therapeutic mechanisms and five different stages of clinical development, from preclinical to “Emergency Use Listing” by the World Health Organization.
- BridgeBio and Roivant Sciences have used portfolio-based approaches for drug development.
- The National Brain Tumor Society is also leveraging this approach to finance novel drug candidates that can treat glioblastoma.
Ideally, vaccines in the final milestone stage would be stored in the United States, in line with new CDC guidance in the Vaccine Storage and Handling toolkit. This prevents the scenario where vaccines are held up in transit due to complex international negotiations and potentially expire during the lengthy proceedings. This exact scenario occurred when 300,000 doses of monkeypox vaccine held in a Denmark-based facility were slowly and inconsistently onshored back to the U.S.
In addition, vaccines that are financed through the Fund would not always be final products. Instead, they would potentially be at varying stages of development thanks to the milestone-based payment strategy and frequent progress reviews. This would make it easier for the federal government to closely coordinate vaccine development with manufacturing professionals and rapidly increase vaccine production if necessary. The strategy offered in this memo lowers the risk of a similar situation occurring again.
We recommend that the executive order on biomanufacturing continue exploring this issue and investigate ways to securely store completed vaccines. The Government Accountability Office, for example, recently suggested several promising and discrete changes to update the requirements and operations of the Strategic National Stockpile.
This list was derived from justifications published on CEPI’s website.
There are simply too many infectious diseases in nature, and most of them are too rare to pose a significant threat. It would be scientifically and financially impractical––and unnecessary––to develop vaccines against all of them. However, we can greatly increase our readiness by widening our scope and developing a library of prototyped vaccines based on the 25 viral families (as called for by CEPI). Doing so would allow us to respond quickly against even unlikely pandemic scenarios.
Public Value Evidence for Public Value Outcomes: Integrating Public Values into Federal Policymaking
Summary
The federal government––through efforts like the White House Year of Evidence for Action––has made a laudable push to ensure that policy decisions are grounded in empirical evidence. While these efforts acknowledge the importance of social, cultural, and Indigenous knowledges, they do not draw adequate attention to the challenges of generating, operationalizing, and integrating such evidence in routine policy and decision making. In particular, these endeavors are generally poor at incorporating the living and lived experiences, knowledge, and values of the public. This evidence—which we call evidence about public values—provides important insights for decision making and contributes to better policy or program designs and outcomes.
The federal government should broaden institutional capacity to collect and integrate evidence on public values into policy and decision making. Specifically, we propose that the White House Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP):
- Provide a directive on the importance of public value evidence.
- Develop an implementation roadmap for integrating public value evidence into federal operations (e.g., describing best practices for integrating it into federal decision making and developing skill-building opportunities for federal employees).
Challenge and Opportunity
Evidence about public values informs and improves policies and programs
Evidence about public values is, to put it most simply, information about what people prioritize, care about, or think about with respect to a particular issue, which may differ from ideas prioritized by experts. It includes data collected through focus groups, deliberations, citizen review panels, community-based research, and public opinion surveys. Some of these methods rely on one-way flows of information (e.g., surveys) while others prioritize mutual exchange of information among policy makers and participating publics (e.g., deliberations).
Agencies facing complex policymaking challenges can utilize evidence about public values––along with expert- and evaluation-based evidence––to ensure decisions truly serve the broader public good. If collected as part of the policy-making process, evidence about public values can inform policy goals and programs in real time, including when program goals are taking shape or as programs are deployed.
Evidence about public values within the federal government: three challenges to integration
To fully understand and use public values in policymaking, the U.S. government must first broadly address three challenges.
First, the federal government does not sufficiently value evidence about public values when it researches and designs policy solutions. Federal employees often lack any directive or guidance from leadership that collecting evidence about public values is valuable or important to evidence-based decision making. Efforts like the White House Year of Evidence for Action seek to better integrate evidence into policy making. Yet––for many contexts and topics––scientific or evaluation-based evidence is just one type of evidence. The public’s wisdom, hopes, and perspectives play an important mediating role in determining and achieving desired public outcomes. The following examples illustrate ways public value evidence can support federal decision making:
- An effort to implement climate intervention technologies (e.g., solar geoengineering) might be well-grounded in evidence from the scientific community. However, that same strategy may not consider the diverse values Americans hold about (i) how such research might be governed, (ii) who ought to develop those technologies, and (iii) whether or not they should be used at all. Public values are imperative for such complex, socio-technical decisions if we are to make good on the Year of Evidence’s dual commitment to scientific integrity (including expanded concepts of expertise and evidence) and equity (better understanding of “what works, for whom, and under what circumstances”).
- Decision making around the impacts of rising sea levels on national park infrastructure and protected features has historically been tense. To acknowledge the social-environmental complexity in play, park leadership has strived to include both expert assessments and engagement with publics on their own risk tolerance for various mitigation measures. This has helped officials prioritize limited resources as they consider tough decisions on what and how to continue to preserve various park features and artifacts.
Second, the federal government lacks effective mechanisms for collecting evidence about public values. Presently, public comment periods favor credentialed participants—advocacy groups, consultants, business groups, etc.—who possess established avenues for sharing their opinions and positions with policy makers. As a result, these credentialed participants shape policy while other experiences, voices, and inputs go unheard. While the general public can contribute to government programs through platforms like Challenge.gov, credentialed participants still tend to dominate these processes. Effective mechanisms for bringing public values into decision making or research are generally confined to university, local government, and community settings. These methods include participatory budgeting, methods from usable or co-produced science, and participatory technology assessment. Some of these methods have been developed and applied to complex science and technology policy issues in particular, including climate change and various emerging technologies. Their use in federal agencies is far more limited. Even when an agency might seek to collect public values, it may be impeded by regulatory hurdles, such as the Paperwork Reduction Act (PRA), which can limit the collection of public values, ideas, or other input due to potentially long timelines for approval and perceived data collection burden on the public. Cumulatively, these factors prevent agencies from accurately gauging––and being adaptive to––public responses.
Third, federal agencies face challenges integrating evidence about public values into policy making. These challenges can be rooted in the regulatory hurdles described above, difficulties integrating with existing processes, and unfamiliarity with the benefits of collecting evidence about public values. Fortunately, studies have found specific attributes present among policymakers and agencies that allowed for the implementation and use of mechanisms for capturing public values. These attributes included:
- Leadership who prioritized public involvement and helped address administrative uncertainties.
- An agency culture responsive to broader public needs, concerns, and wants.
- Agency staff familiar with mechanisms to capture public values and integrate them into the policy- and decision-making process. Such staff can help address translation issues, deal with regulatory hurdles, and better communicate the benefits of collecting public values with regard to agency needs. Unfortunately, many agencies do not have such staff, and there are no existing roadmaps or professional development programs to help build this capacity across agencies.
Aligning public values with current government policies promotes scientific integrity and equity
The White House Year of Evidence for Action presents an opportunity to address the primary challenges––namely a lack of clear direction, collection protocols, and evidence integration strategies––currently impeding public values evidence’s widespread use in the federal government. Our proposal below is well aligned with the Year of Evidence’s central commitments, including:
- A commitment to scientific integrity. Complex problems require expanded concepts of expertise and evidence to ensure that important details and public concerns are not lost or overlooked.
- A commitment to equity. We have a better understanding of “what works, for whom, and under what circumstances” when we have ways of discerning and integrating public values into evidence-based decision making. Methods for integrating public values into decision making complement other emerging best practices––such as the co-creation of evaluation studies and including Indigenous knowledges and perspectives––in the policy making process.
Furthermore, this proposal aligns with the goals of the Year of Evidence for Action to “share leading practices to generate and use research-backed knowledge to advance better, more equitable outcomes for all America…” and to “…develop new strategies and structures to promote consistent evidence-based decision-making inside the Federal Government.”
Plan of Action
To integrate public values into federal policy making, the White House Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP) should:
- Develop a high-level directive for agencies about the importance of collecting public values as a form of evidence to inform policy making.
- Oversee the development of a roadmap for the integration of evidence about public values across government, including pathways for training federal employees.
Recommendation 1. OMB and OSTP should issue a high-level directive providing clear direction and strong backing for agencies to collect and integrate evidence on public values into their evidence-based decision-making procedures.
Given the potential utility of integrating public value evidence into science and technology policy as well as OSTP’s involvement in efforts to promote evidence-based policy, OSTP makes a natural partner in crafting this directive alongside OMB. This directive should clearly connect public value evidence to the current policy environment. As described above, efforts like the Foundations for Evidence-Based Policymaking Act (Evidence Act) and the White House Year of Evidence for Action provide a strong rationale for the collection and integration of evidence about public values. Longer-standing policies––including the Crowdsourcing and Citizen Science Act––provide further context and guidance for the importance of collecting input from broad publics.
Recommendation 2. As part of the directive, or as a follow up to it, OMB and OSTP should oversee the development of a roadmap for integrating evidence about public values across government.
The roadmap should be developed in consultation with various federal stakeholders, such as members of the Evaluation Officer Council, representatives from the Equitable Data Working Group, customer experience strategists, and relevant conceptual and methods experts from within and outside the government.
A comprehensive roadmap would include the following components:
- Appropriate contexts and uses for gathering and integrating public values as evidence. Public values should be collected when the issue is one where scientific or expert evidence is necessary, but not sufficient to address the question at hand. This may be due to (i) uncertainty, (ii) high levels of value disagreement, (iii) cases where the societal implications of a policy or program could be wide ranging, or (iv) situations where policy or program outcomes have inequitable impacts on certain communities.
- Specific approaches to collecting and integrating public values evidence, accompanied by illustrative case studies describing how the methods have been used. These could include practices for recruiting and convening diverse public participants; promoting exchanges among those participants; comparing public values against scientific or expert evidence; and ensuring that public values are translated into actionable policy solutions. While various approaches for measuring and applying public values evidence exist, a few additional conditions help enable success: staff knowledgeable about social science methods and the importance of public value input; clarity of regulatory requirements; and buy-in from agency leadership.
- Potential training program designs for federal employees. The goal of these training programs should be to develop a workforce that can integrate public value evidence into U.S. policymaking. Participants in these trainings should learn about the importance of integrating evidence about public values alongside other types of evidence, as well as strategies to collect and integrate that evidence into policy and programs. These training programs should promote active learning through applied pilot projects with the learner’s agency or unit.
- Specifying a center tasked with improving methods and tools for integrating evidence about public values into federal decision making. This center could exist as a public-private partnership, a federally-funded research and development center, or an innovation lab within an agency. This center could conduct ongoing research, evaluation, and pilot programs of new evidence-gathering methods and tools. This would ensure that as agencies collect and apply evidence about public values, they do so with the latest expertise and techniques.
Conclusion
Collecting evidence about the living and lived experiences, knowledge, and aspirations of the public can help inform policies and programs across government. While methods for collecting evidence about public values have proven effective, they have not been integrated into evidence-based policy efforts within the federal government. The integration of evidence about public values into policy making can promote the provision of broader public goods, elevate the perspectives of historically marginalized communities, and reveal policy or program directions different from those prioritized by experts. The proposed directive and roadmap––while only a first step––would help ensure the federal government considers, respects, and responds to our diverse nation’s values.
Federal agencies can use public value evidence where additional information about what the public thinks, prioritizes, and cares about could improve programs and policies. For example, policy decisions characterized by high uncertainty, potential value disputes, and high stakes could benefit from a broader review of considerations by diverse members of the public to ensure that novel options and unintended consequences are considered in the decision making process. In the context of science and technology related decision making, these situations were called “post-normal science” by Silvio Funtowicz and Jerome Ravetz. They called for an extension of who counts as a subject matter expert in the face of such challenges, citing the potential for technical analyses to overlook important societal values and considerations.
Many issues where science and technology meet societal needs and policy considerations warrant broad public value input. These issues include emerging technologies with societal implications and existing S&T challenges that have far reaching impacts on society (e.g., climate change). Further, OSTP is already involved in Evidence for Action initiatives and can assist in bringing in external expertise on methods and approaches.
While guidance from elected officials is an important mechanism for representing public values, evidence collected about public values through other means can be tailored to specific policy making contexts and can explore issue-specific challenges and opportunities.
There are likely more current examples of identifying and integrating public value evidence in government than we can point out here. The roadmap-building process should involve identifying those examples and finding common language to describe diverse public value evidence efforts across government. For specific known examples, see footnotes 1 and 2.
Evidence about public values can include evidence collected through program and policy evaluations, but it extends to broader types of evidence. The evaluation of policies and programs generally focuses on assessing effectiveness or efficiency. Evidence about public values would be used to answer broader questions about the aims or goals of a program or policy.
Unlocking Federal Grant Data To Inform Evidence-Based Science Funding
Summary
Federal science-funding agencies spend tens of billions of dollars each year on extramural research. There is growing concern that this funding may be inefficiently awarded (e.g., by under-allocating grants to early-career researchers or to high-risk, high-reward projects). But because there is a dearth of empirical evidence on best practices for funding research, much of this concern is anecdotal or speculative at best.
The National Institutes of Health (NIH) and the National Science Foundation (NSF), as the two largest funders of basic science in the United States, should therefore develop a platform to provide researchers with structured access to historical federal data on grant review, scoring, and funding. This action would build on momentum from both the legislative and executive branches surrounding evidence-based policymaking, as well as on ample support from the research community. And though grantmaking data are often sensitive, there are numerous successful models from other sectors for sharing sensitive data responsibly. Applying these models to grantmaking data would strengthen the incorporation of evidence into grantmaking policy while also guiding future research (such as larger-scale randomized controlled trials) on efficient science funding.
Challenge and Opportunity
The NIH and NSF together disburse tens of billions of dollars each year in the form of competitive research grants. At a high level, the funding process typically works like this: researchers submit detailed proposals for scientific studies, often to particular program areas or topics that have designated funding. Then, expert panels assembled by the funding agency read and score the proposals. These scores are used to decide which proposals will or will not receive funding. (The FAQ provides more details on how the NIH and NSF review competitive research grants.)
A growing number of scholars have advocated for reforming this process to address perceived inefficiencies and biases. Citing evidence that the NIH has become increasingly incremental in its funding decisions, for instance, commentators have called on federal funding agencies to explicitly fund riskier science. These calls grew louder following the success of mRNA vaccines against COVID-19, a technology that struggled for years to receive federal funding due to its high-risk profile.
Others are concerned that the average NIH grant-winner has become too old, especially in light of research suggesting that some scientists do their best work before turning 40. Still others lament the “crippling demands” that grant applications exert on scientists’ time, and argue that a better approach could be to replace or supplement conventional peer-review evaluations with lottery-based mechanisms.
These hypotheses are all reasonable and thought-provoking. Yet there exists surprisingly little empirical evidence to support these theories. If we want to effectively reimagine—or even just tweak—the way the United States funds science, we need better data on how well various funding policies work.
Academics and policymakers interested in the science of science have rightly called for increased experimentation with grantmaking policies in order to build this evidence base. Realistically, though, such experiments would need to be conducted hand-in-hand with the institutions that fund and support science, investigating how changes in policies and practices shape outcomes. While such experimentation is slowly becoming a reality, the knowledge gap about how best to support science should ideally be filled sooner rather than later.
Fortunately, we need not wait that long for new insights. The NIH and NSF have a powerful resource at their disposal: decades of historical data on grant proposals, scores, funding status, and eventual research outcomes. These data hold immense value for those investigating the comparative benefits of various science-funding strategies. Indeed, these data have already supported excellent and policy-relevant research. Examples include Ginther et al. (2011), which studies how race and ethnicity affect the probability of receiving an NIH award, and Myers (2020), which studies whether scientists are willing to change the direction of their research in response to increased resources. And there is potential for more. While randomized controlled trials (RCTs) remain the gold standard for assessing causal inference, economists have for decades been developing methods for drawing causal conclusions from observational data. Applying these methods to federal grantmaking data could quickly and cheaply yield evidence-based recommendations for optimizing federal science funding.
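One such observational method is a regression discontinuity design: because agencies fund the highest-scoring proposals up to a cutoff, proposals scoring just above and just below the payline are nearly identical, so comparing their later outcomes estimates the causal effect of receiving a grant. The sketch below illustrates the idea on purely synthetic data; the cutoff, outcome model, and the true +2.0 funding effect are all invented for illustration.

```python
# Illustrative regression discontinuity sketch on synthetic data.
# The cutoff, outcome model, and true effect size are hypothetical.
import random

random.seed(0)
CUTOFF = 50  # hypothetical payline: proposals scoring >= 50 are funded

records = []
for _ in range(10_000):
    score = random.uniform(0, 100)
    funded = score >= CUTOFF
    # Outcome (e.g., later publications) rises smoothly with proposal
    # quality, plus a true jump of +2.0 caused by funding itself.
    outcome = 0.05 * score + (2.0 if funded else 0.0) + random.gauss(0, 1)
    records.append((score, funded, outcome))

# Compare mean outcomes in a narrow window on either side of the cutoff.
window = 5
above = [y for s, f, y in records if CUTOFF <= s < CUTOFF + window]
below = [y for s, f, y in records if CUTOFF - window <= s < CUTOFF]
effect = sum(above) / len(above) - sum(below) / len(below)
print(f"estimated funding effect: {effect:.2f}")  # roughly recovers the jump
```

Real analyses (e.g., of NIH paylines) use more careful local regressions, but the core logic is this simple comparison at the cutoff.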
Opening up federal grantmaking data by providing a structured and streamlined access protocol would increase the supply of valuable studies such as those cited above. It would also build on growing governmental interest in evidence-based policymaking. Since its first week in office, the Biden-Harris administration has emphasized the importance of ensuring that “policy and program decisions are informed by the best-available facts, data and research-backed information.” Landmark guidance issued in August 2022 by the White House Office of Science and Technology Policy directs agencies to ensure that federally funded research—and underlying research data—are freely available to the public (i.e., not paywalled) at the time of publication.
On the legislative side, the 2018 Foundations for Evidence-based Policymaking Act (popularly known as the Evidence Act) calls on federal agencies to develop a “systematic plan for identifying and addressing policy questions” relevant to their missions. The Evidence Act specifies that the general public and researchers should be included in developing these plans. The Evidence Act also calls on agencies to “engage the public in using public data assets [and] providing the public with the opportunity to request specific data assets to be prioritized for disclosure.” The recently proposed Secure Research Data Network Act calls for building exactly the type of infrastructure that would be necessary to share federal grantmaking data in a secure and structured way.
Plan of Action
There is clearly appetite to expand access to and use of federally held evidence assets. Below, we recommend four actions for unlocking the insights contained in NIH- and NSF-held grantmaking data—and applying those insights to improve how federal agencies fund science.
Recommendation 1. Review legal and regulatory frameworks applicable to federally held grantmaking data.
The White House Office of Management and Budget (OMB)’s Evidence Team, working with the NIH’s Office of Data Science Strategy and the NSF’s Evaluation and Assessment Capability, should review existing statutory and regulatory frameworks to see whether there are any legal obstacles to sharing federal grantmaking data. If the review team finds that the NIH and NSF face significant legal constraints when it comes to sharing these data, then the White House should work with Congress to amend prevailing law. Otherwise, OMB—in a possible joint capacity with the White House Office of Science and Technology Policy (OSTP)—should issue a memo clarifying that agencies are generally permitted to share federal grantmaking data in a secure, structured way, and stating any categorical exceptions.
Recommendation 2. Build the infrastructure to provide external stakeholders with secure, structured access to federally held grantmaking data for research.
Federal grantmaking data are inherently sensitive, containing information that could jeopardize personal privacy or compromise the integrity of review processes. But even sensitive data can be responsibly shared. The NIH has previously shared historical grantmaking data with some researchers, but the next step is for the NIH and NSF to develop a system that enables broader and easier researcher access. Other federal agencies have developed strategies for handling highly sensitive data in a systematic fashion, which can provide helpful precedent and lessons. Examples include:
- The U.S. Census Bureau (USCB)’s Longitudinal Employer-Household Data. These data link individual workers to their respective firms, and provide information on salary, job characteristics, and worker and firm location. Approved researchers have relied on these data to better understand labor-market trends.
- The Department of Transportation (DOT)’s Secure Data Commons. The Secure Data Commons allows third-party firms (such as Uber, Lyft, and Waze) to provide individual-level mobility data on trips taken. Approved researchers have used these data to understand mobility patterns in cities.
In both cases, the data in question are available to external researchers contingent on agency approval of a research request that clearly explains the purpose of a proposed study, why the requested data are needed, and how those data will be managed. Federal agencies managing access to sensitive data have also implemented additional security and privacy-preserving measures, such as:
- Only allowing researchers to access data via a remote server, or in some cases, inside a Federal Statistical Research Data Center. In other words, the data are never copied onto a researcher’s personal computer.
- Replacing any personal identifiers with random number identifiers once any data merges that require personal identifiers are complete.
- Reviewing any tables or figures prior to circulating or publishing results, to ensure that all results are appropriately aggregated and that no individual-level information can be inferred.
Building on these precedents, the NIH and NSF should (ideally jointly) develop secure repositories to house grantmaking data. This action aligns closely with recommendations from the U.S. Commission on Evidence-Based Policymaking, as well as with the above-referenced Secure Research Data Network Act (SRDNA). Both the Commission recommendations and the SRDNA advocate for secure ways to share data between agencies. Creating one or more repositories for federal grantmaking data would be an action that is simultaneously narrower and broader in scope (narrower in terms of the types of data included, broader in terms of the parties eligible for access). As such, this action could be considered either a precursor to or an expansion of the SRDNA, and could be logically pursued alongside SRDNA passage.
Once a secure repository is created, the NIH and NSF should (again, ideally jointly) develop protocols for researchers seeking access. These protocols should clearly specify who is eligible to submit a data-access request, the types of requests that are likely to be granted, and technical capabilities that the requester will need in order to access and use the data. Data requests should be evaluated by a small committee at the NIH and/or NSF (depending on the precise data being requested). In reviewing the requests, the committee should consider questions such as:
- How important and policy-relevant is the question that the researcher is seeking to answer? If policymakers knew the answer, what would they do with that information? Would it inform policy in a meaningful way?
- How well can the researcher answer the question using the data they are requesting? Can they establish a clear causal relationship? Would we be comfortable relying on their conclusions to inform policy?
Finally, NIH and NSF should consider including right-to-review clauses in agreements governing sharing of grantmaking data. Such clauses are typical when using personally identifiable data, as they give the data provider (here, the NIH and NSF) the chance to ensure that all data presented in the final research product has been properly aggregated and no individuals are identifiable. The Census Bureau’s Disclosure Review Board can provide some helpful guidance for NIH and NSF to follow on this front.
Recommendation 3. Encourage researchers to utilize these newly available data, and draw on the resulting research to inform possible improvements to grant funding.
The NIH and NSF frequently face questions and trade-offs when deciding if and how to change existing grantmaking processes. Examples include:
- How can we identify promising early-career researchers if they have less of a track record? What signals should we look for?
- Should we cap the amount of federal funding that individual scientists can receive, or should we let star researchers take on more grants? In general, is it better to spread funding across more researchers or concentrate it among star researchers?
- Is it better to let new grantmaking agencies operate independently, or to embed them within larger, existing agencies?
Typically, these agencies have very little academic or empirical evidence to draw on for answers. A large part of the problem has been researchers' lack of access to the data needed to conduct relevant studies. Expanding access, per Recommendations 1 and 2 above, is a necessary part of the solution, but it is not sufficient. Agencies must also invest in attracting researchers to use the data in a socially useful way.
Broadly advertising the new data will be critical. Announcing a new request for proposals (RFP) through the NIH and/or the NSF for projects explicitly using the data could also help. These RFPs could guide researchers toward the highest-impact and most policy-relevant questions, such as those above. The NSF’s “Science of Science: Discovery, Communication and Impact” program would be a natural fit to take the lead on encouraging researchers to use these data.
The goal is to create funding opportunities and programs that give academics clarity on the key issues and questions on which federal grantmaking agencies need guidance; in turn, the evidence academics build should help inform grantmaking policy.
Conclusion
Basic science is a critical input into innovation, which in turn fuels economic growth, health, prosperity, and national security. The NIH and NSF were founded with these critical missions in mind. To fully realize their missions, the NIH and NSF must understand how to maximize scientific return on federal research spending. And to help, researchers need to be able to analyze federal grantmaking data. Thoughtfully expanding access to this key evidence resource is a straightforward, low-cost way to grow the efficiency—and hence impact—of our federally backed national scientific enterprise.
For an excellent discussion of this question, see Li (2017). Briefly, the NIH is organized around 27 Institutes and Centers (ICs), which typically correspond to disease areas or body systems. Each IC has an annual budget set by Congress. Research proposals are first evaluated by one of around 180 "study sections," committees organized by scientific area or method. After being evaluated by the study sections, proposals are returned to their respective ICs. The highest-scoring proposals in each IC are funded, up to budget limits.
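The IC-level selection described above can be sketched as a simple budget-constrained, score-ordered selection. This is a deliberately simplified illustration, not the NIH's actual system; the IC names, scores (lower is better, as with NIH priority scores), costs, and budgets below are all hypothetical.

```python
# Simplified sketch of score-ordered award selection within each IC.
# All ICs, scores, costs, and budgets are hypothetical examples.

def select_awards(proposals, budgets):
    """proposals: dicts with 'ic', 'score' (lower = better), and 'cost'.
    budgets: dict mapping IC name -> annual budget in dollars."""
    funded = []
    remaining = dict(budgets)
    # Within each IC, consider proposals in priority-score order.
    for p in sorted(proposals, key=lambda p: (p["ic"], p["score"])):
        if p["cost"] <= remaining[p["ic"]]:
            remaining[p["ic"]] -= p["cost"]
            funded.append(p)
    return funded

proposals = [
    {"ic": "NCI", "score": 15, "cost": 400_000},
    {"ic": "NCI", "score": 32, "cost": 500_000},
    {"ic": "NCI", "score": 48, "cost": 300_000},
    {"ic": "NIA", "score": 21, "cost": 250_000},
]
budgets = {"NCI": 800_000, "NIA": 250_000}
for p in select_awards(proposals, budgets):
    print(p["ic"], p["score"])
```

In practice, NIH paylines and council review add nuance (e.g., proposals are rarely skipped over for cheaper, lower-scoring ones), but the budget-constrained cutoff structure is what makes the data amenable to the discontinuity-style analyses discussed earlier in this memo.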
Research proposals are typically submitted in response to announced funding opportunities, which are organized around different programs (topics). Each proposal is sent by the Program Officer to at least three independent reviewers who do not work at the NSF. These reviewers judge the proposal on its Intellectual Merit and Broader Impacts. The Program Officer then uses the independent reviews to make a funding recommendation to the Division Director, who makes the final award/decline decision. More details can be found on the NSF’s webpage.
The NIH and NSF both provide data on approved proposals. These data can be found on the RePORTER site for the NIH and award search site for the NSF. However, these data do not provide any information on the rejected applications, nor do they provide information on the underlying scores of approved proposals.
Masks via Mail: Maintaining Critical COVID-19 Infrastructure for Future Public Health Threats
Summary
To protect against future infectious disease outbreaks, the Department of Health and Human Services (HHS) Coordination Operations and Response Element (H-CORE) should develop and maintain the capacity to regularly deliver N95 respirator masks to every home using a mail delivery system. H-CORE previously developed a mailing system to provide free, rapid antigen tests to homes across the U.S. in response to the COVID-19 pandemic. H-CORE can build upon this system to supply the American public with additional disease prevention equipment––notably face masks. H-CORE can helm this expanded mail-delivery system by (i) gathering technical expertise from partnering federal agencies, (ii) deciding which masks are appropriate for public use, (iii) pulling from a rotating face-mask inventory at the Strategic National Stockpile (SNS), and (iv) centralizing subsequent equipment shipping and delivery. In doing so, H-CORE will fortify the pandemic response infrastructure established during the COVID-19 pandemic, allowing the U.S. government to face future pathogens with preparedness and resilience.
Challenge and Opportunity
The infrastructure put in place to respond to COVID-19 should be maintained and improved to better prepare for and respond to the next pandemic. As the federal government thinks about the future of COVID-19 response programs, it should prioritize maintaining systems that can be flexibly used to address a variety of health threats. One critical capability to maintain is the ability to quickly deliver medical countermeasures across the US. This was already done to provide the American public with COVID-19 rapid tests, but additional medical countermeasures––such as N95 respirators––should also be included.
N95s are an incredibly effective means of preventing the spread of deadly infectious diseases. Wearing an N95 respirator reduces the odds of testing positive for COVID-19 by 83%, compared to 66% for surgical masks and 56% for cloth masks. This significant difference means that N95 respirators can provide real public health benefits against a variety of biothreats, not just COVID-19. Adding N95 respirators to H-CORE's mailing program would increase equitable public access to a highly effective medical countermeasure that protects against a range of harmful diseases and other dangerous public health emergencies. Additionally, N95s protect individuals from harmful airborne particles in wildfire smoke, providing a use case beyond protection against viruses.
Beyond the benefit of expanding access to masks in particular, it is important to have an active public health mailing system that can be quickly scaled up to respond to emergencies. In times of need, this established mailing system could distribute a wide array of medical countermeasures, medicines, information, and personal protective equipment––including N95s. Thankfully, the agencies needed to coordinate this effort are already primed to do so. These authorities already have the momentum, expertise, and experience to convert existing COVID-19 response programs and pandemic preparedness investments into permanent health response infrastructure.
Plan of Action
The newly-elevated Administration for Strategic Preparedness and Response (ASPR) should house the N95 respirator mailing system, granting H-CORE key management and distribution responsibilities. Evolving out of the operational capacities built from Operation Warp Speed, H-CORE has demonstrated strong logistical capabilities in distributing COVID-19 vaccines, therapeutics, and at-home tests across the United States. H-CORE should continue operating some of these preparedness programs to increase public access to key medical countermeasures. At the same time, it should also maintain the flexibility to pivot and scale up these response programs as soon as the next public health emergency arises.
H-CORE should bolster its free COVID-19 test mailing program and include the option to order one box of 10 free N95 respirator masks every quarter.
H-CORE partnered with the U.S. Postal Service (USPS) to develop an unprecedented initiative––creating an online ordering system for rapid COVID-19 testing to be sent via mail to American households. ASPR should maintain its relationships with USPS and other shipping companies to distribute other needed medical supplies––like N95s. To ensure public comfort, a simple N95 ordering website could be designed to mimic the COVID-19 test ordering site.
An N95-distribution program has already been piloted and proven successful. Thanks to ASPR and the National Institute for Occupational Safety and Health (NIOSH), masks previously held at SNS were made available to the public at select retail pharmacies. This program should be made permanent and expanded to maximize the convenience of obtaining medical countermeasures, like masks. Doing so will likely increase the chance that the general population will acquire and use them. Additionally––if supplies are sourced primarily from domestic mask manufacturers––this program can stabilize demand and incentivize further manufacturing within the United States. Keeping this production at a steady base level will also make it easier to scale up quickly, should America face another pandemic or other public health crisis.
H-CORE and ASPR should coordinate with the SNS to provide N95 respirators through a rotating inventory system.
As evidenced by the 2009 H1N1 influenza pandemic and the COVID-19 pandemic, maintaining a static stockpile of masks, however large, is not an effective way to prepare for the next bio-incident.
Congress has long recognized the need to shift the stockpiling status quo within HHS, including within the SNS. Recent draft legislation, including the Protecting Providers Everywhere (PPE) in America Act and the PREVENT Pandemics Act, as well as the National Strategy for a Resilient Public Health Supply Chain, has advocated for a rotating stock system. While these documents mention the concept, they offer few details on what the system would look like in practice or a timeline for its implementation.
Ultimately, the SNS should use a rotating inventory system where its stored masks get rotated out to other uses in the supply chain using a “first in, first out” approach. This will prevent N95s from being stored beyond their recommended shelf-life and encourage continual replenishment of the SNS’ mask stockpile.
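A "first in, first out" rotation amounts to managing the stockpile as a queue: new lots join the back, and the oldest lots are the first shipped out to downstream uses. The sketch below is a minimal model of that logic, assuming a simple lot-based inventory; the lot identifiers and quantities are hypothetical.

```python
# Minimal sketch of FIFO rotating inventory (lot IDs and sizes hypothetical).
from collections import deque

class RotatingStockpile:
    def __init__(self):
        self._lots = deque()  # oldest lots at the left of the queue

    def restock(self, lot_id, quantity):
        # Newly procured lots join the back of the queue.
        self._lots.append([lot_id, quantity])

    def rotate_out(self, quantity):
        """Ship the oldest stock first, e.g., to a mask mailing program."""
        shipped = []
        while quantity > 0 and self._lots:
            lot = self._lots[0]
            take = min(quantity, lot[1])
            shipped.append((lot[0], take))
            lot[1] -= take
            quantity -= take
            if lot[1] == 0:
                self._lots.popleft()  # lot fully rotated out before expiry
        return shipped

sns = RotatingStockpile()
sns.restock("2023-Q1", 100)
sns.restock("2023-Q2", 100)
print(sns.rotate_out(150))  # oldest lot ships first: [('2023-Q1', 100), ('2023-Q2', 50)]
```

A real system would also track lot expiration dates and trigger restock contracts, but the queue discipline above is the core of the "first in, first out" approach.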
To make this new rotating inventory system possible, ASPR should pilot rotating inventory through this H-CORE mask mailing program while they decide if and how rotating inventory could be implemented in larger quantities (e.g. rotating out to Veterans Affairs, the Department of Defense, and other purchasers). To pilot a rotating inventory system, the Secretary of HHS may enter into contracts and cooperative agreements with vendors, through the SNS contracting mechanisms, and structure the contracts to include maintaining a constant supply and re-stock capacity of the stated product in such quantities as required by the contract. As a guide, the SNS can model these agreements after select pharmaceutical contracts, especially those that have stipulated similar rotating inventory systems (i.e., the radiological countermeasure Neupogen).
The N95 mail-delivery system will allow ASPR, H-CORE, and the SNS to test the rotating stock model in a way that avoids serious risk or negative consequences. The small quantity of N95s needed for the pilot program should not tax the SNS's supply at large. After all, the aforementioned H-CORE/NIOSH mask-distribution programs are designed similarly to this pilot, and they do not disrupt the SNS supply for healthcare workers.
Conclusion
To be fully prepared for the next public health emergency, the United States must learn from its experience with COVID-19 and continue building the public health infrastructure that proved effective during this pandemic. Widespread distribution of COVID-19 rapid diagnostic tests is one such success story. The logistics and protocols that made this resource distribution possible should be extended to other flexible medical countermeasures, like N95 respirators. After all, while the need for COVID-19 tests may wane over time, the relevance of N95 respirators will not.
HHS should therefore distribute N95 respirators to the general public through H-CORE to (i) maintain the existing mailing infrastructure and (ii) increase access to a medical countermeasure that efficiently impedes transmission for many diseases. The masks for this effort should be sourced from the Strategic National Stockpile. This will not only prevent stock expiration, but also pilot rotating inventory as a strategy for larger-scale integration into the SNS. These actions will together equip the public with medical countermeasures relevant to a variety of diseases and strengthen a critical distribution program that should be maintained for future pandemic response.
Medical countermeasures (MCMs) can include both pharmaceutical interventions (such as vaccines, antimicrobials, antivirals, etc.) and non-pharmaceutical interventions (such as ventilators, diagnostics, personal protective equipment, etc.) that are used to prevent, mitigate, or treat the adverse health effects of a public health emergency. Examples of MCM deployment during the COVID-19 pandemic include the COVID-19 vaccines, therapeutics for COVID-19-hospitalized patients (e.g., antivirals and monoclonal antibodies), and personal protective equipment (e.g., respirators and gloves) deployed to healthcare providers and the public.
This proposal would build off of capabilities already being executed under the Department of Health and Human Services, Administration for Strategic Preparedness and Response (HHS ASPR). ASPR oversees both H-CORE and the Strategic National Stockpile (SNS) and was recently reclassified from a staff division to an operating division. This change allowed ASPR to better mobilize and respond to health-related emergencies. ASPR established H-CORE at the beginning of 2022 to create a permanent team responsible for coordinating medical countermeasures and strengthening preparedness for future pandemics. While H-CORE is currently focused on providing COVID-19 countermeasures––including vaccines, therapeutics, masks, and test kits––their longer-term mission is to augment capabilities within HHS to solve emerging health threats. As such, their ingrained mission and expertise match those required to successfully launch an N95 mail-delivery system.
Presently, 270 million masks have been made available to the U.S. population. It’s estimated that this same number of masks would be enough for American households to receive 10 masks per quarter, assuming a 50% participation rate in the program.
The total annual cost of this program is an estimated $280 million to purchase 270 million masks and facilitate shipping across the United States.
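Using only the two figures stated above, the implied all-in cost per mask can be checked with simple arithmetic:

```python
# Back-of-envelope check using only the figures stated in the text.
total_cost = 280_000_000   # dollars per year (stated estimate)
total_masks = 270_000_000  # masks per year (stated estimate)
per_mask = total_cost / total_masks
print(f"${per_mask:.2f} per mask, purchase and shipping included")  # $1.04
```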
There are several ways this initiative could be funded. Initial funding to purchase and mail COVID-19 tests to homes came from the American Rescue Plan. By passing the COVID Supplemental Appropriations Act, Congress could provide supplemental funds to maintain standing COVID-19 programs and help pivot them to address evolving and future health threats.
The FY2023 President’s Budget for HHS also provides ample funding for H-CORE, the SNS, and ASPR, meaning it could provide alternative funding for an N95 mail-delivery system. Presently, the budget requests $133 million for H-CORE and mentions its role in making masks available nationwide. Additionally, $975 million has been allotted to the SNS, which includes coordination with HHS and maintenance of the stockpile. Furthermore, it petitions for ASPR to receive $12 billion to generally prepare for pandemics and other future biological threats (and here it also specifically recommends strong coordination with HHS agency efforts).
N95 respirators have a number of benefits that make them a critical defense in a public health emergency. First, they are pathogen-agnostic, shelf-stable countermeasures that filter airborne particles very efficiently, meaning they can impede transmission for a variety of diseases, especially airborne and aerosolized ones. This is important, since those two disease categories are the most likely sources of both naturally occurring and intentional biothreats. Second, N95 respirators are useful beyond pandemic response, protecting against wildfire smoke as well. Additionally, N95 masks have a long shelf life. The ability to quickly and widely distribute N95s is therefore a critical public health preparedness measure.
Domestic mask manufacturers have also frequently experienced boom and bust cycles as public demand for masks can change rapidly and without warning. This inconsistent market makes it difficult for manufacturers to invest in increased manufacturing capacity in the long-term. One example is the company Prestige Ameritech, which invested over $1 million in new equipment and hired 150 new workers to produce masks in response to the 2009 swine flu outbreak. However, by the time production was ready, demand for masks had dropped and the company almost went bankrupt. Given overwhelmingly positive benefits of having mask manufacturing capacity available when needed, it is worthwhile for the government to provide some ongoing demand certainty.
Furthermore, making masks free and easily available to the general public could increase mask usage during the annual flu season and other periods of sickness. While personal protective equipment has decreased in cost since the peak of the pandemic, making it as accessible as possible will disproportionately increase access for low-income citizens and help ensure equitable access to protective medical countermeasures.
It is true that N95s are not regulated outside of healthcare settings, but that should not dissuade public use. No federal agency is currently tasked with regulating respiratory protection for the public. The Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) National Institute for Occupational Safety and Health (NIOSH) currently have a Memorandum of Understanding (MOU) coordinating regulatory authority over N95 respirators for medical use. Neither the FDA nor NIOSH, though, has jurisdiction over mask use in a non-medical, non-occupational setting. Using an N95 respirator outside of a medical setting does not satisfy all of the regulatory requirements, like undergoing a fit test to ensure a proper seal. However, using N95 respirators for everyday respiratory protection (i) provides better protection than no mask, a cloth mask, or a surgical mask, and (ii) realistically should not need to meet the same regulatory standards as medical use, since members of the public are not regularly exposed to the same level of risk as medical professionals.
Presently, there is no central regulator for public respiratory protection in general. In fact, the National Academies of Sciences, Engineering, and Medicine recently issued a recommendation for Congress to “expeditiously establish a coordinating entity within the Department of Health and Human Services (HHS) with the necessary responsibility, authority, and resources (financial, personnel, and infrastructure) to provide a unified and authoritative source of information and effective oversight in the development, approval, and use of respiratory protective devices that can meet the needs of the public and protect the public health.”
Moving forward, NIOSH alone should regulate N95 use for the public, just as it does in occupational settings. The approval process used by other regulators—like the FDA—is more restrictive than necessary for public use. The FDA’s standards for medical protection understandably need to be high in order to protect doctors, nurses, and other medical professionals against a wide variety of dangerous exposure situations. NIOSH can provide alternative regulation and guidance for the general public, who realistically are unlikely to be in similar circumstances.
Aside from federal agencies, professional scientific societies have also provided their input in regulating N95s. The American Society for Testing and Materials (ASTM), for example, recently published standards for barrier face coverings not intended for medical use or currently regulated under NIOSH standards. While ASTM does not have any regulatory or enforcement authority, HHS could use these standards for protection, comfort, and usability as a starting point for developing guidelines for respirators suitable for public distribution and use.
After the 2009 H1N1 influenza pandemic and the COVID-19 pandemic, it became evident that the SNS must change its stockpile management practices. The stockpile’s reserves of N95 respirators were not sufficiently replenished after the 2009 H1N1 pandemic, in large part due to the significant up-front cost of restocking. During the early days of the COVID-19 response, many states received expired respirators and broken ventilators from the SNS. These incidents revealed a number of issues with the current stockpiling paradigm. Shifting to a rotating inventory system would prevent issues with expiration, smooth out the costs of large periodic restocks, and help maintain a capable and responsive manufacturing base.
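The arithmetic behind a rotating (first-in, first-out) inventory can be sketched in a few lines. The shelf life and stockpile size below are illustrative assumptions, not actual SNS figures:

```python
from collections import deque

SHELF_LIFE = 5      # years an N95 lot stays usable (illustrative assumption)
TARGET_STOCK = 500  # millions of respirators kept on hand (illustrative assumption)

def rotate(years: int) -> list[int]:
    """Simulate a FIFO stockpile: each year, the lot reaching the end of its
    shelf life is released into routine use and replaced with a fresh purchase."""
    # Seed the stockpile with evenly aged lots so rotation starts smoothly.
    lots = deque((age, TARGET_STOCK // SHELF_LIFE) for age in range(SHELF_LIFE))
    yearly_purchases = []
    for _ in range(years):
        # Age every lot by one year.
        lots = deque((age + 1, qty) for age, qty in lots)
        # Release lots at end of shelf life (distributed before they expire).
        released = sum(qty for age, qty in lots if age >= SHELF_LIFE)
        lots = deque((age, qty) for age, qty in lots if age < SHELF_LIFE)
        # Replace exactly what was released, keeping total stock constant.
        lots.append((0, released))
        yearly_purchases.append(released)
    return yearly_purchases
```

Under these assumptions, annual purchases settle at a steady TARGET_STOCK / SHELF_LIFE (here, 100 million per year) instead of one large restock every five years, which is the cost-smoothing and demand-certainty effect described above.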
Strengthening Policy by Bringing Evidence to Life
Summary
In a 2021 memorandum, President Biden instructed all federal executive departments and agencies to “make evidence-based decisions guided by the best available science and data.” This policy is sound in theory but increasingly difficult to implement in practice. With millions of new scientific papers published every year, parsing and acting on research insights presents a formidable challenge.
A solution, and one that has proven successful in helping clinicians effectively treat COVID-19, is to take a “living” approach to evidence synthesis. Conventional systematic reviews, meta-analyses, and associated guidelines and standards are published as static products and are updated infrequently (e.g., every four to five years)—if at all. This approach is inefficient and produces evidence products that quickly go out of date. It also leads to research waste and poorly allocated research funding.
By contrast, emerging “Living Evidence” models treat knowledge synthesis as an ongoing endeavor. By combining (i) established, scientific methods of summarizing science with (ii) continuous workflows and technology-based solutions for information discovery and processing, Living Evidence approaches yield systematic reviews—and other evidence and guidance products—that are always current.
The recent launch of the White House Year of Evidence for Action provides a pivotal opportunity to harness the Living Evidence model to accelerate research translation and advance evidence-based policymaking. The federal government should consider a two-part strategy to embrace and promote Living Evidence. The first part of this strategy positions the U.S. government to lead by example by embedding Living Evidence within federal agencies. The second part focuses on supporting external actors in launching and maintaining Living Evidence resources for the public good.
Challenge and Opportunity
We live in a time of veritable “scientific overload”. The number of scientific papers in the world has surged exponentially over the past several decades (Figure 1), and millions of new scientific papers are published every year. Making sense of this deluge of documents presents a formidable challenge. For any given topic, experts have to (i) scour the scientific literature for studies on that topic, (ii) separate out low-quality (or even fraudulent) research, (iii) weigh and reconcile contradictory findings from different studies, and (iv) synthesize study results into a product that can usefully inform both societal decision-making and future scientific inquiry.
This process has evolved over several decades into a scientific method known as “systematic review” or “meta-analysis”. Systematic reviews and meta-analyses are detailed and credible, but often take over a year to produce and rapidly go out of date once published. Experts often compensate by drawing attention to the latest research in blog posts, op-eds, “narrative” reviews, informal memos, and the like. But while such “quick and dirty” scanning of the literature is timely, it lacks scientific rigor. Hence those relying on “the best available science” to make informed decisions must choose between summaries of science that are reliable or current…but not both.
The lack of trustworthy and up-to-date summaries of science constrains efforts, including efforts championed by the White House, to promote evidence-informed policymaking. It also leads to research waste when scientists conduct research that is duplicative and unnecessary, and degrades the efficiency of the scientific ecosystem when funders support research that does not address true knowledge gaps.

Figure 1. Total number of scientific papers published over time, according to the Microsoft Academic Graph (MAG) dataset. (Source: Herrmannova and Knoth, 2016)
The emerging Living Evidence paradigm solves these problems by treating knowledge synthesis as an ongoing rather than static endeavor. By combining (i) established, scientific methods of summarizing science with (ii) continuous workflows and technology-based solutions for information discovery and processing, Living Evidence approaches yield systematic reviews that are always up to date with the latest research. An opinion piece published in The New York Times called this approach “a quiet revolution to surface the best-available research and make it accessible for all.”
To take a Living Evidence approach, multidisciplinary teams of subject-matter experts and methods experts (e.g., information specialists and data scientists) first develop an evidence resource—such as a systematic review—using standard approaches. But the teams then commit to regular updates of the evidence resource at a frequency that makes sense for their end users (e.g., once a month). Using technologies such as natural-language processing and machine learning, the teams continually monitor online databases to identify new research. Any new research is rapidly incorporated into the evidence resource using established methods for high-quality evidence synthesis. Figure 2 illustrates how Living Evidence builds on and improves traditional approaches for evidence-informed development of guidelines, standards, and other policy instruments.

Figure 2. Illustration of how a Living Evidence approach to development of evidence-informed policies (such as clinical guidelines) is more current and reliable than traditional approaches. (Source: Author-developed graphic)
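The continuous monitor-and-screen workflow can be made concrete with a minimal sketch. Everything here is an illustrative assumption—the stub records, the keyword filter, and the function names—since real Living Evidence pipelines query bibliographic APIs (such as PubMed or OpenAlex) and combine trained classifiers with human appraisal:

```python
import datetime

def fetch_new_records(since: datetime.date) -> list[dict]:
    """Stub for a bibliographic database query; a real pipeline would call
    an API such as PubMed or OpenAlex and filter by publication date."""
    return [
        {"title": "Remote ischemic conditioning after stroke",
         "published": datetime.date(2022, 6, 3)},
        {"title": "Soil carbon under no-till maize",
         "published": datetime.date(2022, 6, 9)},
    ]

def screen(record: dict, keywords: set[str]) -> bool:
    """Crude first-pass relevance filter; real projects use machine-learning
    classifiers plus human review before anything enters the review."""
    title = record["title"].lower()
    return any(kw in title for kw in keywords)

def monthly_update(last_run: datetime.date, keywords: set[str]) -> list[dict]:
    # Candidates that pass screening go to the expert team for appraisal
    # and, if sound, incorporation into the living review.
    candidates = fetch_new_records(since=last_run)
    return [r for r in candidates if screen(r, keywords)]

hits = monthly_update(datetime.date(2022, 5, 1), {"stroke", "ischemic"})
```

The point of the sketch is the cadence: the expensive human work (appraisal and synthesis) runs on a small, pre-screened stream of candidates every cycle, rather than on the entire literature once every few years.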
Living Evidence products are more trusted by stakeholders, enjoy greater engagement (up to a 300% increase in access/use, based on internal data from the Australian Stroke Foundation), and support improved translation of research into practice and policy. Living Evidence holds particular value for domains in which research evidence is emerging rapidly, current evidence is uncertain, and new research might change policy or practice. For example, Nature has credited Living Evidence with “help[ing] chart a route out” of the worst stages of the COVID-19 pandemic. The World Health Organization (WHO) has since committed to using the Living Evidence approach as the organization’s “main platform” for knowledge synthesis and guideline development across all health issues.
Yet Living Evidence approaches remain underutilized in most domains. Many scientists are unaware of Living Evidence approaches, and the minority who are familiar often lack the tools and incentives to carry out Living Evidence projects directly. The result is an “evidence to action” pipeline far leakier than it needs to be, even though entities like government agencies need credible and up-to-date evidence to efficiently and effectively translate knowledge into impact.
It is time to change the status quo. The 2019 Foundations for Evidence-Based Policymaking Act (“Evidence Act”) advances “a vision for a nation that relies on evidence and data to make decisions at all levels of government.” The Biden Administration’s “Year of Evidence” push has generated significant momentum around evidence-informed policymaking. Demonstrated successes of Living Evidence approaches with respect to COVID-19 have sparked interest in these approaches specifically. The time is ripe for the federal government to position Living Evidence as the “gold standard” of evidence products—and the United States as a leader in knowledge discovery and synthesis.
Plan of Action
The federal government should consider a two-part strategy to embrace and promote Living Evidence. The first part of this strategy positions the U.S. government to lead by example by embedding Living Evidence within federal agencies. The second part focuses on supporting external actors in launching and maintaining Living Evidence resources for the public good.
Part 1. Embedding Living Evidence within federal agencies
Federal science agencies are well positioned to carry out Living Evidence approaches directly. Living Evidence requires “a sustained commitment for the period that the review remains living.” Federal agencies can support the continuous workflows and multidisciplinary project teams needed for excellent Living Evidence products.
In addition, Living Evidence projects can be very powerful mechanisms for building effective, multi-stakeholder partnerships that last—a key objective for a federal government seeking to bolster the U.S. scientific enterprise. A recent example is Wellcome Trust’s decision to fund suites of living systematic reviews in mental health as a foundational investment in its new mental-health strategy, recognizing this as an important opportunity to build a global research community around a shared knowledge source.
Greater interagency coordination and external collaboration will facilitate implementation of Living Evidence across government. As such, President Biden should issue an Executive Order establishing a Living Evidence Interagency Policy Committee (LEIPC) modeled on the effective Interagency Arctic Research Policy Committee (IARPC). The LEIPC would be chartered as an Interagency Working Group of the National Science and Technology Council (NSTC) Committee on Science and Technology Enterprise, and chaired by the Director of the White House Office of Science and Technology Policy (OSTP; or their delegate). Membership would comprise representatives from federal science agencies, including agencies that currently create and maintain evidence clearinghouses, other agencies deeply invested in evidence-informed decision making, and non-governmental experts with deep experience in the practice of Living Evidence and/or associated capabilities (e.g., information science, machine learning).
The LEIPC would be tasked with (1) supporting federal implementation of Living Evidence, (2) identifying priority areas1 and opportunities for federally managed Living Evidence projects, and (3) fostering greater collaboration between government and external stakeholders in the evidence community. More detail on each of these roles is provided below.
Supporting federal implementation of Living Evidence
Widely accepted guidance for living systematic reviews (LSRs), one type of Living Evidence product, has been published. The LEIPC—working closely with OSTP, the White House Office of Management and Budget (OMB), and the federal Evaluation Officer Council (EOC)—should adapt this guidance for the U.S. federal context, resulting in an informational resource for federal agencies seeking to launch or fund Living Evidence projects. The guidance should also be used to update systematic-review processes used by federal agencies and organizations contributing to national evidence clearinghouses.2
Once the federally tailored guidance has been developed, the White House should direct federal agencies to consider and pursue opportunities to embed Living Evidence within their programs and operations. The policy directive could take the form of a Presidential Memorandum, a joint management memo from the heads of OSTP and OMB, or similar. This directive would (i) emphasize the national benefits that Living Evidence could deliver, and (ii) provide agencies with high-level justification for using discretionary funding on Living Evidence projects and for making decisions based on Living Evidence insights.
Identifying priority areas and opportunities for federally managed Living Evidence projects
The LEIPC—again working closely with OSTP, OMB, and the EOC—should survey the federal government for opportunities to deploy Living Evidence internally. Box 1 provides examples of opportunities that the LEIPC could consider.
Below are four illustrative examples of existing federal efforts that could be augmented with Living Evidence.
Example 1: National Primary Drinking Water Regulations. The U.S. Environmental Protection Agency (EPA) currently reviews and updates the National Primary Drinking Water Regulations every six years. But society now produces millions of new chemicals each year, including numerous contaminants of emerging concern (CEC) for drinking water. Taking a Living Evidence approach to drinking-water safety could yield drinking-water regulations that are updated continuously as information on new contaminants comes in, rather than periodically (and potentially after new contaminants have already begun to cause harm).
Example 2: Guidelines for entities participating in the National Wastewater Surveillance System. Australia has demonstrated how valuable Living Evidence can be for COVID-19 management and response. Meanwhile, declines in clinical testing and the continued emergence of new SARS-CoV-2 variants are positioning wastewater surveillance as an increasingly important public-health tool. But no agency or organization has yet taken a Living Evidence approach to the practice of testing wastewater for disease monitoring. Living Evidence could inform practitioners in real time on evolving best protocols and practices for wastewater sampling, concentration, testing, and data analysis.
Example 3: Department of Education Best Practices Clearinghouse. The Best Practices Clearinghouse was launched at President Biden’s direction to support a variety of stakeholders in reopening and operating post-pandemic. Applying Living Evidence analysis to the resources that the Clearinghouse has assembled would help ensure that instruction remains safe and effective in a dramatically transformed and evolving educational landscape.
Example 4: National Climate Assessment. The National Climate Assessment (NCA) is a Congressionally mandated review of climate science and impacts on the United States. The NCA is issued quadrennially, but climate change is presenting itself in new and worrying ways every year. Urgent climate action must be backed up by emergent climate knowledge. While a longer-term goal could be to transition the entire NCA into a frequently updated “living” mode, a near-term effort could focus on transitioning NCA subtopics where the need for new knowledge is especially pressing. For instance, the emergence and intensification of megafires in the West has upended our understanding of fire dynamics. A Living Evidence resource on fire science could give policymakers and program officials critical, up-to-date information on how best to mitigate, respond to, and recover from catastrophic megafires.
The product of this exercise should be a report that describes each of the opportunities identified, and recommends priority projects to pursue. In developing its priority list, the LEIPC should account for both the likely impact of a potential Living Evidence project as well as the near-term feasibility of that project. While the report could outline visions for ambitious Living Evidence undertakings that would require a significant time investment to realize fully (e.g., transitioning the entire National Climate Assessment into a frequently updated “living” mode), it should also scope projects that could be completed within two years and serve as pilots/proofs of concept. Lessons learned from the pilots could ultimately inform a national strategy for incorporating Living Evidence into federal government more systematically. Successful pilots could continue and grow beyond the end of the two-year period, as appropriate.
Fostering greater collaboration between government and external stakeholders
The LEIPC should create an online “LEIPC Collaborations” platform that connects researchers, practitioners, and other stakeholders both inside and outside government. The platform would emulate IARPC Collaborations, which has built out a community of more than 3,000 members and dozens of communities of practice dedicated to the holistic advancement of Arctic science. As one stakeholder has explained:
“IARPC Collaborations members interact primarily in virtual spaces including both video conferencing and a social networking website. Open to anyone who wishes to join, the website serves not only as a venue for sharing information in-between meetings, but also lowers the barrier to meetings and to the IARPC Collaborations community in general, allows the video conferences as well as the IARPC Collaborations community to be open to all, not just an exclusive group of people who happen to be included in an email. Together, IARPC Collaborations members have realized an unprecedented degree of communication, coordination and collaboration, creating new knowledge and contributing to science-informed decision making. The IARPC community managers utilize the IARPC Collaborations website not only for project management, but also to support public engagement. The website contains user-generated-content sharing system where members log-in to share resources such as funding opportunities, publications and reports, events, and general announcements. The community managers also provide communication training for two types of members of IARPC Collaborations: team leaders in order to enhance leadership skill and team engagement, and early career scientists in order to enhance their careers through networking and building interdisciplinary collaborations.”
LEIPC Collaborations could deliver the same participatory opportunities and benefits for members of the evidence community, facilitating holistic advancement of Living Evidence.
Part 2. Make it easier for scientists and researchers to develop LSRs
Many government efforts could be supported by internal Living Evidence initiatives, but not every valuable Living Evidence effort should be conducted by government. Many useful Living Evidence programs will require deep domain knowledge and specialized skills that teams of scientists and researchers working outside of government are best positioned to deliver.
But experts interested in pursuing Living Evidence efforts face two major difficulties. The first is securing funding. Very little research funding is awarded for the sole purpose of conducting systematic reviews and other types of evidence syntheses. The funding that is available is typically not commensurate with the resource and personnel needs of a high-quality synthesis. Living Evidence demands efficient knowledge discovery and the involvement of multidisciplinary teams possessing overlapping skill sets. Yet federal research grants are often structured in a way that precludes principal investigators from hiring research software engineers or from founding co-led research groups.
The second is aligning with incentives. Systematic reviews and other types of evidence syntheses are often not recognized as “true” research outputs by funding agencies or university tenure committees—i.e., they are often not given the same weight in research metrics, despite (i) utilizing well-established scientific methodologies involving detailed protocols and advanced data and statistical techniques, and (ii) resulting in new knowledge. The result is that talented experts are discouraged from investing their time on projects that can contribute significant new insights and could dramatically improve the efficiency and impact of our nation’s research enterprise.
To begin addressing these problems, the two biggest STEM-funding agencies—the National Institutes of Health (NIH) and the National Science Foundation (NSF)—should consider the following actions:
- Perform a landscape analysis of federal funding for evidence synthesis. Rigorously documenting the funding opportunities available (or lack thereof) for researchers wishing to pursue evidence synthesis will help NIH and NSF determine where to focus potential new opportunities. The landscape analysis should consider currently available funding opportunities for systematic, scoping, and rapid reviews, and could also include surveys and focus groups to assess the appetite in the research community for pursuing additional evidence-synthesis activities if supported.
- Establish new grant opportunities designed to support Living Evidence projects. The goal of these grant opportunities would be to deliver definitive and always up-to-date summaries of research evidence and associated data in specified topics. The opportunities could align with particular research focuses (for instance, a living systematic review on tissue-electronic interfacing could facilitate progress on bionic limb development under NSF’s current “Enhancing Opportunities for Persons with Disabilities” Convergence Accelerator track). The opportunities could also be topic-agnostic, but require applicants to justify a proposed project by demonstrating that (i) the research evidence is emerging rapidly, (ii) current evidence is uncertain, and (iii) new research might materially change policy or practice.
- Increase support for career research staff in academia. Although contributors to Living Evidence projects can cycle in and out (analogous to turnover in large research collaboratives), such projects benefit from longevity in a portion of the team. With this core team in place, Living Evidence projects are excellent avenues for graduate students to build core research skills, including in research study design.
- Leverage prestigious existing grant programs and awards to incentivize work on Living Evidence. For instance, NSF could encourage early-career faculty to propose LSRs in applications for CAREER grants.
- Recognize evidence syntheses as research outputs. In all assessments of scientific track record (particularly research-funding schemes), systematic reviews and other types of rigorous evidence synthesis should be recognized as research outputs equivalent to “primary” research.
The grant opportunities should also:
- Support collaborative, multidisciplinary research teams.
- Include an explicit requirement to build significant stakeholder engagement, including with practitioners and relevant government agencies.
- Include opportunities to apply for follow-on funding to support maintenance of high-value Living Evidence products.
- Allow funds to be spent on non-traditional personnel resources; e.g., an information scientist to systematically survey for new research.
Conclusion
Policymaking can only be meaningfully informed by evidence if underpinning systems for evidence synthesis are robust. The Biden administration’s Year of Evidence for Action provides a pivotal opportunity to pursue concrete actions that strengthen use of science for the betterment of the American people. Federal investment in Living Evidence is one such action.
Living Evidence has emerged as a powerful mechanism for translating scientific discoveries into policy and practice. The Living Evidence approach is being rapidly embraced by international actors, and the United States has an opportunity to position itself as a leader. A federal initiative on Living Evidence will contribute additional energy and momentum to the Year of Evidence for Action, ensure that our nation does not fall behind on evidence-informed policymaking, and arm federal agencies with the most current and best-available scientific evidence as they pursue their statutory missions.
Living Evidence projects do require funding for enough time to complete the initial “baseline” systematic review (typically 3–12 months, depending on scope and complexity), transition to maintenance (“living”) mode, and continue in living mode for sufficient time (usually about 6–12 months) for all involved to become familiar with maintaining and using the living resource. Hence Living Evidence projects work best when fully funded for a minimum of two years.
If there is support for funding beyond this minimum period, there are operational advantages to instantiating the follow-on funding before the previous funding period concludes. If follow-on funding is not immediately available, Living Evidence resources can simply revert to a conventional static form until follow-on funding becomes available.
While dissemination of conventional evidence products involves sharing several dozen key messages in a once-in-several-years communications push, dissemination of Living Evidence amounts to a regular cycle of “what’s new” updates (typically one to two key insights). Living Evidence dissemination feeds become known and trusted by end users, inspiring confidence that end users can “keep up” with the implications of new research. Publication of Living Evidence can take many forms. Typically, the core evidence resource is housed in an organizational website that can be easily and frequently updated, sometimes with an ability for users to access previous versions of the resource. Living Evidence may also be published as articles in academic journals. This could be intermittent overviews of the evidence resource with links back to the main Living Evidence summaries, or (more ambitiously) as a series of frequently updated versions of an article that are logically linked. Multiple academic journals are innovating to better support “living” publications.
Even across broad topics in fast-moving research fields, though, the overall findings and conclusions of Living Evidence products change infrequently, since the threshold for changing a conclusion drawn from a whole body of evidence is high. The largest Living Evidence projects in existence yield only about one to two new major findings or recommendations each update. Furthermore, any good evidence-synthesis product will contextualize conclusions and recommendations with associated levels of confidence.
Living Evidence has the potential to accelerate knowledge translation: not because of any changes to the knowledge-translation enterprise, but because Living Evidence identifies earlier the high-certainty evidence that underpins knowledge-translation activities.
Living Evidence may also enhance knowledge translation in two ways. First, Living Evidence is a better evidence product and has been shown to increase trust, engagement, and intention to use among stakeholders. Second, as mentioned above, Living Evidence creates opportunities for deep and effective partnerships. Together, these advantages could position Living Evidence to yield a more effective “enabling environment” for knowledge translation.
Creating a Digital Service for the Planet
Summary
Most federal environmental initiatives involve multiple departments and agencies, meaning that environmental agencies frequently have overlapping data and technology needs.1 To implement cross-cutting environmental initiatives efficiently, the federal government should build, buy, manage, and deploy digital resources in ways that meet the needs of multiple agencies at once. Achieving this necessitates a centralized entity with the expertise and mission to coordinate environmental data and technology across agencies.
The Biden administration should therefore create a “Digital Service for the Planet (DSP)” that is modeled on, or established as an expansion of, the U.S. Digital Service (USDS), but tailored to the U.S. government’s environmentally focused departments and agencies. DSP would support cross-agency technology development and improve digital infrastructure to better foster collaboration and reduce duplication of federal environmental efforts. The result will be a more integrated approach to technology—one that makes it easier for all stakeholders to achieve goals related to health, environmental justice, and other positive outcomes for the American people.
Challenge and Opportunity
The Biden administration—through directives such as Executive Order 14008 on Tackling the Climate Crisis at Home and Abroad and President Biden’s Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking, as well as through initiatives such as Justice40 and America the Beautiful (30×30)—has laid the blueprint for a data-driven environmental agenda.
However, the data to advance this agenda are held and managed by multiple agencies, making them difficult to standardize, share, and use to their full potential. For example, water data are collected by 25 federal entities across 57 data platforms and 462 different data types. Permitting for wetlands, forest fuel treatments, and other important natural-resource management tasks still involves a significant amount of manual data entry, and protocols for handling relevant data vary by region or district. Staff at environmental agencies have privately noted that it can take weeks or months to receive necessary data from colleagues in other agencies, and that they have trouble knowing what data exist at other agencies. Accelerating the success and breadth of environmental initiatives requires digitized, timely, and accessible information for planning and execution of agency strategies.
The state of federal environmental data today echoes the state of public-health data in 2014, when President Obama recognized that the Department of Health and Human Services lacked the technical skill sets and capacity needed to stand up HealthCare.gov. The Obama administration responded by creating the U.S. Digital Service (USDS), which provides federal agencies with on-demand access to the technical expertise they need to design, procure, and deploy technology for the public good. Over the past eight years, USDS has developed a scalable and replicable model of working across government agencies. Projects that USDS has been involved in—like improving federal procurement and hiring processes, deploying HealthCare.gov, and modernizing administrative tasks for veterans and immigrants—have saved agencies such as the Department of Veterans Affairs millions of dollars.
But USDS lacks the specialized capacity, skills, experience, and specific directives needed to fully meet the shared digital-infrastructure needs of environmental agencies. The Climate and Economic Justice Screening Tool (CEJST) illustrates both how crucial digital-service capacity is for tackling the nation’s environmental priorities and why a DSP is needed. While USDS was instrumental in getting the tool off the ground, several issues with the launch point to a lack of specialized environmental capabilities and expertise within USDS. Many known environmental-justice issues—including wildfire, drought, and flooding—were not reflected in the tool’s first iteration. In addition, the CEJST should have been published in July 2021, but the beta version was not released until February 2022. A DSP familiar with environmental data would have started with a stronger foundation to help anticipate and incorporate such key environmental concerns, and may have been able to deliver the tool on a tighter timeline.
There is hope in this challenge. The fact that many environmental programs across multiple federal agencies have overlapping data and technology needs means that a centralized and dedicated team focused on addressing these needs could significantly and cost-effectively advance the capacities of environmental agencies to:
- Incorporate the most up-to-date information in their operations.
- Automatically share relevant datasets with other programs and agencies.
- Deploy innovative technologies to advance environmental progress.
- Design effective requests for proposals (RFPs) and other contracts for key tech capabilities.
- Provide clear signals to the private sector about the value of data, digital infrastructure, and technology to advance agency goals and environmental outcomes.
Plan of Action
To best position federal agencies to meet environmental goals, the Biden administration should establish a “Digital Service for the Planet (DSP).” The DSP would build off the successes of USDS to provide support across three key areas for environmental agencies:
- Strategic planning and procurement. Scoping, designing, and procuring technology solutions for programmatic goals. For example, a DSP could help the Fish and Wildlife Service (FWS) accelerate updates to the National Wetlands Inventory, which are currently estimated to take 10 years and cost $20 million.
- Technical development. Implementing targeted technical-development activities to achieve mission-related goals in collaboration with agency staff. For example, a DSP could help improve the accessibility and utility of the many government tools on which the public relies heavily, such as the Army Corps system that tracks mitigation banks (the Regulatory In-lieu Fee and Bank Information Tracking System, or RIBITS).
- Cross-agency coordination on digital infrastructure. Facilitating data inventory and sharing, and development of the databases, tools, and technological processes that make cross-agency efforts possible. A DSP could be a helpful partner for facilitating information sharing among agencies that monitor interrelated events, environments, or problems, including droughts, wildfires, and algal blooms.
The DSP could be established either as a new branch of USDS, or as a new and separate but parallel entity housed within the White House Office of Management and Budget. The former option would enable DSP to leverage the accumulated knowledge and existing structures of USDS. The latter option would enable DSP to be established with a more focused mandate, and would also provide a clear entry point for federal agencies seeking data and technology support specific to environmental issues.
Regardless of the organizational structure selected, DSP should include the essential elements that have helped USDS succeed—per the following recommendations.
Recommendation 1. The DSP should emulate the USDS’s staffing model and position within the Executive Office of the President (EOP).
The USDS hires employees on short-term contracts, with each contract term lasting between six months and four years. This contract-based model enables USDS to attract high-level technologists, product designers, and programmers who are interested in public service, but not necessarily committed to careers in government. USDS’s staffing model also ensures that the Service does not take over core agency capacities, but rather is deployed to design and procure tech solutions that agencies will ultimately operate in-house (i.e., without USDS involvement). USDS’s position within the EOP makes USDS an attractive place for top-level talent to work, gives staff access to high-level government officials, and enables the Service to work flexibly across agencies.
Recommendation 2. Staff the DSP with specialists who have prior experience working on environmental projects.
Working on data and technology issues within environmental contexts requires specialized skill sets and experience. For example, geospatial data and analysis are fundamental to environmental protection and conservation, but this has not been a focal point of USDS hiring. In addition, a DSP staff fluent in the vast and specific terminologies used in environmental fields (such as water management) will be better able to communicate with the many subject-matter experts and data stewards working in environmental agencies.
Recommendation 3. Place interagency collaboration at the core of the DSP mission.
Most USDS projects focus on a single federal agency, but environmental initiatives—and the data and tech needs they present—almost always involve multiple agencies. Major national challenges, including flood-risk management, harmful algal blooms, and environmental justice, all demand an integrated approach to realize cross-agency benefits. For example, EPA-funded green stormwater infrastructure could reduce flood risk for housing units subsidized by the Department of Housing and Urban Development. DSP should be explicitly tasked with devising approaches for tackling complex data and technology issues that cut across agencies. Fulfilling this mandate may require DSP to bring on additional expertise in core competencies such as data sharing and integration.
Recommendation 4. Actively promote the DSP to relevant federal agencies.
Despite USDS’s eight-year existence, many staff members at agencies involved in environmental initiatives know little about the Service and what it can do for them. To avoid underutilization due to lack of awareness, the DSP’s launch should include an outreach campaign targeted at key agencies, including but not limited to the U.S. Army Corps of Engineers (USACE), the Department of Energy (DOE), the Department of the Interior (DOI), the Environmental Protection Agency (EPA), the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Department of Agriculture (USDA), and the U.S. Global Change Research Program (USGCRP).
Conclusion
A new Digital Service for the Planet could accelerate progress on environmental and natural-resource challenges through better use of data and technology. USDS has shown that a relatively small and flexible team can have a profound and lasting effect on how agencies operate, save taxpayer money, and encourage new ways of thinking about longstanding problems. However, current capacity at USDS is limited and not specifically tailored to the needs of environmental agencies. On issues ranging from water management to environmental justice, ensuring better use of technology and data will yield benefits for generations to come. A DSP is an important step toward making the federal government a better buyer, partner, and consumer of the data, technology, and innovations that are necessary to support the country’s conservation, water, and stewardship priorities.
Frequently Asked Questions
How would a DSP differ from the USDS?
The DSP would build on the successful USDS model, but would have two distinguishing characteristics. First, the DSP would employ staff experienced in using or managing environmental data and possessing special expertise in geospatial technologies, remote sensing, and other environmentally relevant tech capabilities. Second, DSP would have an explicit mandate to develop processes for tackling data and technology issues that frequently cut across agencies. For example, the Internet of Water found that at least 25 different federal entities collect water data, while the USGCRP has identified at least 217 examples of earth observation efforts spanning many agencies. USDS is not designed to work with so many agencies at once on a single project—but DSP would be.
Would a DSP take on agency-specific data and technology projects?
Not in most cases. The DSP would focus on meeting data and technology needs shared by multiple agencies. Agencies would still be free—and encouraged!—to pursue agency-specific data- and tech-improvement projects independently.
Indeed, a hope would be that by showcasing the value of digital services for environmental projects on a cross-agency basis, the DSP would inspire individual agencies to establish their own digital-service teams. Precedent for this evolution exists: the USDS provided initial resources to solve digital challenges for HealthCare.gov and the Department of Veterans Affairs. The Department of Veterans Affairs and the Department of Defense have since launched their own internal digital-service teams. However, even with agency-based digital-service teams, there will always be a need for a team with a cross-agency view, especially given that so many environmental problems and solutions extend well beyond the borders of a single agency. Digital-service teams at multiple levels can be complementary and would focus on different project scopes and groups of users. For example, agency-specific digital-service teams would be much better positioned to help sustain agency-specific components of an effort established by DSP.
How large should the DSP be?
We propose the DSP start with a mid-sized team of twenty to thirty full-time equivalent employees (FTEs) and a budget of around $8 million. These personnel and financial allocations are in line with allocations for USDS. DSP could be scaled up over time if needed, just as USDS grew from approximately 12 FTEs in fiscal year (FY) 2014 to over 200 FTEs in FY 2022. The long-term target size of the DSP team should be informed by the uptake and success of DSP-led work.
Is there evidence that agencies would use a DSP?
From our conversations with agency staff, we (the authors) have heard time and again that agencies see immense value in a DSP. Two scenarios in particular inhibit improved adoption of environmental data and technology. The first scenario is that environmental-agency staff see the value in pursuing a technology solution to make their program more effective, but do not have the authority or resources to implement the idea, or are not aware of the avenues available to do so. DSP can help agency staff design and implement modern solutions to realize their vision and coordinate with important stakeholders to facilitate the process.
The second scenario is that environmental-agency staff are trained experts in environmental science, but not in evaluating technology solutions. As such, they are poorly equipped to evaluate the integrity of proposed solutions from external vendors. If they end up trialing a solution that is a poor fit, they may become risk-averse to technology at large. In this scenario, there is tremendous value in having a dedicated team of experts within the government available to help agencies source the appropriate technology or technologies for their programmatic goals.
Expanding Pathways for Career Research Scientists in Academia
Summary
The U.S. university research enterprise is plagued by an odd bug: it encourages experts in science, technology, engineering, and math (STEM) to leave it at the very moment they become recognized as experts. People who pursue advanced degrees in STEM are often compelled by deep interest in research. But upon graduation from master’s, Ph.D., or postdoctoral programs, these research-oriented individuals face a difficult choice: largely cede hands-on involvement in research to pursue faculty positions (which increasingly demand that a majority of time be spent on managerial responsibilities, such as applying for grants), give up the higher pay and prestige of the tenure track in order to continue “doing the science” via lower-status staff positions (e.g., lab manager, research software engineer), or leave the academic sector altogether.
Many choose the last option. And when that happens at scale, it harms the broader U.S. scientific enterprise by (i) decreasing federal returns on investment in training STEM researchers, and (ii) slowing scientific progress by creating a dearth of experienced personnel conducting basic research in university labs and mentoring the next generation of researchers. The solution is to strengthen and elevate the role of the career research scientist—the highly trained senior research-group member who is hands-on and in the lab every day—in the university ecosystem. This is, fundamentally, a fairly straightforward workforce-pipeline issue that federal STEM-funding agencies have the power to address. The National Institutes of Health (NIH) and the National Science Foundation (NSF)—two of the largest sources of academic research funding—could begin by hosting high-level discussions around the problem: specifically, through an NSF-led workshop and an NIH-led task force. In parallel, the two agencies can launch immediately tractable efforts to begin making headway in addressing the problem. NSF, for instance, could increase visibility and funding for research software engineers, while NSF and/or NIH could consider providing grants to support “co-founded” research labs jointly led by an established professor or principal investigator (PI) working alongside an experienced career research scientist.
The collective goal of these activities is to infuse technical expertise into the day-to-day ideation and execution of science (especially basic research), thereby accelerating scientific progress and helping the United States retain world scientific leadership.
Challenge and Opportunity
The scientific status quo in the United States is increasingly diverting STEM experts away from direct research opportunities at universities. STEM graduate students interested in hands-on research have few attractive career opportunities in academia: those working as staff scientists, lab managers, research software engineers, and similar forego the higher pay and status of the tenure track, while those working as faculty members find themselves encumbered by tasks that are largely unrelated to research.
Making it difficult for STEM experts to pursue hands-on research in university settings harms the broader U.S. scientific enterprise in two ways. First, the federal government disburses huge amounts of money every year—via fellowship funding, research grants, tuition support, and other avenues—to help train early-career STEM researchers. This expenditure is warranted because, as the Association of American Universities explains, “There is broad consensus that university research is a long-term national investment in the future.” This investment hinges on university contributions to basic research; universities and colleges account for just 13% of overall U.S. research and development (R&D) activity, but nearly half (48%) of basic research. Limited career opportunities for talented STEM researchers to continue “doing the science” in academic settings therefore limit our national returns on investment in these researchers.
Box 1. Productivity benefits of senior researchers in software-driven fields
Cutting-edge research in nearly all STEM fields increasingly depends on software. Indeed, NSF observes that software is “directly responsible for increased scientific productivity and significant enhancement of researchers’ capabilities.” Problematically, there is minimal support within academia for development and ongoing maintenance of software. It is all too common for a promising research project at a university lab to wither when the graduate student who wrote the code upon which the project depends finishes their degree and leaves. The field of deep learning (a branch of artificial intelligence (AI) and machine learning) underscores the value of research software. Progress in deep learning was slow and stuttering until the development of user-friendly software tools in the mid-2010s: a development spurred mostly by private-sector investment. The result has been an explosion of productivity in deep learning. Even now, top AI research teams cite software-engineering talent as a critical input upon which their scientific output depends. But while research software engineers are some of the most in-demand and valuable team members in the private sector, career positions for research software engineers are uncommon at academic institutions. How much potential scientific discovery are U.S. university labs failing to realize as a result of this underinvestment?
Second, attrition of STEM talent from academia slows the pace of U.S. scientific progress because most hands-on research activities are conducted by graduate students rather than more experienced personnel. Yet, senior researchers are far more scientifically productive. With years of experience under their belt, senior researchers possess tacit knowledge of how to effectively get research done in a field, can help a team avoid repeating mistakes, and can provide the technical mentorship needed for graduate students to acquire research skills quickly and well. And with graduate students and postdocs typically remaining with a research group for only a few years, career research scientists also provide important continuity across projects. The productivity boosts that senior researchers can deliver are especially well established for software-driven fields (see box).
The absence of attractive job opportunities for career research scientists at most academic institutions is an anomaly. Such opportunities are prevalent in the private sector, at national labs (e.g., those run by the NIH and the Department of Energy) and other government institutions, and in select well-endowed university labs that enjoy more discretionary spending ability. As the dominant funder of university research in the United States, the federal government has massive leverage over the structure of research labs. With some small changes in grant-funding incentives, federal agencies can address this anomaly and bring more senior research scientists into the academic research system.
Plan of Action
Federal STEM-funding agencies — led by NSF and NIH, as the two largest sources of federal funding for academic research — should explore and pursue strategies for changing grant-funding incentives in ways that strengthen and elevate the role of the career research scientist in academia. We split our recommendations into two parts.
The first part focuses on encouraging discussion. The problem of limited career options for trained STEM professionals who want to engage in hands-on research in the academic sector currently flies beneath the radar of many extremely knowledgeable stakeholders inside and outside of the federal government. Bringing these stakeholders together might result in excellent actionable suggestions on how to retain talented research scientists in academia. Second, we suggest two specific projects to make headway on the problem: (i) further support for research software engineers and (ii) a pilot program supporting co-founded research labs. While the recommendations below are targeted to NSF and NIH, other federal STEM-funding agencies (e.g., the Departments of Energy and Defense) can and should consider similar actions.
Part 1. Identify needs, opportunities, and options for federal actions to support and incentivize career research scientists.
Shifting academic employment towards a model more welcoming to career research scientists will require a mix of specific new programs and small and large changes to existing funding structures. However, it is not yet clear which reforms should be prioritized. Our first set of suggestions is designed to start the necessary discussion.
Specifically, NSF should start by convening key community members at a workshop (modeled on previous NSF-sponsored workshops, such as the workshop on a National Network of Research Institutes [NNRI]) focused on how the agency can encourage creation of additional career research scientist positions at universities. The workshop should also (i) discuss strategies for publicizing and encouraging outstanding STEM talent to pursue such positions, (ii) identify barriers that discourage universities from creating career research scientist positions, and (iii) brainstorm solutions to these barriers. Workshop participants should include representatives from federal agencies that sponsor national labs as well as industry sectors (software, biotech, etc.) that conduct extensive R&D, as these entities are more experienced employers of career research scientists. The workshop should address the following questions:
- How can NSF minimize the effects of the “research scientist tax”?
- What are the specific problems that a research scientist-centered workforce could address?
- What tools does NSF have to affect academic-employment structures? Are there ways to incentivize the employment of research scientists within a project-based funding framework?
- Are there ways to relax grant-funding constraints such that PIs could hire contract research scientists when appropriate?
- In what areas of research and education does the faculty-as-manager paradigm dominate and in what areas are senior research scientists critical but currently unavailable? To what extent do these areas (problematically) overlap?
- How could career research scientists support NSF’s core mission of “advancing research and education” (including by training graduate students)?
- In an industry-employment landscape that provides highly paid opportunities for career research scientists, how can NSF support universities in talent retention?
- What best practices for hiring and retaining career research scientists can be gleaned from existing employment models in national labs and industry?
- Are there tools that would increase the prestige and attractiveness of non-faculty but research-oriented positions within academia?
The primary audience for the workshop will be NSF leadership and policymakers. The output of the workshop should be a report suggesting a clear, actionable path forward for those stakeholders to pursue.
NIH should pursue an analogous fact-finding effort, possibly structured as a working group of the Advisory Committee to the Director. This working group would identify strategies for incentivizing labs to hire professional staff members, including expert lab technicians, professional biostatisticians, and research software engineers (RSEs). The group would ultimately recommend to the NIH Director actions that the agency can take to expand the roles of career research scientists in the academic sector. The working group would address questions similar to those explored in the NSF workshop.
Part 2. Launch two pilot projects to begin expanding opportunities for career research scientists.
Pilot 1. Create a new NSF initiative to solicit and fund requests for research software engineer (RSE) support.
Research software engineers (RSEs) build and maintain research software, and train scientists to use that software. Incentivizing the creation of long-term RSE positions at universities will increase scientific productivity and build the infrastructure for sustained scientific progress in the academic sector. Though a wide range of STEM disciplines could benefit from RSE involvement, NSF’s Computer and Information Science and Engineering (CISE) Directorate is a good place to start expanding support for RSEs in academic projects.
CISE has previously invested in nascent support structures for professional staff in software and computing fields. The CISE Research Initiation Initiative (CRII) was created to build research independence among early-career researchers working in CISE-related fields by funding graduate-student appointments. Much CRII-funded work involves producing — and in turn, depends on — shared community software. Similarly, the Campus Research Computing Consortium (CaRCC) and RCD Nexus are NSF-supported programs focused on creating guidelines and resources for campus research computing operations and infrastructure. Through these two programs, NSF is helping universities build a foundation of (i) software production and (ii) computing hardware and infrastructure needed to support that software.
However, effective RSEs are crucial for progress in scientific fields outside of CISE’s domain. For example, one of this memo’s authors has personal experience with NSF-funded geosciences research. PIs working in this field are desperate for funding to hire RSEs, but do not have access to funding for that purpose. Instead, they depend almost entirely on graduate students.
As a component of the workshop recommended above, NSF should highlight other research areas hamstrung by an acute need for RSEs. In addition, CISE should create a follow-on CISE Software Infrastructure Initiative (CSII) that solicits and funds requests from pre-tenure academic researchers in a variety of fields for RSE support. Requests should explain how the requested RSE would (i) catalyze cutting-edge research, and (ii) maintain critical community open-source scientific software. Because academia severely lacks strong mentorship in software engineering, a specific goal of CSII funding should be to support at least a 1:3 ratio of RSEs to graduate students in funded labs. Creative evaluation mechanisms will be needed to assess the success of CSII, but the ultimate aim is a community of university researchers productively using software created and supported by RSEs hired through CSII funding.
Pilot 2. Provide grants to support “co-founded” research labs jointly led by an established professor or principal investigator (PI) working alongside an experienced career research scientist.
Academic PIs (typically faculty) normally lead their labs and research groups alone. This state of affairs contributes to high rates of burnout and, in turn, to poorer research outcomes. In some cases, starting an ambitious new project or company with a co-founder makes the endeavor more likely to succeed while being less stressful and isolating. A co-founder can provide a complementary set of skills. For example, the startup incubator Y Combinator is well known for wanting teams to include a visionary and manager CEO working alongside a builder and designer CTO. By contrast, academic PIs are expected to be talented at all aspects of running a modern scientific lab. Developing mechanisms to help scientists come together and benefit from complementary skill sets should be a high priority for science-funding agencies.
We recommend that NSF and/or NIH create a pilot grant program to fund co-founded research labs at universities. Formally co-founded research groups have been successful across scientific domains (e.g., the AbuGoot Lab at MIT and the Carpenter-Singh Lab at the Broad Institute), but remain quite rare. Federal grants for co-founded research labs would build on this proof of concept by competitively awarding 5–7 years of salary and equipment funding to support a lab jointly run by an early-career PI and a career research scientist. A key anticipated benefit of this grant program is increased retention of outstanding researchers in positions that enable them to keep “doing the science.” Currently, the most talented STEM researchers become faculty members or leave academia altogether. Career research scientist positions simply cannot offer competitive levels of compensation and prestige. Creating a new, high-profile, grant-funded opportunity for STEM talent to remain in hands-on university lab positions could help shift the status quo. Creating a pathway for co-founded and co-led research labs would also help PIs avoid isolation and burnout while building more robust, healthy, and successful research teams.
Conclusion
Many breakthroughs in scientific progress have required massive funding and national coordination. This is not one of them. All that needs to be done is to allow expert research scientists to do the hands-on work that they’ve been trained to do. The scientific status quo prevents our nation’s basic research enterprise from achieving its full potential, and from harnessing that potential for the common good. Strengthening and elevating the role of career research scientists in the academic sector will empower existing STEM talent to drive scientific progress forward.
Frequently Asked Questions
Are there successful models for senior, hands-on technical careers outside of academia?
Yes. The tech sector is a good example. Multiple tech companies have developed senior individual contributor (IC) career paths. These IC career paths allow people to grow their influence while remaining mostly in a hands-on technical role. The most common role of a senior software engineering IC is that of the “tech lead,” guiding the technical decision-making and execution of a team. Other paths might involve prototyping and architecting a critical new system or diving in and solving an emergency problem. For more details on this kind of career, see the Staff Engineer book and accompanying discussion.
Why does this matter for U.S. competitiveness?
The United States has long been the international leader in scientific progress, but that position is being threatened as more countries develop the human capital and infrastructure to compete in a knowledge-oriented economy. In an era where humankind faces mounting existential risks requiring scientific innovation, maintaining U.S. scientific leadership is more important than ever. This requires retaining high-level scientific talent in hands-on, basic research activities. But that goal is undermined by the current structure of employment in American academic science.
Besides NSF and NIH, which agencies could act on these recommendations?
Key federal STEM-funding agencies that could also consider ways to support and elevate career research scientist positions include the Departments of Agriculture, Defense, and Energy, as well as the National Aeronautics and Space Administration (NASA).
Regulating Use of Mobile Sentry Devices by U.S. Customs and Border Protection
Summary
Robotic and automated systems have the potential to remove humans from dangerous situations, but their current intended use as aids or replacements for human officers conducting border patrols raises ethical concerns if not regulated to ensure that this use “promot[es] the safety of the officer/agent *and the public*” (emphasis added). U.S. Customs and Border Protection (CBP) should update its use-of-force policy to cover the use of robotic and other autonomous systems for CBP-specific applications that differ from the military applications assumed in existing regulations. The most relevant existing regulation, Department of Defense Directive 3000.09, governs how semi-autonomous weapons may be used to engage with enemy combatants in the context of war. This use case is quite different from mobile sentry duty, which may include interactions with civilians (whether U.S. citizens or migrants). With robotic and automated systems about to come into regular use at CBP, the agency should proactively issue regulations to forestall adverse effects—specifically, by only permitting use of these systems in ways that presume all encountered humans to be non-combatants.
Challenge and Opportunity
CBP is currently developing mobile sentry devices as a new technology to force-multiply its presence at the border. Mobile sentry devices, such as legged and flying robots, have the potential to reduce deaths at the border by making it easier to locate and provide aid to migrants in distress. According to an American Civil Liberties Union (ACLU) report, 22% of migrant deaths between 2010 and 2021 that involved an on-duty CBP agent or officer were caused by medical distress that began before the agent or officer arrived on the scene. However, the eventual use cases, rules of engagement, and functionalities of these robots are unclear. If not properly regulated, mobile sentry devices could also be used to harm or threaten people at the border—thereby contributing to the 44% of deaths that occurred as a direct result of vehicular or foot pursuit by a CBP agent. Regulations on mobile sentry device use—rather than merely acquisition—are needed because even originally unarmed devices can be weaponized after purchase. Even devices that remain unarmed can harm civilians with a limb or propeller.
Existing Department of Homeland Security (DHS) regulations governing autonomous systems seek to minimize technological bias in artificially intelligent risk-assessment systems. Existing military regulations seek to minimize risks of misused or misunderstood capabilities for autonomous systems. However, no existing federal regulations govern how uncrewed vehicles, whether remotely controlled or autonomous, can be used by CBP. The answer is not as simple as extending military regulations to the CBP. Military regulations governing autonomous systems assume that the robots in question are armed and interacting with enemy combatants. This assumption does not apply to most, if not all, possible CBP use cases.
With CBP already testing robotic dogs for deployment on the Southwestern border, the need for tailored regulation is pressing. Recent backlash over the New York Police Department testing similar autonomous systems makes this topic even more timely. While the robots used by CBP are currently unarmed, the same company that developed the robots being tested by CBP is working with another company to mount weapons on them. The rapid pace at which these systems are being developed and manufactured means that policies governing their use should be implemented before CBP has fully incorporated such systems into its workflows, and before the companies that build these systems have formed a powerful enough lobby to resist appropriate oversight.
Plan of Action
CBP should immediately update its Use of Force policy to include restrictions on use of force by mobile sentry devices. Specifically, CBP should add a chapter to the policy with the following language:
- A “Mobile Sentry Device” should be defined as any remotely controlled, autonomous, or semi-autonomous mobile technology used for surveillance. Examples of Mobile Sentry Devices include self-driving cars, legged robots, or quadcopter drones.
- No amount of force may be determined “reasonable” if administered by a Mobile Sentry Device, whether the Device is (i) completely controlled by an agent or officer, or (ii) operating in an autonomous or semi-autonomous mode.
- No Mobile Sentry Device may be authorized to administer Lethal Force, Less-Lethal Force, or any type of force applied directly by contact with the Device (i.e., contact equivalent to an “Empty Hand” technique). For example, a legged robot may not be used to discharge a firearm, disperse Oleoresin Capsicum spray (pepper spray), or strike a human with a limb.
- A Mobile Sentry Device may not be used as a Vehicular Immobilization Device (or used to deploy such a device), whether the Mobile Sentry Device is (i) completely controlled by an agent or officer, or (ii) operating in an autonomous or semi-autonomous mode.
- When powered on, Mobile Sentry Devices must maintain a distance of at least two feet from any humans not authorized to operate the Device. The Device and its operator are responsible for maintaining this distance.
- Mobile Sentry Devices may not be used to detain or perform arrests, nor to threaten or intimidate with the implicit threat of detainment or arrest.
- A Mobile Sentry Device may be used to administer humanitarian aid or provide a two-way visual or auditory connection to a CBP officer or agent.
- When approaching people to offer humanitarian aid, the Device must use de-escalation techniques to indicate that it is not a threat. These techniques will necessarily vary based on the specific technology. Some examples might include a flying device landing and immediately unfolding a screen playing a non-threatening video, or a legged device sitting with its legs underneath it and cycling through non-threatening audio recordings in multiple languages.
- When used for humanitarian purposes, the Device may not touch its human target(s) or request them to touch it. To transfer an item (such as food, water, or emergency medical supplies) to the target(s), the Device must drop a package containing the items while maintaining at least two feet of distance from the closest person.
- When used to provide a two-way visual or auditory connection with a CBP officer or agent, the Device must indicate that such a connection is about to be formed and indicate when the connection is broken. For example, the Device could use an audio clip of a ringing phone to signal that a two-way audio connection to a CBP officer is about to commence.
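Though the restrictions above are regulatory rather than technical, they are concrete enough that a vendor could encode them as hard interlocks in device control software, so that prohibited actions are refused at the firmware level regardless of operator input. The sketch below is purely illustrative: the class, method names, and person-detection interface are hypothetical and not drawn from any CBP or vendor system; only the two-foot standoff and the blanket force prohibition come from the proposed policy text.

```python
# Illustrative only: a hypothetical runtime interlock encoding the proposed
# Use of Force restrictions. All names and interfaces are assumptions.

MIN_STANDOFF_FT = 2.0  # proposed minimum distance from non-operators


class SentryInterlock:
    def __init__(self, authorized_operators):
        self.authorized_operators = set(authorized_operators)

    def may_apply_force(self):
        # Proposed rule: no amount of force is ever "reasonable" for a
        # Mobile Sentry Device, whether autonomous or remotely controlled.
        return False

    def check_standoff(self, detected_people):
        # detected_people: list of (person_id, distance_ft) pairs from a
        # hypothetical perception system. Returns IDs of people the device
        # is too close to, so the controller can retreat before proceeding.
        return [pid for pid, dist in detected_people
                if pid not in self.authorized_operators
                and dist < MIN_STANDOFF_FT]

    def may_drop_aid(self, nearest_person_ft):
        # Humanitarian delivery is permitted only at or beyond standoff.
        return nearest_person_ft >= MIN_STANDOFF_FT


interlock = SentryInterlock(authorized_operators={"op-1"})
assert interlock.may_apply_force() is False
assert interlock.check_standoff([("op-1", 0.5), ("p-2", 1.0)]) == ["p-2"]
assert interlock.may_drop_aid(3.0) is True
```

A tech-agnostic policy of this kind lends itself to such interlocks precisely because the rules do not depend on how autonomous the device is.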
These regulations should go into effect before Mobile Sentry Devices are moved from the testing phase to the deployment phase. Related new technology, whether it increases capabilities for surveillance or autonomous mobility, should undergo review by a committee that includes representatives from the National Use of Force Review Board, migrant rights groups, and citizens living along the border. This review should mirror the process laid out in the Community Control over Police Surveillance project, which has already been successfully implemented in multiple cities.
Conclusion
U.S. Customs and Border Protection (CBP) is developing an application for legged robots as mobile sentry devices at the southwest border. However, the use cases, functionality, and rules of engagement for these robots remain unclear. New regulations are needed to forestall adverse effects of autonomous robots used by the federal government for non-military applications, such as those envisioned by CBP. These regulations should specify that mobile sentry devices can only be used as humanitarian aids, and must use de-escalation methods to indicate that they are not threatening. Regulations should further mandate that mobile sentry devices maintain clear distance from human targets, that use of force by mobile sentry devices is never considered “reasonable,” and that mobile sentry devices may never be used to pursue, detain, or arrest humans. Such regulations will help ensure that the legged robots currently being tested as mobile sentry devices by CBP—as well as any future mobile sentry devices—are used ethically and in line with CBP’s goals, alleviating concerns for migrant advocates and citizens along the border.
Regulations on purchasing are not sufficient to prevent mobile sentry device technology from being weaponized after it is purchased. However, DHS could certainly also consider updating its acquisition regulations to include clauses resulting in fines when mobile sentry devices acquired by the CBP are not used for humanitarian purposes.
DOD Directive 3000.09 regulates the use of autonomous weapons systems in the context of war. For an autonomous, semi-autonomous, or remotely controlled system that is deployed with the intention to be a weapon in an active battlefield, this regulation makes sense. But applications of robotic and automated systems currently being developed by DHS are oriented towards mobile sentry duty along stretches of American land where civilians are likely to be found. This sentry duty is likely to be performed by uncrewed ground robots following GPS breadcrumb trails on predetermined, regular patrols along the border. Applying Directive 3000.09, the use of a robot to kill or harm a person during a routine patrol along the border would not be a violation as long as a human had “meaningful control” over the robot at that time. The upshot is that mobile sentry devices used by CBP should be subject to stricter regulations.
Most companies selling legged robots in the United States have explicit end-user policies prohibiting the use of their machines to harm or intimidate humans or animals. Some companies selling quadcopter drones have similar policies. But these policies lack any enforcement mechanism. As such, there is a regulatory gap that the federal government must fill.
No, but it is an immediately actionable strategy. An alternative—albeit more time-consuming—option would be for CBP to form a committee comprising representatives from the National Use of Force Review Board, the military, migrant-rights activist groups, and experts on ethics to develop a directive for CBP’s use of mobile sentry devices. This directive should be modeled after DOD Directive 3000.09, which regulates the use of lethal autonomous weapons systems by the military. As the autonomous systems in DOD Directive 3000.09 are assumed to be interacting with enemy combatants while CBP’s jurisdiction consists mostly of civilians, the CBP directive should be considerably more stringent than Directive 3000.09.
The policies proposed in this memo govern what mobile sentry devices are and are not permitted to do, regardless of the extent to which humans are involved in device operation and/or the degree of autonomy possessed by the technology in question. They could therefore be applied consistently as the technology continues to develop. AI is always changing and improving, and by creating policies that are tech-agnostic, CBP can avoid updating regulations as mobile sentry device technology evolves.
CLimate Improvements through Modern Biotechnology (CLIMB) — A National Center for Bioengineering Solutions to Climate Change and Environmental Challenges
Summary
Tackling pressing environmental challenges — such as climate change, biodiversity loss, environmental toxins, and pollution — requires bold, novel approaches that can act at the scale and speed needed to stop irreversible damage. Environmental biotechnology can provide viable and effective solutions. The America COMPETES Act, if passed, would establish a National Engineering Biology Research and Development Initiative. To lead the way in innovative environmental protection, a center should be created within this initiative that focuses on applying biotechnology and bioengineering to environmental challenges. The CLimate Improvements through Modern Biotechnology (CLIMB) Center will fast-track our nation’s ability to meet domestic and international decarbonization goals, remediate contaminated habitats, detect toxins and pathogens, and deliver on environmental-justice goals.
The CLIMB Center would (i) provide competitive grant funding across three key tracks — bioremediation, biomonitoring, and carbon capture — to catalyze comprehensive environmental biotechnology research; (ii) house a bioethics council to develop and update guidelines for safe, equitable environmental biotechnology use; (iii) manage testbeds to efficiently prototype environmental biotechnology solutions; and (iv) facilitate public-private partnerships to help transition solutions from prototype to commercial scale. Investing in the development of environmental biotechnology through the CLIMB Center will overall advance U.S. leadership on biotechnology and environmental stewardship, while helping the Biden-Harris Administration deliver on its climate and environmental-justice goals.
Challenge and Opportunity
The rapidly advancing field of biotechnology has considerable potential to aid the fight against climate change and other pressing environmental challenges. Fast and inexpensive genetic sequencing of bacterial populations, for instance, allows researchers to identify genes that enable microorganisms to degrade pollutants and synthesize toxins. Existing tools like CRISPR, as well as up-and-coming techniques such as retron-library recombineering, allow researchers to effectively design microorganisms that can break down pollutants more efficiently or capture more carbon. Biotechnology as a sector has been growing rapidly over the past two decades, with the global market projected to be worth nearly $3.5 trillion by 2030. These and numerous other biotechnological advances are already being used to transform sectors like medicine (which comprises nearly 50% of the biotechnology sector), but have to date been underutilized in the fight for a more sustainable world.
One reason why biotechnology and bioengineering approaches have not been widely applied to advance climate and environmental goals is that returns on investment are too uncertain, too delayed, or too small to motivate private capital — even if solving pressing environmental issues through biotechnology would deliver massive societal benefits. The federal government can act to address this market failure by creating a designated environmental-biotechnology research center as part of the National Engineering Biology Research and Development Initiative (America COMPETES Act, Sec. 10403). Doing so will help the Biden-Harris Administration achieve its ambitious targets for climate action and environmental justice.
Plan of Action
The America COMPETES Act would establish a National Engineering Biology Research and Development Initiative “to establish new research directions and technology goals, improve interagency coordination and planning processes, drive technology transfer to the private sector, and help ensure optimal returns on the Federal investment.” The Initiative is set to be funded through agency contributions and White House Office of Science and Technology Policy (OSTP) budget requests. The America COMPETES Act also calls for creation of undesignated research centers within the Initiative. We propose creating such a center focused on environmental-biotechnology research: The CLimate Improvements through Modern Biotechnology (CLIMB) Center. The Center would be housed under the new National Science Foundation (NSF) Directorate for Technology, Innovation and Partnerships and co-led by the NSF Directorate for Biological Sciences. The Center would take a multipronged approach to support biotechnological and bioengineering solutions to environmental and climate challenges and rapid technology deployment.
We propose the Center be funded with an initial commitment of $60 million, with continuing funds of $300 million over five years. The main contributing federal agencies and research offices would be determined by OSTP but should at minimum include NSF; the Departments of Agriculture, Defense, and Energy (USDA, DOD, and DOE); the Environmental Protection Agency (EPA); the National Oceanic and Atmospheric Administration (NOAA); and the U.S. Geological Survey (USGS).
Specifically, the CLIMB Center would:
- Provide competitive grant funding across three key tracks — bioremediation, biomonitoring, and carbon capture — to catalyze comprehensive environmental-biotechnology research.
- House a bioethics council to develop and update guidelines for safe, equitable environmental-biotechnology use.
- Manage testbeds to efficiently prototype environmental-biotechnology solutions.
- Facilitate public-private partnerships to help transition solutions from prototype to commercial scale.
More detail on each of these components is provided below.
Component 1: Provide competitive grant funding across key tracks to catalyze comprehensive environmental biotechnology research.
The CLIMB Center will competitively fund research proposals related to (i) bioremediation, (ii) biomonitoring, and (iii) carbon capture. These three key research tracks were chosen to span the range of approaches to environmental problems, from prevention and monitoring to large-scale remediation. Within these tracks, the Center’s research portfolio will span the entire technology-development pathway, from early-stage research to market-ready applications.
Track 1: Bioremediation
Environmental pollutants are detrimental to ecosystems and human health. While the Biden-Harris Administration has taken strides to prevent the release of pollutants such as per- and polyfluoroalkyl substances (PFAS), many pollutants that have already been released into the environment persist for years or even decades. Bioremediation is the use of biological processes to degrade contaminants within the environment. It is either done within a contaminated site (in-situ bioremediation) or away from it (ex-situ). Traditional in-situ bioremediation is primarily accomplished by bioaugmentation (addition of pollutant-degrading microbes) or by biostimulation (supplying oxygen or nutrients to stimulate the growth of pollutant-degrading microbes that are already present). While these approaches work, they are costly, time-consuming, and cannot be done at large spatial scales.
Environmental biotechnology can enhance the ability of microbes to degrade contaminants quickly and at scale. Environmental-biotechnology approaches produce bacteria that are better able to break down toxic chemicals, decompose plastic waste, and process wastewater. But the potential of environmental biotechnology to improve bioremediation is still largely untapped, as both the technologies themselves and the regulatory regimes governing them still need to be developed to enable widespread use. CLIMB Center research grants could support the early discovery phase to identify more gene targets for bioremediation as well as efforts to test more developed bioremediation technologies for scalability.
Track 2: Biomonitoring
Optimizing responses to environmental challenges requires collection of data on pollutant levels, toxin prevalence, spread of invasive species, and much more. Conventional approaches to environmental monitoring (like mass spectrometry or DNA amplification) require specialized equipment, are low-throughput, and need highly trained personnel. In contrast, biosensors—devices that use biological molecules to detect compounds of interest—provide rapid, cost-effective, and user-friendly alternatives to measure materials of interest. Due to these characteristics, biosensors enable users to sample more frequently and across larger spatial scales, resulting in more accurate datasets and enhancing our ability to respond. Detection of DNA or RNA is key for identifying pathogens, invasive species, and toxin-producing organisms. Standard DNA- and RNA-detection techniques like polymerase chain reaction (PCR) require specialized equipment and are slow. By contrast, biosensors detect minuscule amounts of DNA and RNA in minutes (rather than hours) and without the need for DNA/RNA amplification. SHERLOCK and DETECTR are two examples of highly successful, marketed tools used for diagnostic applications such as detecting SARS-CoV-2 and for ecological purposes such as distinguishing invasive fish species from similar-looking native species. Moving forward, these technologies could be repurposed for other environmental applications, such as monitoring for the presence of algal toxins in water used for drinking, recreation, agriculture, or aquaculture. Furthermore, while existing biosensors can detect DNA and RNA, detecting compounds like pesticides, DNA-damaging compounds, and heavy metals requires a different class of biosensor. CLIMB Center research grants could support development of new biosensors as well as modification of existing biomonitoring tools for new applications.
Track 3: Carbon capture
Rising atmospheric levels of greenhouse gases like carbon dioxide are driving irreversible climate change. The problem has become so bad that it is no longer sufficient to merely reduce future emissions—limiting average global warming below 2°C by 2100 will require achieving negative emissions through capture and removal of atmospheric carbon. A number of carbon-capture approaches are currently being developed. These range from engineered approaches such as direct air capture, chemical weathering, and geologic sequestration to biological approaches such as reforestation, soil amendment, algal cultivation, and ocean fertilization.
Environmental-biotechnology approaches such as synthetic biology (“designed biology”) can vastly increase the amount of carbon that could be captured by natural processes. For instance, plants and crops can be engineered to produce larger root biomass that sequesters more carbon into the soil, or to store more carbon in harder-to-break-down molecules such as lignin, suberin, or sporopollenin instead of more easily metabolized sugars and cellulose. Alternatively, carbon capture efficiency can be improved by modifying enzymes in the photosynthetic pathway or limiting photorespiration through synthetic biology. Microalgae in particular hold great promise for enhanced carbon capture. Microalgae can be bioengineered to not only capture more carbon but also produce a greater density of lipids that can be used for biofuel. The potential for synthetic biology and other environmental-biotechnology approaches to enhance carbon capture is vast, largely unexplored, and certainly under-commercialized. CLIMB Center research grants could rapidly advance such approaches.
Component 2: House a bioethics council to develop and update guidelines for safe, equitable environmental-biotechnology use.
The ethical, ecological, and social implications of environmental biotechnology must be carefully considered and proactively addressed to avoid unintended damage and to ensure that benefits are distributed equitably. As such, the CLIMB Center should assemble a bioethics council comprising representatives from:
- The NSF’s Directorate for Biological Sciences, which oversees funding biological research and has insights into the ethical implications of such technologies.
- The DOE’s Offices of Science, Energy Efficiency and Renewable Energy, and Fossil Energy and Carbon Management, as well as its Joint Genome Institute and ARPA-E. These entities each have interests in proposed CLIMB Center research, especially research related to carbon capture.
- The National Institute of Standards and Technology (NIST)’s Biomarker and Genomic Sciences Group, which sets standards for tracking, monitoring, and classifying biological and genomic tools.
- NOAA’s National Ocean Service Office, which oversees the Harmful Algal Bloom Monitoring Network and could speak to the implications of environmental monitoring in ocean environments.
- The National Institutes of Health’s Office of Science Policy and Office of Biosafety, Biosecurity, and Emerging Biotechnology, which are responsible for assessing the ethical implications of emerging biotechnologies. Representatives from these offices can also provide insights and lessons learned from the biomedical field.
- The EPA’s Office of Land and Emergency Management, which oversees the Superfund program and will identify key bioremediation priorities and feasible, safe deployment strategies.
- The USGS’s Ecosystems Division, which has interests in species and land management, biological threats, and environmental health and toxins monitoring.
- The USDA’s Office of the Chief Scientist, Natural Resources Conservation Center, Agricultural Research Service, and Forest Service, which oversee research and management efforts in agriculture, energy, and land management.
- The White House Environmental Justice Advisory Council, which was recently established by Executive Order 14008 and provides recommendations for environmental-justice issues related to climate-change mitigation along with toxins, pesticides, and pollution reduction. Council representatives can provide guidance for equitable ways to deploy technologies that prioritize underserved communities.
The bioethics council will identify key ethical and equity issues surrounding emerging environmental biotechnologies. The council will then develop guidelines to ensure transparency of research to the public, engagement of key stakeholders, and safe and equitable technology deployment. These guidelines will ensure that there is a framework for the use of field-ready environmental-biotechnology devices, and that risk assessment is built consistently into regulatory-approval processes. The council’s findings and guidelines will be reported to the National Engineering Biology Research and Development Initiative’s interagency governance committee, which will work with federal and state regulatory agencies to incorporate guidance and streamline regulation and oversight of environmental-biotechnology products.
Component 3. Manage testbeds to efficiently prototype environmental-biotechnology solutions.
The “valley of death” separating early research and prototyping from commercialization is a well-known bottleneck hampering innovation. This bottleneck could certainly inhibit innovation in environmental biotechnology, given that environmental-biotechnology tools are often intended for use in complex natural environments that are difficult to replicate in a lab. The CLIMB Center should serve as a centralized node to connect researchers with testing facilities and test sites where environmental biotechnologies can be properly validated and risk-assessed. There are numerous federal facilities that could be leveraged for environmental-biotechnology testbeds, including:
- DOE National Laboratories
- Smithsonian Institution field stations
- NOAA field laboratories
- NIST Laboratories and Research Test Beds
- U.S. Forest Service Research Stations
- USDA National Wildlife Research Center Stations
- USGS Science Centers
- NSF Long Term Ecological Research Stations
- The Centers for Disease Control and Prevention’s Biotechnology Core Facility
- DOD Environmental Laboratories
The CLIMB Center could also work with industry, state, and local partners to establish other environmental-biotechnology testbeds. Access to these testbeds could be provided to researchers and technology developers as follow-on opportunities to CLIMB Center research grants and/or through stand-alone testing programs managed by the CLIMB Center.
Component 4: Facilitate public-private partnerships to help transition solutions from prototype to commercial scale.
Public-private partnerships have been highly successful in advancing biotechnology for medicine. Operation Warp Speed, to cite one recent and salient example, enabled research, development, testing, and distribution of vaccines against SARS-CoV-2 at unprecedented speeds. Public-private partnerships could play a similarly key role in advancing the efficient deployment of market-ready environmental-biotechnology devices. To this end, the CLIMB Center can reduce barriers to negotiating partnerships between environmental engineers and biotechnology manufacturers. For example, the Center can develop templates for Memoranda of Understanding (MOUs) and Collaborative Research Agreements (CRAs) to facilitate the initial establishment of partnerships, as well as help connect interested parties. The Center could also facilitate access for both smaller companies and researchers to existing government infrastructure needed to deploy these technologies. For example, an established public-private partnership team could have access to government-managed gene and protein libraries, microbial strain collections, sequencing platforms, computing power, and other specialized equipment. The Center could further negotiate with companies to identify resources (equipment, safety data, and access to employee experts) they are willing to provide. Finally, the Center could identify and fast-track opportunities where the federal government would be uniquely suited to serve as an end user of biotechnology products. For instance, in the bioremediation space, the EPA’s purview over management and cleanup of Superfund sites would immensely benefit from novel, safe, and effective tools to quickly address pollution and restore habitats.
Conclusion
Environmental and climate challenges are some of the most pressing problems facing society today. Fortunately, advances in biotechnology that enable manipulation, acceleration, and improvement of natural processes offer powerful tools to tackle these challenges. The federal government can accelerate capabilities and applications of environmental biotechnology by establishing the CLimate Improvements through Modern Biotechnology (CLIMB) Center. This center, established as part of the National Engineering Biology Research and Development Initiative, will be dedicated to advancing research, development, and commercialization of environmental biotechnology. CLIMB Center research grants will focus on advances in bioremediation, biomonitoring, and biologically assisted carbon capture, while other CLIMB Center activities will scale and commercialize emerging environmental biotechnologies safely, responsibly, and equitably. Overall, the CLIMB Center will further solidify U.S. leadership in biotechnology while helping the Biden-Harris Administration meet its ambitious climate, energy, and environmental-justice goals.
Environmental biotechnology can help address wide-reaching, interdisciplinary issues with huge benefits for society. Many of the applications for environmental biotechnology are within realms where the federal government is an interested or responsible party. For instance, bioremediation largely falls within governmental purview. Creating regulatory guidelines in parallel to the development of these new technologies will enable an expedited rollout. Furthermore, environmental biotechnology approaches are still novel and using them on a wide scale in our natural environments will require careful handling, testing, and regulation to prevent unintended harm. Here again, the federal government can play a key role to help validate and test technologies before they are approved for use on a wide scale.
Finally, the largest benefits from environmental biotechnology will be societal. The development of such technology should hence be largely driven by its potential to improve environmental quality and address environmental injustices, even if these are not profitable. As such, federal investments are better suited than private investments to help develop and scale these technologies, especially during early stages when returns are too small, too uncertain, and too future-oriented.
Bioengineered products already exist and are in use, and bioengineering innovations and technology will continue to grow over the next century. Rather than not develop these tools and lag behind other nations that will continue to do so, it is better to develop a robust regulatory framework that will address the critical ethical and safety concerns surrounding their uses. Importantly, each bioengineered product will present its own set of risks and challenges. For instance, a bacterial species that has been genetically engineered to metabolize a toxin is very different from an enzyme or DNA probe that could be used as a biosensor. The bacteria are living, can reproduce, and can impact other organisms around them, especially when released into the environment. In contrast, the biosensor probe would contain biological parts (not a living organism) and would only exist in a device. It is thus critical to ensure that every biotechnology product, with its unique characteristics, is properly tested, validated, and designed to minimize its environmental impact and maximize societal benefits. The CLIMB Center will greatly enhance the safety of environmental-biotechnology products by facilitating access to test beds and the scientific infrastructure necessary to quantify these risk-benefit trade-offs.
The Biden-Harris Administration has recognized the vast disparities in environmental quality and exposure to contaminants that exist across communities in the United States. Communities of color are more likely to be exposed to environmental hazards and bear the burden of climate change-related events. For example, the closer the distance to a Superfund site—a site deemed contaminated enough to warrant federal oversight—the higher the proportion of Black families and the lower the proportion of White families. To address these disparities, the Administration issued Executive Order 14008 to advance environmental justice efforts. Through this order, President Biden created an Environmental Justice Advisory Council and launched the Justice40 initiative, which mandates that 40% of the benefits from climate investments be delivered to underserved communities. The Justice40 initiative includes priorities such as the “remediation and reduction of legacy pollution, and the development of critical clean water infrastructure.” The Executive Order also calls for the creation of a “community notification program to monitor and provide real-time data to the public on current environmental pollution…in frontline and fenceline communities — places with the most significant exposure to such pollution.” Environmental biotechnology offers an incredible opportunity to advance these goals by enhancing water treatment and bioremediation and enabling rapid and efficient monitoring of environmental contaminants.
President Biden has set targets for a 50–52% reduction (relative to 2005 levels) in net greenhouse-gas pollution by the year 2030, and has directed federal government operations to reach 100% carbon-pollution-free electricity by 2030 (Executive Order 14057). It is well established that meeting such climate goals and limiting global warming to less than 2°C will require negative emissions technologies (carbon capture) in addition to reducing the amount of emissions created by energy and other sectors. Carbon-capture technologies will need to be widely available, cost-effective, and scalable. Environmental biotechnology can help address these needs by enhancing our capacity for biological carbon capture through the use of organisms such as microalgae and macroalgae, which can even serve the dual role of producing biofuels, feedstock, and other products in a carbon-neutral or carbon-negative way. The CLIMB Center can establish the United States as the global leader in advancing both biotechnology and the many untapped environmental and climate solutions it can offer.
There are multiple avenues for funding foundational research and development in bioengineering. Federal agencies and offices that currently fund bioengineering with an environmental focus include (but are not necessarily limited to):
- DOE’s Office of Science’s various research programs, ARPA-E, and DOE’s Bioenergy Technologies Office
- EPA’s Office of Research and Development, Science to Achieve Results (STAR) Program
- National Science Foundation’s Biological Sciences and Engineering Directorates
- USDA’s National Institute of Food and Agriculture, Biotechnology Risk Assessment Research Grants Program
- NOAA’s Office of Ocean Exploration and Research
- NASA’s Space Technology Mission Directorate
- The National Institutes of Health’s National Institute of Environmental Health Sciences and National Institute of Biomedical Imaging and Bioengineering
- DOD’s DARPA, Biological Technologies Office
Research funding provided by these offices often includes a biomedical focus. The research and development funding provided by the CLIMB Center would seek to build upon these efforts and help coordinate directed research towards environmental-biotechnology applications.
Compared to conventional analytical techniques, biosensors are fast, cost-effective, easy to use, and largely portable. However, biosensors are not always poised to take over from conventional techniques. In many cases, regulatory bodies have approved specific analytical techniques for compliance, and novel biosensors are rarely included in that approved suite, even though biosensors can complement conventional techniques, such as by allowing regulators to rapidly screen more samples and prioritize those that require further processing with approved conventional methods. Moreover, conventional methods provide only snapshot measurements, potentially missing critical time periods when toxins, contaminants, or pathogens go unnoticed. Biosensors, on the other hand, could be used to continuously monitor a given area. For example, algae can accumulate (bloom) and produce potent toxins that concentrate in seafood. To protect human health, seafood is tested using analytical chemical approaches (direct measurement of toxins) or biological assays (health monitoring in exposed laboratory animals), which requires regulators to decide when it is best to sample. However, if a biosensor were deployed in a monitoring array out in the ocean, or made available to the people who collect the seafood, it could serve as an early detection system for the presence of these toxins. This application will become especially important moving forward, since climate change has altered the geographic distribution and seasonality of these algal blooms, making it harder to forecast when it is best to measure seawater and seafood for these toxins.
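The gap between snapshot sampling and continuous monitoring can be illustrated with a small simulation. The sketch below uses entirely hypothetical numbers (a four-week window, weekly lab sampling, a 36-hour toxin spike, an invented safety limit) and is not drawn from any real monitoring program; it simply shows how a short bloom can fall between periodic samples while an hourly sensor catches it at onset.

```python
import random

random.seed(0)

# Hypothetical hourly toxin concentrations over 4 weeks (672 hours).
# Baseline varies around 1 unit; a 36-hour bloom starting at hour 300
# pushes concentrations well above an assumed safety limit of 5 units.
SAFE_LIMIT = 5.0
HOURS = 672
toxin = [random.uniform(0.5, 1.5) for _ in range(HOURS)]
for h in range(300, 336):
    toxin[h] = random.uniform(8.0, 12.0)  # bloom period

# Snapshot sampling: one lab measurement per week (hours 0, 168, 336, 504).
snapshot_hits = [h for h in range(0, HOURS, 168) if toxin[h] > SAFE_LIMIT]

# Continuous biosensor: an hourly reading, flagging every exceedance.
continuous_hits = [h for h in range(HOURS) if toxin[h] > SAFE_LIMIT]

print("weekly snapshots flagged:", snapshot_hits)            # [] — spike missed
print("first biosensor alert at hour:", continuous_hits[0])  # 300 — bloom onset
```

Because the spike here falls entirely between two weekly sampling dates, the snapshot schedule never flags it, while the continuous monitor alerts in the first hour of the bloom; shifting the spike's timing or the sampling schedule changes which events are caught, which is exactly the forecasting problem the passage describes.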
Communities of color are more likely to live near Superfund sites, be disproportionately exposed to pollutants, and bear the heaviest burdens from the effects of climate change. These communities have also been disproportionately affected by unethical environmental and medical-research practices. It is imperative that novel tools designed to improve environmental outcomes benefit these communities and do not cause unintended harm. Guidelines established by the CLIMB Center’s bioethics council, coupled with evaluation of environmental biotechnologies in realistic test beds, will help ensure that this is the case.