Unlocking the U.S. Bioeconomy with the Plant Genome Project

Summary

Plants are an important yet often overlooked national asset. We propose creating a Plant Genome Project (PGP), a robust Human Genome Project-style initiative to build a comprehensive dataset of genetic information on all plant species, starting with the 7,000 plant species that have historically been cultivated for food and prioritizing plants that are endangered by climate change and habitat loss. In parallel, we recommend expanding the National Plant Germplasm System (NPGS) to include genomic-standard repositories that connect plant genetic information to physical seed/plant material. The PGP will mobilize a whole-of-government approach to advance genomic science, lower costs, and increase access to plant genomic information. By creating a fully sequenced national germplasm repository and leveraging modern software and data science tools, we will unlock the U.S. bioeconomy, promote crop innovation, and help enable a diversified, localized, and climate-resilient food system.

Challenge and Opportunity

Plants provide our food, animal feed, medicinal compounds, and the fiber and fuel required for economic development. Plants contribute to biodiversity and are critical for the existence of all other living creatures. Plants also sequester atmospheric carbon, thereby combating climate change and sustaining the health of our planet. 

However, as a result of climate change and human practices, we have been losing plants at an alarming rate. Nearly 40% of the world’s 435,000 unique land plant species are extremely rare and at risk of extinction due to climate change. More than 90% of crop varieties have disappeared from fields worldwide as farmers have abandoned diverse local crop varieties in favor of genetically uniform, commercial varieties. 

We currently depend on just 15 plants to provide almost all of the world’s food, making our global food supply extremely vulnerable to climate change, new diseases, and geopolitical upheaval—problems that will be exacerbated as the world’s population rises to 10 billion by 2050. 

We are in a race against time to stop the loss of plant biodiversity—and at the same time, we desperately need to increase the diversity in our cultivated crops. To do this, we must catalog, decode, and preserve valuable data on all existing plants. Yet more than two decades since we sequenced the first plant genome, genome sequence information exists for only 798 plant species—a small fraction of all plant diversity.

Although large agriculture companies have made substantial investments in plant genome sequencing, this genetic information covers only a small number of crops and is not publicly available. What little information exists is siloed within large corporations rather than openly available to researchers, farmers, or policymakers. This is especially true for nations in the Global South, which are rarely included in genome sequencing projects. Furthermore, the data held in existing germplasm repositories, State Agricultural Experiment Stations, and land-grant universities is not easily accessible online, making it nearly impossible for researchers in both public and private settings to explore. These U.S. government collections and resources of germplasm and herbaria, documented by the Interagency Working Group on Scientific Collections, have untapped potential to catalyze the bioeconomy and mobilize investment in the next generation of plant genetic advancements and, as a result, in food security and new economic opportunities.

In 1990, the United States launched the Human Genome Project (HGP), a shared knowledge-mapping initiative funded by the federal government. We continue to benefit from this initiative, which has identified the cause of many human diseases and enabled the development of new medicines and diagnostics. The HGP had a $5.4 billion price tag in 2017 dollars ($2.7 billion of it from U.S. contributions) but resulted in more than $14.5 billion in follow-on genomics investments that enabled the field to rapidly develop and deploy cutting-edge sequencing and other technologies, driving the cost of sequencing a genome down from $300 million to less than $1,000.

Today, we need a Human Genome Project for plants—a unified Plant Genome Project that will create a global database of genetic information on all plants to increase food security and unlock plant innovation for generations to come. Collecting, sequencing, decoding, and cataloging the nation’s plant species will fill a key gap in our national natural capital accounting strategy. The PGP will complement existing conservation initiatives led by the Office of Science and Technology Policy (OSTP) and other agencies by deepening our understanding of America’s unique biodiversity and its potential benefits to society. Such research and innovation investment would also benefit government initiatives like USAID’s Feed the Future (FTF) Initiative, particularly the Global Food Security Research Strategy’s work on climate-smart agriculture and crop genetic diversity.

PGP-driven advancements in genomic technology and information about U.S. plant genetic diversity will create opportunities to grow the U.S. bioeconomy, create new jobs, and incentivize industry investment. The PGP will also create opportunities to make our food system more climate-resilient and improve national health and well-being. By extending this effort internationally, and ensuring that the Global South is empowered to contribute to and take advantage of these genetic advancements, we can help mitigate climate change, enhance global food security, and promote equitable plant science innovation.

Plan of Action

The Biden Administration should launch a Plant Genome Project to support and enable a whole-of-government approach to advancing plant genomics and the bioeconomy. The PGP will build a comprehensive, open-access dataset of genetic and biological information on all plant species, starting with the 7,000 plant species that have historically been cultivated for food and prioritizing plants that are endangered by climate change and habitat loss. The PGP will convene key stakeholders and technical talent in a novel coalition of partnerships across the public and private sectors. We anticipate that the PGP, like the Human Genome Project, will jump-start new technologies that further drive down the cost of sequencing and advance a new era for plant science innovation and the U.S. bioeconomy. Our plan envisions three phases and seven key actions.

Phase 1: PGP Planning and Formation

Action 1: Create the Plant Genomics and U.S. Bioeconomy Interagency Working Group

The White House OSTP should convene a Plant Genomics and U.S. Bioeconomy Interagency Working Group to coordinate the creation of a Plant Genome Project and initiate consultations with industry, academic, philanthropic, and social sector partners. The Working Group should include representatives from OSTP, the U.S. Department of Agriculture (USDA) and its Agricultural Research Service (ARS), the National Plant Germplasm System, the Department of Commerce, the Department of the Interior, the National Science Foundation (NSF), the National Institutes of Health (NIH), the Smithsonian Institution, the Environmental Protection Agency, the State Department’s Office of the Science and Technology Adviser, and USAID’s Feed the Future Initiative. The Working Group should:

Action 2: Launch a White House Summit on Plant Genomics Innovation and Food Security

The Biden Administration should bring together multi-sector partners (the agriculture industry, farmers, academics, and philanthropy) and agency partners with the expertise, resources, and interest needed to increase domestic food security and climate resilience. The Summit will secure commitments for the PGP’s initial activities and identify ways to harmonize existing data and advances in plant genomics. The Summit and follow-up activities should outline the steps the Working Group will take to identify, combine, and broaden the distribution of and access to existing plant genome data. Since public-private partnerships play a core enabling role in the strategy, the Summit should also identify opportunities for potential partners, novel financing through philanthropy, and international cooperation.

Action 3: Convene Potential International Collaborators and Partners

International cooperation should be explored from the start (beginning with the Working Group and the White House Summit) to ensure that sequencing is not confined to a handful of institutions in the Global North, that countries in the Global South are included, and that all information is made publicly available.

We envision at least one comprehensive germplasm seed bank in each country or geographical region, similar to the Svalbard Global Seed Vault or the Millennium Seed Bank at the Royal Botanic Gardens, Kew, along with sequencing contributions from multiple international organizations such as BGI (formerly the Beijing Genomics Institute) and the Wellcome Sanger Institute.

Phase 2: PGP Formalization and Launch

Action 4: Launch the Plant Genome Research Institute to centralize and coordinate plant genome sequencing

Congress should create a Plant Genome Research Institute (PGRI) to drive plant genomics research and serve as the central owner of U.S. government activities under the PGP, centralizing both funding and federal oversight. We anticipate the PGP would require $2.5 billion over 10 years, with investment frontloaded and funding raised through matched commitments from philanthropic, public, and private sources. The PGRI could be a virtual institute structured as a distributed collaboration among multiple universities and research centers with centralized project management. PGRI funding could also incorporate novel funding mechanisms akin to those of the BRAIN Initiative, drawing on U.S. philanthropy and private sector collaboration (e.g., the Science Philanthropy Alliance). The PGRI would:

Action 5: Expand and Strengthen NPGS-Managed Seed Repositories

We recommend strengthening the distributed seed repositories managed by the U.S. National Plant Germplasm System and building a comprehensive, open-source catalog of plant genetic information tied to physical samples. The NPGS already stores seed collections at state land-grant universities in a collaborative effort to safeguard the genetic diversity of agriculturally important plants; it may need additional funding to expand its work and increase visibility and access.

Action 6: Create a Plant Innovation Fund within AgARDA

The Agriculture Advanced Research and Development Authority (AgARDA) is a USDA-based advanced research projects agency modeled on DARPA but focused on agricultural research. The 2018 Farm Bill authorized AgARDA’s creation to tackle highly ambitious projects likely to have an outsize impact on agricultural and environmental challenges—such as the PGP. The existing AgARDA Roadmap could guide program setup.

Phase 3: Long-Term, Tandem Bioeconomy Investments

Action 7: Bioeconomy Workforce Development and Plant Science Education

Invest in plant science and technical workforce development to build a sustainable foundation for global plant innovation and enable long-term growth in the U.S. bioeconomy. 

Conclusion

We are in a race against time to identify, decode, catalog, preserve, and cultivate the critical biodiversity of the world’s plant species before they are lost forever. By creating the world’s first comprehensive, open-access catalog of plant genetic information tied to physical samples, the Plant Genome Project will unlock plant innovation for food security, help preserve plant biodiversity in a changing climate, and advance the bioeconomy. The PGP’s whole-of-government approach will accelerate a global effort to secure our food systems and the health of the planet while catalyzing a new era of plant science, agricultural innovation, and cooperation.

Frequently Asked Questions
How much will this proposal cost?

We estimate that it would cost ~$2.5 billion to sequence the genomes of all plant species. (For reference, the Human Genome Project cost $5.4 billion in 2017 dollars to sequence just one species.)

Will the PGP access existing private sequence information?

Yes, we recommend actively soliciting existing sequence information from all entities. This data should be validated and quality-checked before being integrated into the PGP.

Who will undertake the sequencing effort?

The newly created Plant Genome Research Institute (PGRI) will coordinate the PGP. The structure and operations of the PGRI will follow recommendations from the OSTP-commissioned Stakeholder Working Group. All work will be conducted in partnership with agencies like the U.S. Department of Agriculture, National Institutes of Health, National Science Foundation, private companies, and public academic institutions.

What about existing sequencing efforts and seed banks?

Existing sequencing efforts and seed banks will be included within the framework of the PGP.

Is the PGP a national or international effort?

The PGP will start as a national initiative, but to have the greatest impact it must be an international effort like the Human Genome Project. The White House Summit and Stakeholder Working Group will help influence scope and staging. The extinction crisis is a global problem, so the PGP should be a global effort in which the United States plays a strong leadership role. 

How will plant collection be prioritized?

In Phase 1, emphasis might be placed on native “lost crops” that can be grown in areas that are suffering from drought or are affected by climate change. Collection and selection would complement and incorporate active Biden Administration initiatives that center Indigenous science and environmental justice and equity. 


In Phase 2, efforts could focus on sequencing all plants in U.S. regions or ecosystems that are vulnerable to adverse climate events, in collaboration with existing state-level and university programs. An example is the California Conservation Genomics Project, which aims to sequence the threatened, endangered, and commercially exploited flora and fauna of California. Edible and endangered plants will be prioritized, followed by other plants in these ecosystems.


In Phase 3, all remaining plant species will be sequenced. 

Where will the collected plants/germplasm be stored?

All collected seeds will be added to secure, distributed physical repositories, with priority given to collecting physical samples and genetic data from endangered species. 

How will the legal, ethical, and social issues around sample collection and benefit sharing be addressed?

The PGP will work to address, and even correct, some long-standing inequities, ensuring that the rights and interests of all nations and Indigenous peoples are respected in areas ranging from specimen collection to benefit sharing, while maintaining open access to genomic information. The foundational work being done by the Earth BioGenome Project’s Ethical, Legal and Social Committee will be critically important.

Who will be invited to the White House Summit?

Invitees could include but would not be limited to the following entities with corresponding initial commitments to support the PGP’s launch:



  • Genome sequencing companies, such as Illumina, PacBio, Oxford Nanopore Technologies, and others, who would draft a white paper on the current landscape for sequencing technologies and innovation that would be needed to enable a PGP. 

  • Academic institutions with active sequencing core facilities, such as the University of California, Davis, and Washington University in St. Louis, among others, who would communicate existing capacity for PGP efforts, forecast additional capacity-building needs, summarize each entity’s strengths and past contributions, and identify key thought leaders in the space.

  • Large agriculture companies, such as Bayer Crop Science, Syngenta, Corteva, and others, who are willing to share proprietary sequence information, communicate industry perspectives, identify obstacles to data sharing and potential solutions, actively participate in the PGP, and potentially provide resources.

  • Government agencies and public institutions, such as NIH/NCBI, NSF, USDA, the Foundation for Food and Agriculture Research, CGIAR, and the Missouri Botanical Garden, who would draft white papers communicating existing efforts and funding, identify funding gaps, and assess current and future collaborations.

  • Current sequencing groups and consortiums, such as the International Wheat Genome Sequencing Consortium, the Earth BioGenome Project, the Open Green Genomes Project, HudsonAlpha, and others, who would draft white papers communicating existing efforts and funding needs, identify gaps, and plan for data connectivity.

  • Tech companies, such as Google and Microsoft, who could communicate existing efforts and technologies, assess the potential for new technologies and tools to accelerate the PGP, curate data, and provide support such as data science and software engineering talent.

The STEMpathy Task Force: Creating a Generation of Culturally Competent STEM Professionals

Summary

Science, technology, engineering, and mathematics (STEM) are powerful levers for improving the quality of life for everyone in the United States. The connection between STEM’s transformative potential and its current impact on structural societal problems starts in the high school classroom. 

Teachers play a critical role in fostering student cultural awareness and competency. Research demonstrates that teachers and students alike are eager to effect progress on issues related to diversity, equity, inclusion, and accessibility (DEIA). Educational research also demonstrates that DEIA and empathy enhance students’ sense of belonging and persistence in professional STEM pathways. However, formal STEM learning experiences lack opportunities for students to practice cultural competency and explore applications of STEM to social justice issues.

Cultural Competency is the ability to understand, empathize, and communicate with others as part of a diverse community.

The Biden-Harris Administration should establish the STEMpathy Task Force to aid high school STEM teachers in establishing cultural competency as an overarching learning goal. Through this action, the Administration would signal the prioritization of STEM equity—reflected in both the classroom and the broader community—across the United States. The program would address two pertinent issues in the STEM pipeline: the lack of momentum in STEM workforce diversification and STEM’s unfulfilled promise to relieve our society of systems of oppression and bias. Students need to be taught not only the scientific method and scientific discourse, but also how to approach their science in a manner that best uplifts all people.

Challenge & Opportunity

In a 2017 survey, over 1,900 U.S. companies listed the ability to work effectively with customers, clients, and businesses from a range of different countries and cultures as a critical skill. Since then, the importance of cultural competency in the U.S. workforce has become increasingly apparent. 

Culturally competent workers are more creative and better equipped to solve tricky problems. For example, foresters have managed wildfires by following the instruction and guidance of tribal nations and traditional ecological knowledge. Engineers have designed infrastructure that lowers the water bills of farmers in drought-stricken areas. Public health representatives have assuaged concerns about COVID-19 vaccines in underserved communities. STEM professionals who improve Americans’ quality of life do so by collaborating and communicating with people from diverse backgrounds. When students can see these intersections between STEM and social change, they understand that STEM is not limited to a classroom, lab, or field activity but is also a tool for community building and societal progress.

Today’s middle and high school students are increasingly concerned about issues of race/ethnicity, gender, and equity. Recent college graduates share these interests, and many demonstrate a growing desire to do meaningful work and pursue careers with social impact. When students realize that STEM fields are compatible with their passion for topics related to identity and social inequities, they are more likely to pursue STEM careers—and stick with them. This is the way to create a generation of professionals who act with STEMpathy.

To unite STEM subjects with themes of social progress, cultural competency must become a critical component of STEM education. Under this framework, teachers would use curricula to address systemic social inequities and augment learning by drawing from students’ personal experiences (Box 1). This focus would align with ongoing efforts to promote project-based learning, social-emotional learning, and career and technical education in classrooms across the United States.

American high school STEM students will demonstrate an understanding of and empathy for how people from varied backgrounds are affected by environmental and social issues.

  • An environmental sciences student in California understands the risks posed by solar farms to agricultural production in the Midwest. They seek to design solar panels that do not disrupt soil drainage systems and that financially benefit farmers.

  • An astronomy student in Florida empathizes with Indigenous Hawaiians who are fighting against the construction of a massive telescope on their land. The student signs petitions to prevent the telescope from being built.

  • A chemistry student in Texas learns that many immigrants struggle to understand healthcare professionals. They volunteer as a translator in their local clinic.

  • A computer science student in Georgia discovers that many fellow residents do not know when or where to vote. They develop a chatbot that reminds their neighbors of polling place information.
Box 1. Examples of Cultural Competency Outcomes.

With such changes to the STEM lessons, the average U.S. high school graduate would have both a stronger sense of community within STEM classrooms and the capacity to operate at a professional level in intercultural contexts. STEM classroom culture would shift accordingly to empower and amplify diverse perspectives and redefine STEM as a common good in the service of advancing society. 

Plan of Action

Through an executive order, the Biden-Harris Administration should create a STEMpathy Task Force committed to building values of inclusion and public service into the United States’ STEM workforce. The task force would assist U.S. high schools in producing college- and career-ready, culturally competent STEM students. The intended outcomes are a 20 percent increase in the likelihood that students of color and female- and nonbinary-identifying students pursue a college degree in a STEM field, and demonstrated awareness and understanding of cultural competence skills among at least 40 percent of surveyed U.S. high school students. Both outcomes should be measured using National Center for Education Research data 5–10 years after the task force is established.

The STEMpathy Task Force would be coordinated by the Subcommittee on Federal Coordination in STEM Education (FC-STEM) within the White House Office of Science and Technology Policy (OSTP). The interagency working group would partner with education-focused organizations, research institutions, and philanthropic foundations to achieve its goals (FAQ #6). These partnerships would allow the White House to draw upon expertise within the STEM education sphere to address the following priorities:

Working toward these priorities will equip the next generation of STEM professionals with cultural competence skills. The task force will develop effective STEM teaching methods that yield measurable improvements in STEM major diversity and career readiness.

Figure 1. Roadmap of STEMpathy Task Force priorities, including reinforcing elements.

This approach meets the objectives of existing federal STEM education efforts without imposing classroom standards on U.S. educators. In the Federal STEM Education Strategic Plan, the Committee on Science, Technology, Engineering, and Math Education (Co-STEM) aims to (1) increase work-based learning and training, (2) borrow successful practices from across the learning landscape, and (3) encourage transdisciplinary learning. The Department of Education also prioritizes the professional development of educators to strengthen student learning and meet students’ social, emotional, and academic needs. In these ways, the STEMpathy Task Force furthers the Administration’s education goals.

Conclusion

Current national frameworks for high school STEM learning do not provide students with a strong sense of belonging or an awareness of how STEM can be leveraged to alleviate social inequities. The STEMpathy Task Force would establish a rigorous, adaptable framework to address these challenges head-on and ensure that the United States provides high school students with inclusive, hands-on science classrooms that prepare them to serve the diverse communities of their country. Following the implementation of the STEMpathy Task Force, the Biden-Harris Administration can expect to see (1) an increase in the number and diversity of students pursuing STEM degrees, (2) a reduction in race/ethnicity- and gender-based gaps in the STEM workforce, and (3) an increase in STEM innovations that solve critical challenges for communities across the United States.

Frequently Asked Questions
What cultural competence skills would students learn and apply?

In any team setting, students will function effectively and with empathy. They will interact respectfully with people from varied cultural backgrounds. To achieve these behavioral goals, students will learn three key skills, as outlined by the Nebraska Extension NebGuide:



  1. Increasing cultural and global knowledge. Students understand the historical background of current events, including relevant cultural practices, values, and beliefs. They know how to ask open-minded, open-ended questions to learn more information.

  2. Self-assessment. Students reflect critically on their own biases in order to engage with others. They understand how their life experiences may differ from those of others based on identity.

  3. Active listening. Students listen for the total meaning of a person’s message. They avoid mental chatter about how they will respond to a person or question, and they do not jump directly to giving advice or offering solutions.

Would this task force incentivize, influence, or coerce states into adopting standards or curricula?

No. Although the task force will conduct research on learning standards and lesson plans related to STEM and cultural competency, OSTP will not create incentives or regulations to force states to adopt them. The task force will work within existing, approved educational systems to advance the goals of the Department of Education and the Committee on Science, Technology, Engineering, and Math Education (Co-STEM).

What are the associated risks with teaching cultural competency?

As observed during recent efforts to teach American students about structural racism and systemic inequality, some parents may find topics pertaining to diversity, equity, inclusion, and accessibility sensitive. The STEMpathy Task Force’s cultural competency-focused efforts, however, are primarily related to empathy and public service. These values are upheld by constituents and their representatives regardless of political leaning. As such, the STEMpathy Task Force may be understood as a bipartisan effort to advance innovation and the economic competitiveness of U.S. graduates.


Another associated risk is the burden placed on teachers to incorporate new material into their already-packed schedules and lesson plans. Many teachers are leaving their jobs due to the stressful post-pandemic classroom environment, as well as the imbalance between their paychecks and the strain and value of their work. These concerns may be addressed through the STEMpathy Task Force’s objectives of paid training and reward systems for educators who model effective teaching methods for others. In these ways, teachers may receive just compensation for their efforts in supporting both their students and the country’s STEM workforce.

What would be the outputs and milestones of the STEMpathy Task Force over its first four years?

In its first two years, the STEMpathy Task Force would complete the following:



  • Revise FC-STEM’s “Best Practices for Diversity and Inclusion in STEM Education and Research” guide to include information on evidence-based or emerging practices that promote cultural competence skills in the STEM classroom.

  • Train 500+ teachers across the nation to employ teaching strategies and curricula that improve the cultural competence skills of STEM students.


In the next two years, further progress would be made on the following:



  • Measure the efficacy of the teacher training program by assessing ~10,000 students’ cultural competence skill development, STEM interest retention and performance, and classroom sense of belonging.

  • Reward/recognize 100 schools for high achievement in cultural competency development.

Why approach cultural competency goals through STEM classes?

STEM subjects and professionals have the greatest potential to mitigate inequities in American society. Consider the following examples wherein marginalized communities would benefit from STEM professionals who act with cultural competency while working alongside or separate from decision-makers: 



Furthermore, although the number of STEM jobs in the United States has grown by 7.6 million since 1990, the STEM workforce has been very slow to diversify. Over the past 30 years, the proportion of Black STEM workers increased by only 2 percentage points and that of Latinx STEM workers by only 3 percentage points. Women hold only 15 percent of direct science and engineering jobs. LGBTQ male students are 17 percent more likely to leave STEM fields than their heterosexual counterparts.


Hundreds of professional networks, after-school programs, and nonprofit organizations have attempted to tackle these issues by targeting students of color and female-identifying students within STEM. While these commendable efforts have had a profound impact on many individuals’ lives, they have not provided the sweeping, transformative change that could promote not only diversity in the STEM workforce but also a generation of STEM professionals who actively participate in helping diverse communities across the United States.

How much funding would the STEMpathy Task Force and its programming require?

Based on the president’s budget for ongoing STEM-related programming, we estimate that the task force would require approximately $100 million. This amount would be divided across the involved agencies for STEMpathy Task Force programming.

Who are potential experts to include in the STEMpathy Task Force?

Pandemic Readiness Requires Bold Federal Financing for Vaccines

Summary

Most people will experience a severe pandemic within their lifetime, and the world remains dangerously unprepared. In fact, scientists estimate a nearly 50% chance (the same odds as a coin flip) that we will endure another COVID-19-level pandemic within the next 25 years. Shifting America’s pandemic response capability from reactive to proactive is therefore urgent. Failure to do so risks the country’s welfare.

Getting ahead of the next pandemic is impossible without government financing. Vaccine production is costly, and these expenses deter industry from preemptively developing the tools needed to halt disease transmission. For example, the total expected revenues over a 20-year vaccine patent lifecycle would cover just half of the upfront research and development (R&D) costs.

However, research suggests that a portfolio-based approach to vaccine development — especially now with new, broadly applicable mRNA technology — dramatically increases the returns on investment while also guarding against an estimated 31 of the next 45 epidemic outbreaks. With lessons learned from Operation Warp Speed, Congress can deploy this approach by (i) authorizing and appropriating $10 billion to the Biomedical Advanced Research and Development Authority (BARDA), (ii) developing a vaccine portfolio for 10 emerging infectious diseases (EIDs), and (iii) launching a White House Office of Science and Technology Policy (OSTP)-led interagency effort focused on scaling up production of priority vaccines. 

Challenge & Opportunity 

The COVID-19 pandemic continues to wreak havoc across the world, with an ongoing total cost of $16 trillion and more than 6 million dead. Three conditions increase the likelihood that we will experience another pandemic that is just as disastrous: 

  1. New outbreaks of infectious diseases are emerging due to population growth, increased zoonotic transmission from animals, habitat loss, climate change, and more. Over 1.6 million yet-to-be-discovered, human-infecting viral species are thought to exist in mammals and birds.
  2. More laboratories are handling dangerous pathogens around the world, which increases the likelihood of an accidental contagion release.
  3. It is easier than ever to purchase biotechnologies once reserved only for scientists. Consequently, malign actors now have more resources to develop a human-engineered bioweapon. 

The United States and the rest of the world are still woefully unprepared for future pandemic or epidemic threats. This lack of progress is largely due to little to no vaccine development for six priority EIDs, all of which have pandemic potential.

Failure to produce and supply vaccine doses to Americans could undermine the U.S. government’s response to a crisis, as illustrated by the recent monkeypox response. The federal government invested in a new monkeypox vaccine with a significantly longer shelf life. While focused on that effort, it failed to replace its existing vaccine stockpile as doses expired, leaving the American population unprepared during the recent monkeypox outbreak. 

An immediate national strategy is needed to course correct, the beginnings of which are articulated in the recent plan for American Pandemic Preparedness: Transforming our Capabilities. These overarching concerns were also echoed in a bipartisan letter from the Senate Health, Education, Labor, and Pensions and Armed Services Committees, urging the Biden Administration to re-establish a “2.0” version of Operation Warp Speed (OWS)––the government’s prior effort to accelerate COVID-19 vaccine production. 

The President’s recent FY23 Budget advocates for a historic pandemic preparedness investment. The plan allocates nearly $40 billion to the Department of Health and Human Services Assistant Secretary for Preparedness and Response to “invest in advanced development and manufacturing of countermeasures for high priority threats and viral families, including vaccines, therapeutics, diagnostics, and personal protective equipment.” BARDA has also declared the need to prepare prototype vaccines for virus families with pandemic potential and has included such investments in its most recent strategic plan. And the recent executive order on biotechnology and biomanufacturing calls for increased “piloting and prototyping efforts in biotechnology and biomanufacturing to accelerate the translation of basic research results into practice.”

Robust federal investment in America’s vaccine industry is especially needed since––as demonstrated by COVID-19––companies garner minimal profit from vaccine development before or during a widespread outbreak. A recent study predicted that even in the unlikely scenario where 10 million vaccines are manufactured during a crisis response, pharmaceutical companies can expect to recoup only half of the upfront R&D costs. The same research states that “new drug development has become slower, more expensive, and less likely to succeed” because:

With clinical costs accounting for 96% of total investment, companies have a weak financial justification for investing in risky vaccine research. 

To minimize these uncertainties and improve investment returns for vaccine and therapeutic production, the federal government should embrace two key lessons from OWS: 

  1. Guaranteed government demand enables the pursuit of innovative, speedy, and effective vaccine R&D. OWS selected companies pursuing different scientific methods to develop a vaccine, each of which possessed breakthrough potential. Moderna and Pfizer/BioNTech utilized mRNA, AstraZeneca and Janssen worked with replication-defective live vectors, and Novavax and Sanofi/GSK utilized a recombinant protein, while Merck worked on a live attenuated virus that could be given orally. By frequently evaluating vaccine candidates, scientists ensured that only the most promising contenders continued to subsequent regulatory phases. This workflow dramatically expedited vaccine development. Relatedly, companies were able to invest in large-scale vaccine manufacturing during clinical trials thanks to government financial support. They not only received guaranteed investment installments, but also advanced commitments to purchase vaccines. This significantly decreased the financial risk and saved tremendous amounts of time and resources. 
  2. Public-private partnerships utilize incentives and rewards to foster highly effective and dynamic teams. OWS created a “unique distribution of responsibilities … based upon core competencies rather than on political or financial considerations.” The interests of eight pharmaceutical companies were aligned based on the potential to receive an upfront commitment from the federal government to bulk purchase vaccines. Such approaches are critical to ensuring vaccine R&D not only happens in an efficient, coordinated manner but also that such R&D yields production at scale. Moreover, it enabled a suite of approaches to vaccine development rather than one method, raising the overall probability of developing a successful vaccine. 

Repeating these lessons in subsequent EID vaccine developments would generate both significant returns on investment and benefits to society. 

Plan of Action

By incentivizing vaccine development for priority EIDs, the federal government can preemptively solve market failures without picking winners or losers. 

First, Congress should authorize and appropriate $10 billion to BARDA over 10 years to create a Dynamic Vaccine Development Fund. This fund would build on BARDA’s unique competencies as an engagement platform with the private sector and would allow for new developments to emerge over the decade. 

It would also enact strategies gleaned from OWS, all of which were proven to be effective. 

As illustrated by its successful history, BARDA is well-positioned to manage a large-scale vaccine initiative. Last year, BARDA announced the first venture capital partnership with the Global Health Investment Corporation to “allow direct linkage with the investment community and establish sustained and long-term efforts to identify, nurture, and commercialize technologies that aid the U.S. in responding effectively to future health security threats.” During the COVID-19 pandemic, BARDA and Janssen shared the R&D costs to help move Janssen’s investigational novel coronavirus vaccine into clinical evaluation—a collaboration supported by their previous successes on the Ebola vaccine. The Government Accountability Office reported that BARDA had also supported scaled production by identifying additional manufacturing partners. This partnership record shows that BARDA not only knows how to manage global health projects to completion but also is particularly adept at interfacing with the private sector. As such, it stands out as an ideal manager for the Dynamic Vaccine Development Fund.

With $10 billion, this Fund could not only support the vaccine economy but also save millions of lives and trillions of dollars. Although the price tag is admittedly hefty, it is reasonable: OWS cost more than $12 billion––a small investment compared to the $16+ trillion cost of COVID-19. As seen in OWS, the long-term benefits of upfront, robust financing are even more impactful. One back-of-the-envelope calculation suggests immense economic returns for the Fund. 
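One version of that calculation can be sketched in a few lines of code. Every figure below (per-candidate cost, probability of success, and payoff per licensed vaccine) is a hypothetical assumption chosen for illustration, not an estimate drawn from the memo's sources; the point is only to show how a large, diversified portfolio can pencil out.

```python
# Hypothetical expected-value sketch of a diversified vaccine portfolio.
# All inputs are illustrative assumptions, not figures from the cited study.

n_candidates = 120          # candidates the Fund could support
cost_per_candidate = 0.08   # $ billions of development cost per candidate
p_success = 0.10            # assumed per-candidate probability of licensure
payoff_per_success = 5.0    # assumed $ billions of value per licensed vaccine

total_cost = n_candidates * cost_per_candidate        # ~$9.6B, near the $10B fund
expected_successes = n_candidates * p_success         # ~12 licensed vaccines
expected_payoff = expected_successes * payoff_per_success

print(f"Total cost:         ${total_cost:.1f}B")
print(f"Expected successes: {expected_successes:.0f}")
print(f"Expected payoff:    ${expected_payoff:.1f}B")
```

Under these assumed inputs, a roughly $10 billion fund backing 120 candidates would expect about a dozen licensed vaccines even at a 10% per-candidate success rate, with a combined value far exceeding the outlay.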

A $10 billion down payment would allow the Fund to excel in its normal operations and support up to 120 vaccine candidates. OWS spawned more than breakthrough R&D in the use of mRNA vaccine models; it also led to a health and biotechnology innovation windfall:

“Now that we know that mRNA vaccines work, there is no reason we could not start the process of developing those for the top 20 most likely pandemic pathogen prototypes” 

––Dr. Francis Collins, former director of the National Institutes of Health

Ten billion dollars would ensure the Fund’s impact could be similarly force-multiplied by private sector partnerships, giving it more time and opportunity for creative collaboration. The Fund’s purpose is to lower financial risks and attract large amounts of capital from the bond market, whose size exceeds that of the venture capital, public equity, and private equity markets. Indeed, there has been growing interest in applying social bonds to pandemic preparedness as a unique instrument for rapidly frontloading resources from capital markets. Though this Fund will assume a different form, the International Finance Facility for Immunisation represents a proof of concept for coordinating philanthropic foundations, governments, and supranational organizations for the purpose of “raising money more quickly.” With seed capital, this Fund could provide a strong signal––and perhaps an anchor for coordination––to debt capital markets to make issuances for vaccines. To this end, the targeted critical mass of $10 billion is estimated to generate both tremendous societal value, by preventing future epidemic outbreaks, and positive returns for investors. 
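The frontloading logic behind such bond issuances is simple annuity arithmetic: investors provide capital today in exchange for a stream of future pledge-backed payments. The sketch below uses assumed values for the pledge size, horizon, and discount rate; none of these are IFFIm or Fund figures.

```python
# Illustrative frontloading arithmetic for a pledge-backed bond issuance.
# Pledge size, horizon, and discount rate are assumed for illustration only.

annual_pledge = 1.0     # $ billions pledged to bondholders per year
years = 10              # pledge horizon
discount_rate = 0.03    # investors' assumed required annual return

# Present value of an ordinary annuity: capital that can be raised upfront
upfront_capital = annual_pledge * (1 - (1 + discount_rate) ** -years) / discount_rate

print(f"${annual_pledge * years:.0f}B in pledges over {years} years "
      f"raises about ${upfront_capital:.2f}B today")
```

Under these assumptions, $10 billion in pledges spread over a decade converts into roughly $8.5 billion of immediately deployable capital, which is the sense in which bonds "frontload" resources from capital markets.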

Second, in executing Fund activities, BARDA should leverage investment strategies––such as milestone-based payments––to incentivize maximum vaccine innovation. When combatting EIDs, the U.S. will need as many vaccine options as possible. To facilitate this outcome, vaccine manufacturers should be rewarded for producing multiple kinds of vaccines at the same time. For example, BARDA might support the development of vaccines for a given EID by funding progress for four novel methods (e.g., mRNA, recombinant protein, gene-therapy, and live attenuated, orally-administered vaccines).  

Furthermore, these rewards should be tied to major events––or “milestones”––during development. Initial-stage milestones include vaccine candidates that protect an animal model against disease; later-stage milestones include human clinical trials. This financing model would provide companies with clear, short-term targets, reducing uncertainty and rewarding progress dynamically. Additionally, it would support the recent executive order, which calls for “increasing piloting and prototyping efforts in biotechnology and biomanufacturing to accelerate the translation of basic research results into practice.”

BARDA could expand the milestone-based financing mechanism further by employing early-stage challenges. In this scenario, it would fund only the first two of three candidates to successfully complete small-scale clinical trials. The final milestone stage––which should be offered to only a limited number of candidates––should provide an advanced market commitment to house completed vaccines within U.S. storage facilities, based on the interagency effort (described in the paragraph below). This selection process would retain sufficient competition throughout development while ensuring a sustainable method for scaling up certain vaccines based on mission priorities.
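To make the milestone mechanism concrete, the sketch below models a payout schedule for a single candidate, where funding stops at the first milestone the candidate fails to reach. The milestone names and dollar amounts are hypothetical assumptions, not BARDA policy.

```python
# Hypothetical milestone-based payment schedule for one vaccine candidate.
# Milestone names and amounts are illustrative assumptions, not BARDA policy.

MILESTONES = [
    ("animal-model protection", 20),       # $ millions, initial stage
    ("phase I safety trial", 40),
    ("small-scale clinical trial", 80),
    ("advanced market commitment", 200),   # final stage, limited candidates
]

def total_payout(completed):
    """Sum payments for milestones reached in order; payments stop at
    the first milestone the candidate has not completed."""
    total = 0
    for name, payment in MILESTONES:
        if name not in completed:
            break
        total += payment
    return total

# A candidate that clears the first two milestones has earned $60M.
print(total_payout({"animal-model protection", "phase I safety trial"}))
```

Because payments are sequential, a company always has a clear, funded next target, which is the short-term certainty this financing model is meant to provide.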

Third, to support Fund activities through late-stage clinical trials, the White House Office of Science and Technology Policy (OSTP) should coordinate a larger-scale interagency effort leveraging advanced market commitments, prize challenges, and other innovative procurement techniques. This effort should span the federal agencies that address pandemic preparedness, which might include: the Department of Defense, BARDA, the U.S. Agency for International Development, the National Institute of Allergy and Infectious Diseases, the Federal Emergency Management Agency, and the Development Finance Corporation. In doing so, OSTP can (i) consolidate investments for particular vaccine candidates and (ii) utilize networks and incentive strategies across the U.S. government to secure vaccines. Separately––and based on urgent priorities shared by agencies––OSTP should work closely with the Food and Drug Administration (FDA) to explore opportunities for pre-approval of vaccines as they move through the trial phases. 

Conclusion

Vaccines are among the most powerful tools for fighting pandemics. Unfortunately, bringing vaccines to market at scale is challenging. However, Operation Warp Speed (OWS) established a new precedent for tackling vaccine innovation market failures, laying the groundwork for a new era of industrial strategy. Congress should take advantage and supercharge U.S. pandemic preparedness by enabling the Biomedical Advanced Research and Development Authority (BARDA) to build a Dynamic Vaccine Development Fund. Embracing lessons learned from OWS, the Fund would incentivize companies to create vaccines for the six emerging infectious diseases most likely to cause the next pandemic.

Frequently Asked Questions
If it takes so long to approve a new vaccine, why should we invest in developing vaccines ahead of time?

The regulatory process for approving vaccines is even more reason to develop them ahead of time—before they are needed, rather than after an outbreak. Having access to an effective vaccine even days sooner can save thousands of lives due to the exponential growth of infectious disease outbreaks. Moreover, the FDA approval process—especially its Emergency Use Authorization Program—is extremely efficient, and is not the bottleneck for vaccine development. The main delay involved in vaccine development is the time it takes to conduct randomized clinical trials. Unfortunately, there are no shortcuts to this process if we want to ensure that vaccines are safe and effective. That is why we need to develop vaccines before pandemics occur. The idea here is simply to develop the minimum viable product of vaccines for priority EIDs that positions these vaccines to rapidly scale in the event of a pandemic.
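The value of even a few days' head start follows directly from exponential growth. The sketch below compares cumulative infections when a vaccine becomes available one week earlier in an unchecked outbreak; the starting caseload and doubling time are assumed values chosen for illustration.

```python
# Illustrative: why days matter under exponential outbreak growth.
# Starting caseload and doubling time are assumed values.

initial_cases = 100
doubling_time_days = 5.0

def cumulative_infections(day):
    """Unchecked exponential growth from the assumed initial caseload."""
    return initial_cases * 2 ** (day / doubling_time_days)

late = cumulative_infections(60)    # vaccine ready at day 60
early = cumulative_infections(53)   # vaccine ready one week earlier

print(f"Infections by day 60: {late:,.0f}")
print(f"Infections by day 53: {early:,.0f}")
print(f"One week's delay multiplies the caseload by {late / early:.1f}x")
```

With a five-day doubling time, a one-week delay means roughly 2.6 times as many infections before vaccination can begin, which is why pre-developed vaccines that shave even days off the response are so valuable.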

Has this large-scale, multi-use investment program been deployed elsewhere?

Yes, there are several examples of vaccine initiatives using this strategy. To list a few:



  1. The Coalition for Epidemic Preparedness Innovations (CEPI) has a “megafund” vaccine portfolio (32 vaccine candidates as of April 2022). This portfolio spans 13 different therapeutic mechanisms and five different stages of clinical development, from preclinical to “Emergency Use Listing” by the World Health Organization. 

  2. BridgeBio and Roivant Sciences have used portfolio-based approaches for drug development.

  3. The National Brain Tumor Society is also leveraging this approach to finance novel drug candidates that can treat glioblastoma.

Where and how would you safely store a large vaccine stockpile? When we tried this before, didn’t 20 million doses of monkeypox vaccines expire?

Ideally, vaccines in the final milestone stage would be stored in the United States and in line with new CDC guidance in the Vaccine Storage and Handling toolkit. This prevents the scenario in which vaccines are held up in transit due to complex international negotiations and potentially expire during the lengthy proceedings. This exact scenario occurred when 300,000 doses of monkeypox vaccine held in a Denmark-based facility were slowly and inconsistently onshored back to the U.S. 


In addition, vaccines that are financed through the Fund would not always be final products. Instead, they would potentially be at varying stages of development thanks to the milestone-based payment strategy and frequent progress reviews. This would make it easier for the federal government to closely coordinate vaccine development with manufacturing professionals and rapidly increase vaccine production if necessary. The strategy offered in this memo lowers the risk of a similar situation occurring again.


We recommend that the executive order on biomanufacturing continue exploring this issue and investigate ways to securely store completed vaccines. The Government Accountability Office, for example, recently suggested several promising and discrete changes to update the requirements and operations of the Strategic National Stockpile.

Why did you select these six emerging infectious diseases?

This list was derived from the justifications published on CEPI’s website.

Why not develop a vaccine against all potential viral threats?

There are simply too many infectious diseases in nature, and most are too rare to pose a significant threat. It would be scientifically and financially impractical––and unnecessary––to develop vaccines against all of them. However, we can greatly increase our readiness by widening our scope and developing a library of prototyped vaccines based on the 25 viral families (as called for by CEPI). Doing so would allow us to respond quickly against even unlikely pandemic scenarios.

How Unmet Desire Surveys Can Advance Learning Agendas and Strengthen Evidence-Based Policymaking

Summary

The 2018 Foundations for Evidence-Based Policymaking Act (Evidence Act) promotes a culture of evidence within federal agencies. A central part of that culture entails new collaboration between decision-makers and those with diverse forms of expertise inside and outside of the federal government. Federal chief evaluation officers lead these efforts, yet they face challenges in getting buy-in from agency staff and in obtaining sufficient resources. One tool to overcome these challenges is an “unmet desire survey,” which prompts agency staff to reflect on how the success of their programs relates to what is happening in other agencies and outside government, as well as consider what information about these other programs and organizations would help their work be more effective. The unmet desire survey is an important data-gathering mechanism and also encourages evaluation officers to engage in matchmaking between agency staff and people who have the information they desire. Using existing authorities and resources, agencies can pilot unmet desire surveys as a concrete mechanism for advancing federal learning agendas in a way that builds buy-in by directly meeting the needs of agency staff.

Challenge and Opportunity

A core mission of the Evidence Act is to foster a culture of evidence-based decision-making within federal agencies. Since the problems agencies tackle are multidimensional, with the success of one government program often depending on the performance of others, new collaborative relationships between decision-makers in the federal government and those in other agencies and in organizations outside the federal government are essential to realizing the Evidence Act’s vision. Indeed, Office of Management and Budget (OMB) implementation guidance stresses that learning agendas are “an opportunity to align efforts and promote interagency collaboration in areas of joint focus or shared populations or goals” (OMB M-19-23), and that a culture of evidence “cannot happen solely at the top or in isolated analytical offices, but rather must be embedded throughout each agency…and adopted by the hardworking civil servants who serve on behalf of the American people” (OMB M-21-27). 

Chief evaluation officers at federal agencies are the main point people for fostering cultures of evidence. Yet they and their evaluation staff face many challenges, including getting buy-in from agency staff, understanding needs of program and operational offices that extend beyond those offices’ organizational boundaries, and coping with limited resources. Indeed, OMB guidance acknowledges that many agency staff may view learning agendas as just another compliance exercise.

This memo proposes a flexible tool that evaluation officers can use to generate buy-in among agency staff and leadership while also promoting collaboration as emphasized in OMB guidance and in the Evidence Act. The tool, which has already proven valuable in local government and in the nonprofit sector, is called an “unmet desire survey.” The survey measures unmet desires for collaboration by prompting staff to consider questions such as the following: 

  1. What new information would help you increase the effectiveness of the programs you work on?
  2. What types of connections or collaborations (formal or informal) would help your work?
  3. What hesitations do you have about interacting with people from other agencies?
  4. What hesitations do you have about interacting with people from organizations outside the federal government?
  5. Why don’t these collaborations already exist?

Unmet desire surveys elicit critical insights about needs for connection and are highly flexible. For instance, in the first question posed above, evaluation officers can choose to ask staff about new information that would be helpful for any program or only about information relevant to programs that are top priorities for their agency. In other words, unmet desire surveys need not add one more thing to the plate; rather, they can be used to accelerate collaboration directly tied to current learning priorities. 

Unmet desire surveys also legitimize informal collaborative relationships. Too often, calls for new collaboration in the policy sphere immediately segue into overly structured meetings that fail to uncover promising areas for joint learning and problem-solving. Meetings across government agencies are often scripted presentations about each organization’s activities, providing little insight on ways they could partner to achieve better results. Policy discussions with outside research experts tend to focus on formal evaluations and long-term research projects that don’t surface opportunities to accelerate learning in the near term. In contrast, unmet desire surveys explicitly legitimize the idea that diverse thinkers may want to connect only for informal knowledge exchange rather than formal events or partnerships. Indeed, even single conversations can greatly impact decision-makers, and, of course, so can more intensive relationships.

While online platforms for spurring new collaborative relationships have been previously proposed, they have not achieved uptake at scale among federal policymakers. One reason for this is that the problem that needs to be solved is both factual and relational. In other words, the issue isn’t simply that strangers do not know each other—it’s also that strangers do not always know how to talk to one another. People care about how others relate to them and whether they can successfully relate to others. Uncertainty about relationality routinely stops people from interacting with others they do not know. This is why unmet desire surveys also include questions that directly measure hesitations about interacting with people from other agencies and organizations. 

After the surveys are administered, evaluation staff can use survey data to engage in matchmaking: brokering connections among people with similar goals but diverse expertise and helping overcome uncertainty about relationality so that new cross-agency and cross-sector collaborative relationships can take root. In sum, by deliberately inquiring about connections with others who have diverse forms of relevant expertise—and then making those connections anew—evaluation staff can generate greater enthusiasm and ownership among people who may not consider evaluation and evidence-building as part of their core responsibilities.

Plan of Action

Using existing authorities and resources, federal evaluation officers can take three steps to position unmet desire surveys as a standard component of the government’s evidence toolbox. 

Step 1. Design and implement pilot unmet desire surveys. 

Chief evaluation officers are well positioned to pilot unmet desire surveys within their agencies. While individual evaluation officers can work independently to design unmet desire surveys, it may be more fruitful to work together, via the Evaluation Officer Council, to design a baseline survey template. Chief evaluation officers could then work with their teams to adapt the baseline template to their agencies, including identifying which agency staff to prioritize as well as the best way to phrase particular questions (e.g., regarding the types of connections that employees want in order to improve the effectiveness of their work or the types of hesitancies to ask about). Given that the question content is highly flexible, unmet desire surveys can directly accelerate learning agendas and build buy-in at the same time. Thus, they can yield tangible, concrete benefits with very little upfront cost.

Step 2. Meet unmet desires by matchmaking. 

After the pilot surveys are administered, chief evaluation officers should act on their results by matchmaking. There are several ways to do this without new appropriations. One is for evaluation teams within agencies to engage in informal, low-lift matchmaking—wherein those who implement the survey also act as initial matchmakers—as an early proof of concept. A second option is to bring on short-term matchmakers through flexible hiring mechanisms (e.g., through the Intergovernmental Personnel Act). Documenting successes and lessons learned then sets the stage for using agency-specific discretionary funds to hire one or more in-house matchmakers as longer-term or staff appointments.

Step 3. Collect information on successes and lessons learned from the pilot.

Unmet desire surveys can be tricky to field because they entail asking employees about topics they may not be used to thinking about. It often takes some trial and error to figure out the best ways to ask about employees’ substantive goals and their hesitations about interacting with people they do not know. Piloting unmet desire surveys and follow-on matchmaking can not only demonstrate value (e.g., the impact of new collaborative relationships fostered through these combined efforts) to justify further investment but also suggest how evaluation leads might best structure future unmet desire surveys and subsequent matchmaking.

Conclusion

An unmet desire survey is an adaptable tool that can reveal fruitful pathways for connection and collaboration. Indeed, unmet desire surveys leverage the science of collaboration by ensuring that efforts to broker connections among strangers consider both substantive goals and uncertainty about relationality. Chief evaluation officers can pilot unmet desire surveys using existing authorities and resources, and then use the information gathered to identify opportunities for productive matchmaking. Ultimately, positioning the survey as a standard component of the government’s evidence toolbox has great potential to support agency staff in advancing federal learning agendas and building a robust culture of evidence across the U.S. government.

Frequently Asked Questions
Who should unmet desire surveys be administered to?

The best place to start—especially when resources are limited—is with potential evidence champions. These are people who already have an idea of what information would help them improve the impact of the programs they run and which people would be helpful to collaborate with. These potential evidence champions may not self-identify as such; rather, they may see themselves as falling into other categories, such as customer-experience experts, bureaucracy hackers, process innovators, or policy entrepreneurs. Regardless of terminology, the unmet desire survey provides people who are already motivated to collaborate and connect with a clear opportunity to articulate their needs. Evaluation staff can then respond by matchmaking to stimulate new and productive relationships for those people.

Who should conduct an unmet desire survey?

The administrator should be someone with whom agency staff feel comfortable discussing their needs (e.g., a member of an agency evaluation team) and who is able to effectively facilitate matchmaking—perhaps because of their network or their reputation within the agency. The latter criterion helps ensure that staff expect useful follow-up, which in turn motivates completion of the survey and participation in follow-on activities; it also generates enthusiasm for engaging in new collaborative relationships (as well as creating broader buy-in for the learning agenda). In some cases, it may make the most sense to have multiple people from an evaluation team surveying different agency staff or co-sponsoring the survey with agency innovation offices. Explicit support from agency leadership for the survey and follow-on activities is also crucial for achieving staff buy-in.

What questions should be asked in an unmet desire survey?

The bulleted list in the body of the memo illustrates the types of questions that an unmet desire survey might ask. Yet survey content is meant to be tailored and agency-specific. For instance, the first suggested question about information that would help increase program effectiveness can be left entirely open-ended or be focused on programs related to learning-agenda priorities. Similarly, the second suggested question may invite responses related to either informal or formal collaboration, or instead may only ask about knowledge exchange (a relatively lower commitment that may be more palatable to agency leadership). The third and fourth questions should refer to specific types of hesitancy that survey administrators believe are most likely (e.g., ask about a few hesitancies that seem most likely to arise, such as lack of explicit permission, concerns about saying something inappropriate, or concerns about lack of trustworthy information). The final question about why these collaborations don’t exist can similarly be left broad or include a few examples to help spark ideas.

Who should conduct matchmaking in response to an unmet desire survey?

Again, the answer will be agency-specific. In many organizations, matchmaking happens informally. Formalizing this duty as a part of one or more people’s official responsibilities sends a signal about how much this work is valued. Exactly who those people are will depend on the agency’s structure, as well as on whether there are already people in a given agency who see matchmaking as part of their job.

When is the right time to field an unmet desire survey?

While unmet desire surveys can be done anytime and on a continuous basis, it is best to field them when there is identified staff capacity for follow-on matchmaking and employee willingness to build collaborative relationships.

Public Value Evidence for Public Value Outcomes: Integrating Public Values into Federal Policymaking

Summary

The federal government––through efforts like the White House Year of Evidence for Action––has made a laudable push to ensure that policy decisions are grounded in empirical evidence. While these efforts acknowledge the importance of social, cultural, and Indigenous knowledges, they do not draw adequate attention to the challenges of generating, operationalizing, and integrating such evidence in routine policy and decision making. In particular, these endeavors are generally poor at incorporating the living and lived experiences, knowledge, and values of the public. This evidence—which we call evidence about public values—provides important insights for decision making and contributes to better policy or program designs and outcomes. 

The federal government should broaden institutional capacity to collect and integrate evidence on public values into policy and decision making. Specifically, we propose that the White House Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP): 

  1. Provide a directive on the importance of public value evidence.
  2. Develop an implementation roadmap for integrating public value evidence into federal operations (e.g., describing best practices for integrating it into federal decision making and developing skill-building opportunities for federal employees).

Challenge and Opportunity

Evidence about public values informs and improves policies and programs

Evidence about public values is, to put it most simply, information about what people prioritize, care about, or think about with respect to a particular issue, which may differ from the ideas prioritized by experts. It includes data collected through focus groups, deliberations, citizen review panels, community-based research, and public opinion surveys. Some of these methods rely on one-way flows of information (e.g., surveys), while others prioritize mutual exchange of information among policymakers and participating publics (e.g., deliberations). 

Agencies facing complex policymaking challenges can utilize evidence about public values––along with expert- and evaluation-based evidence––to ensure decisions truly serve the broader public good. If collected as part of the policy-making process, evidence about public values can inform policy goals and programs in real time, including when program goals are taking shape or as programs are deployed. 

Evidence about public values within the federal government: three challenges to integration

To fully understand and use public values in policymaking, the U.S. government must first broadly address three challenges.

First, the federal government does not sufficiently value evidence about public values when it researches and designs policy solutions. Federal employees often lack any directive or guidance from leadership indicating that collecting evidence about public values is valuable or important to evidence-based decision making. Efforts like the White House Year of Evidence for Action seek to better integrate evidence into policy making. Yet––for many contexts and topics––scientific or evaluation-based evidence is just one type of evidence. The public’s wisdom, hopes, and perspectives serve as an important mediating factor in determining and achieving desired public outcomes. The following examples illustrate ways public value evidence can support federal decision making:

  1. An effort to implement climate intervention technologies (e.g., solar geoengineering) might be well-grounded in evidence from the scientific community. However, that same strategy may not consider the diverse values Americans hold about (i) how such research might be governed, (ii) who ought to develop those technologies, and (iii) whether or not they should be used at all. Public values are imperative for such complex, socio-technical decisions if we are to make good on the Year of Evidence’s dual commitment to scientific integrity (including expanded concepts of expertise and evidence) and equity (better understanding of “what works, for whom, and under what circumstances”). 
  2. Debates over the impacts of rising sea levels on national park infrastructure and protected features have historically been tense. To acknowledge the social-environmental complexity in play, park leadership has strived to include both expert assessments and engagement with publics on their own risk tolerance for various mitigation measures. This has helped officials prioritize limited resources as they consider tough decisions about what and how to continue preserving various park features and artifacts. 

Second, the federal government lacks effective mechanisms for collecting evidence about public values. Presently, public comment periods favor credentialed participants—advocacy groups, consultants, business groups, etc.—who possess established avenues for sharing their opinions and positions with policymakers. As a result, these credentialed participants shape policy while other experiences, voices, and inputs go unheard. While the general public can contribute to government programs through platforms like Challenge.gov, credentialed participants still tend to dominate these processes. Effective mechanisms for incorporating public values into decision making or research are generally confined to university, local government, and community settings. These methods include participatory budgeting, methods from usable or co-produced science, and participatory technology assessment. Some of these methods have been developed and applied to complex science and technology policy issues in particular, including climate change and various emerging technologies. Their use in federal agencies is far more limited. Even when an agency seeks to collect public values, it may be impeded by regulatory hurdles such as the Paperwork Reduction Act (PRA), which can limit the collection of public values, ideas, or other input due to potentially long approval timelines and the perceived data-collection burden on the public. Cumulatively, these factors prevent agencies from accurately gauging––and being adaptive to––public responses. 

Third, federal agencies face challenges integrating evidence about public values into policy making. These challenges can be rooted in the regulatory hurdles described above, difficulties integrating with existing processes, and unfamiliarity with the benefits of collecting evidence about public values. Fortunately, studies have found specific attributes present among policymakers and agencies that allowed for the implementation and use of mechanisms for capturing public values. These attributes included: 

  1. Leadership who prioritized public involvement and helped address administrative uncertainties.
  2. An agency culture responsive to broader public needs, concerns, and wants.
  3. Agency staff familiar with mechanisms to capture public values and integrate them into the policy- and decision-making process. Such staff can help address translation issues, deal with regulatory hurdles, and better communicate the benefits of collecting public values with regard to agency needs. Unfortunately, many agencies do not have such staff, and there are no existing roadmaps or professional development programs to help build this capacity across agencies. 

Aligning public values with current government policies promotes scientific integrity and equity

The White House Year of Evidence for Action presents an opportunity to address the primary challenges––namely, a lack of clear direction, collection protocols, and evidence-integration strategies––currently impeding the widespread use of public value evidence in the federal government. Our proposal below is well aligned with the Year of Evidence’s central commitments, including: 

Furthermore, this proposal aligns with the goals of the Year of Evidence for Action to “share leading practices to generate and use research-backed knowledge to advance better, more equitable outcomes for all America…” and to “…develop new strategies and structures to promote consistent evidence-based decision-making inside the Federal Government.” 

Plan of Action

To integrate public values into federal policy making, the White House Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP) should: 

  1. Develop a high-level directive for agencies about the importance of collecting public values as a form of evidence to inform policy making.
  2. Oversee the development of a roadmap for the integration of evidence about public values across government, including pathways for training federal employees. 

Recommendation 1. OMB and OSTP should issue a high-level directive providing clear direction and strong backing for agencies to collect and integrate evidence on public values into their evidence-based decision-making procedures. 

Given the potential utility of integrating public value evidence into science and technology policy, as well as OSTP’s involvement in efforts to promote evidence-based policy, OSTP is a natural partner in crafting this directive alongside OMB. This directive should clearly connect public value evidence to the current policy environment. As described above, efforts like the Foundations for Evidence-Based Policymaking Act (Evidence Act) and the White House Year of Evidence for Action provide a strong rationale for the collection and integration of evidence about public values. Longer-standing policies––including the Crowdsourcing and Citizen Science Act––provide further context and guidance on the importance of collecting input from broad publics.

Recommendation 2. As part of the directive, or as a follow-up to it, OMB and OSTP should oversee the development of a roadmap for integrating evidence about public values across government. 

The roadmap should be developed in consultation with various federal stakeholders, such as members of the Evaluation Officer Council, representatives from the Equitable Data Working Group, customer experience strategists, and relevant conceptual and methods experts from within and outside the government.

A comprehensive roadmap would include the following components:

Conclusion

Collecting evidence about the living and lived experiences, knowledge, and aspirations of the public can help inform policies and programs across government. While methods for collecting evidence about public values have proven effective, they have not been integrated into evidence-based policy efforts within the federal government. The integration of evidence about public values into policy making can promote the provision of broader public goods, elevate the perspectives of historically marginalized communities, and reveal policy or program directions different from those prioritized by experts. The proposed directive and roadmap––while only a first step––would help ensure the federal government considers, respects, and responds to our diverse nation’s values.

Frequently Asked Questions
Which agencies or areas of government could use public value evidence?

Federal agencies can use public value evidence wherever additional information about what the public thinks, prioritizes, and cares about could improve programs and policies. For example, policy decisions characterized by high uncertainty, potential value disputes, and high stakes could benefit from a broader review of considerations by diverse members of the public to ensure that novel options and unintended consequences are considered in the decision-making process. In the context of science and technology related decision making, Silvio Funtowicz and Jerome Ravetz called such situations “post-normal science.” They called for an extension of who counts as a subject matter expert in the face of such challenges, citing the potential for technical analyses to overlook important societal values and considerations.

Why should OSTP be engaged in furthering the use of public value evidence?

Many issues where science and technology meet societal needs and policy considerations warrant broad public value input. These issues include emerging technologies with societal implications and existing S&T challenges that have far-reaching impacts on society (e.g., climate change). Further, OSTP is already involved in Evidence for Action initiatives and can assist in bringing in external expertise on methods and approaches.

Why do we need this sort of evidence when public values are represented by elected officials?

While guidance from elected officials is an important mechanism for representing public values, evidence collected about public values through other means can be tailored to specific policy making contexts and can explore issue-specific challenges and opportunities. 

Are there any examples of public value evidence being used in the government?

There are likely more examples of identifying and integrating public value evidence in government than we can point out here. The roadmap-building process should involve identifying those examples and finding common language to describe diverse public value evidence efforts across government. For specific known examples, see footnotes 1 and 2.

Is evidence about public values different from evidence collected about evaluations?

Evidence about public values might include evidence collected through program and policy evaluations, but it encompasses broader types of evidence. The evaluation of policies and programs generally focuses on assessing effectiveness or efficiency. Evidence about public values would be used to address broader questions about the aims or goals of a program or policy.

Unlocking Federal Grant Data To Inform Evidence-Based Science Funding

Summary

Federal science-funding agencies spend tens of billions of dollars each year on extramural research. There is growing concern that this funding may be inefficiently awarded (e.g., by under-allocating grants to early-career researchers or to high-risk, high-reward projects). But because there is a dearth of empirical evidence on best practices for funding research, much of this concern is anecdotal or speculative at best.

The National Institutes of Health (NIH) and the National Science Foundation (NSF), as the two largest funders of basic science in the United States, should therefore develop a platform to provide researchers with structured access to historical federal data on grant review, scoring, and funding. This action would build on momentum from both the legislative and executive branches surrounding evidence-based policymaking, as well as on ample support from the research community. And though grantmaking data are often sensitive, there are numerous successful models from other sectors for sharing sensitive data responsibly. Applying these models to grantmaking data would strengthen the incorporation of evidence into grantmaking policy while also guiding future research (such as larger-scale randomized controlled trials) on efficient science funding.

Challenge and Opportunity

The NIH and NSF together disburse tens of billions of dollars each year in the form of competitive research grants. At a high level, the funding process typically works like this: researchers submit detailed proposals for scientific studies, often to particular program areas or topics that have designated funding. Then, expert panels assembled by the funding agency read and score the proposals. These scores are used to decide which proposals will or will not receive funding. (The FAQ provides more details on how the NIH and NSF review competitive research grants.) 

A growing number of scholars have advocated for reforming this process to address perceived inefficiencies and biases. Citing evidence that the NIH has become increasingly incremental in its funding decisions, for instance, commentators have called on federal funding agencies to explicitly fund riskier science. These calls grew louder following the success of mRNA vaccines against COVID-19, a technology that struggled for years to receive federal funding due to its high-risk profile.

Others are concerned that the average NIH grant-winner has become too old, especially in light of research suggesting that some scientists do their best work before turning 40. Still others lament the “crippling demands” that grant applications exert on scientists’ time and argue that a better approach could be to replace or supplement conventional peer-review evaluations with lottery-based mechanisms.

These hypotheses are all reasonable and thought-provoking. Yet there exists surprisingly little empirical evidence to support these theories. If we want to effectively reimagine—or even just tweak—the way the United States funds science, we need better data on how well various funding policies work.

Academics and policymakers interested in the science of science have rightly called for increased experimentation with grantmaking policies in order to build this evidence base. Realistically, though, such experiments would likely need to be conducted hand-in-hand with the institutions that fund and support science, investigating how changes in policies and practices shape outcomes. While such experimentation is slowly becoming a reality, the knowledge gap about how best to support science would ideally be filled sooner rather than later.

Fortunately, we need not wait that long for new insights. The NIH and NSF have a powerful resource at their disposal: decades of historical data on grant proposals, scores, funding status, and eventual research outcomes. These data hold immense value for those investigating the comparative benefits of various science-funding strategies. Indeed, these data have already supported excellent and policy-relevant research. Examples include Ginther et al. (2011), which studies how race and ethnicity affect the probability of receiving an NIH award, and Myers (2020), which studies whether scientists are willing to change the direction of their research in response to increased resources. And there is potential for more. While randomized controlled trials (RCTs) remain the gold standard for causal inference, economists have for decades been developing methods for drawing causal conclusions from observational data. Applying these methods to federal grantmaking data could quickly and cheaply yield evidence-based recommendations for optimizing federal science funding.
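One such observational method is a regression-discontinuity design: proposals scoring just above and just below a funding cutoff are nearly identical, so comparing their later outcomes approximates a randomized experiment. The sketch below illustrates the idea on entirely simulated data; the scores, cutoff, outcome model, and effect size are all invented for illustration and are not drawn from any real agency dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grant data: review percentiles (lower = better) and a
# funding cutoff at the 20th percentile. All numbers are invented.
n = 5000
score = rng.uniform(1, 100, n)   # peer-review percentile
funded = score <= 20             # proposals at or under the cutoff are funded
# Later publication count: a smooth function of score plus a funding effect.
pubs = 5 - 0.02 * score + 1.5 * funded + rng.normal(0, 1, n)

def rd_estimate(score, outcome, cutoff=20.0, bandwidth=5.0):
    """Sharp regression-discontinuity estimate: compare local linear fits
    on each side of the funding cutoff, evaluated at the cutoff."""
    def fit_at_cutoff(mask):
        slope, intercept = np.polyfit(score[mask], outcome[mask], 1)
        return intercept + slope * cutoff
    left = (score >= cutoff - bandwidth) & (score <= cutoff)   # funded side
    right = (score > cutoff) & (score <= cutoff + bandwidth)   # unfunded side
    return fit_at_cutoff(left) - fit_at_cutoff(right)

effect = rd_estimate(score, pubs)
print(f"Estimated effect of funding at the cutoff: {effect:.2f} publications")
```

Because the simulation builds in a funding effect of 1.5 publications, the estimate should land near that value; on real grantmaking data, the same comparison would recover the causal effect of winning an award near the payline.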

Opening up federal grantmaking data by providing a structured and streamlined access protocol would increase the supply of valuable studies such as those cited above. It would also build on growing governmental interest in evidence-based policymaking. Since its first week in office, the Biden-Harris administration has emphasized the importance of ensuring that “policy and program decisions are informed by the best-available facts, data and research-backed information.” Landmark guidance issued in August 2022 by the White House Office of Science and Technology Policy directs agencies to ensure that federally funded research—and underlying research data—are freely available to the public (i.e., not paywalled) at the time of publication.

On the legislative side, the 2018 Foundations for Evidence-Based Policymaking Act (popularly known as the Evidence Act) calls on federal agencies to develop a “systematic plan for identifying and addressing policy questions” relevant to their missions. The Evidence Act specifies that the general public and researchers should be included in developing these plans. The Evidence Act also calls on agencies to “engage the public in using public data assets [and] providing the public with the opportunity to request specific data assets to be prioritized for disclosure.” The recently proposed Secure Research Data Network Act calls for building exactly the type of infrastructure that would be necessary to share federal grantmaking data in a secure and structured way.

Plan of Action

There is clearly appetite to expand access to and use of federally held evidence assets. Below, we recommend four actions for unlocking the insights contained in NIH- and NSF-held grantmaking data—and applying those insights to improve how federal agencies fund science.

Recommendation 1. Review legal and regulatory frameworks applicable to federally held grantmaking data.

The White House Office of Management and Budget (OMB)’s Evidence Team, working with the NIH’s Office of Data Science Strategy and the NSF’s Evaluation and Assessment Capability, should review existing statutory and regulatory frameworks to see whether there are any legal obstacles to sharing federal grantmaking data. If the review team finds that the NIH and NSF face significant legal constraints when it comes to sharing these data, then the White House should work with Congress to amend prevailing law. Otherwise, OMB—in a possible joint capacity with the White House Office of Science and Technology Policy (OSTP)—should issue a memo clarifying that agencies are generally permitted to share federal grantmaking data in a secure, structured way, and stating any categorical exceptions.

Recommendation 2. Build the infrastructure to provide external stakeholders with secure, structured access to federally held grantmaking data for research. 

Federal grantmaking data are inherently sensitive, containing information that could jeopardize personal privacy or compromise the integrity of review processes. But even sensitive data can be responsibly shared. The NIH has previously shared historical grantmaking data with some researchers, but the next step is for the NIH and NSF to develop a system that enables broader and easier researcher access. Other federal agencies have developed strategies for handling highly sensitive data in a systematic fashion, which can provide helpful precedent and lessons. Examples include:

  1. The U.S. Census Bureau (USCB)’s Longitudinal Employer-Household Data. These data link individual workers to their respective firms, and provide information on salary, job characteristics, and worker and firm location. Approved researchers have relied on these data to better understand labor-market trends.
  2. The Department of Transportation (DOT)’s Secure Data Commons. The Secure Data Commons allows third-party firms (such as Uber, Lyft, and Waze) to provide individual-level mobility data on trips taken. Approved researchers have used these data to understand mobility patterns in cities.

In both cases, the data in question are available to external researchers contingent on agency approval of a research request that clearly explains the purpose of a proposed study, why the requested data are needed, and how those data will be managed. Federal agencies managing access to sensitive data have also implemented additional security and privacy-preserving measures, such as:

Building on these precedents, the NIH and NSF should (ideally jointly) develop secure repositories to house grantmaking data. This action aligns closely with recommendations from the U.S. Commission on Evidence-Based Policymaking, as well as with the above-referenced Secure Research Data Network Act (SRDNA). Both the Commission recommendations and the SRDNA advocate for secure ways to share data between agencies. Creating one or more repositories for federal grantmaking data would be an action that is simultaneously narrower and broader in scope (narrower in terms of the types of data included, broader in terms of the parties eligible for access). As such, this action could be considered either a precursor to or an expansion of the SRDNA, and could be logically pursued alongside SRDNA passage.

Once a secure repository is created, the NIH and NSF should (again, ideally jointly) develop protocols for researchers seeking access. These protocols should clearly specify who is eligible to submit a data-access request, the types of requests that are likely to be granted, and technical capabilities that the requester will need in order to access and use the data. Data requests should be evaluated by a small committee at the NIH and/or NSF (depending on the precise data being requested). In reviewing the requests, the committee should consider questions such as:

  1. How important and policy-relevant is the question that the researcher is seeking to answer? If policymakers knew the answer, what would they do with that information? Would it inform policy in a meaningful way? 
  2. How well can the researcher answer the question using the data they are requesting? Can they establish a clear causal relationship? Would we be comfortable relying on their conclusions to inform policy?

Finally, NIH and NSF should consider including right-to-review clauses in agreements governing sharing of grantmaking data. Such clauses are typical when using personally identifiable data, as they give the data provider (here, the NIH and NSF) the chance to ensure that all data presented in the final research product has been properly aggregated and no individuals are identifiable. The Census Bureau’s Disclosure Review Board can provide some helpful guidance for NIH and NSF to follow on this front.

Recommendation 3. Encourage researchers to utilize these newly available data, and draw on the resulting research to inform possible improvements to grant funding.

The NIH and NSF frequently face questions and trade-offs when deciding if and how to change existing grantmaking processes. Examples include:

Typically, these agencies have very little academic or empirical evidence to draw on for answers. A large part of the problem has been researchers’ lack of access to the data they need to conduct relevant studies. Expanding access, per Recommendations 1 and 2 above, is a necessary part of the solution, but it is not sufficient on its own. Agencies must also invest in attracting researchers to use the data in a socially useful way.

Broadly advertising the new data will be critical. Announcing a new request for proposals (RFP) through the NIH and/or the NSF for projects explicitly using the data could also help. These RFPs could guide researchers toward the highest-impact and most policy-relevant questions, such as those above. The NSF’s “Science of Science: Discovery, Communication and Impact” program would be a natural fit to take the lead on encouraging researchers to use these data.

The goal is to create funding opportunities and programs that give academics clarity on the key issues and questions that federal grantmaking agencies need guidance on; in turn, the evidence academics build should help inform grantmaking policy.

Conclusion

Basic science is a critical input into innovation, which in turn fuels economic growth, health, prosperity, and national security. The NIH and NSF were founded with these critical missions in mind. To fully realize their missions, the NIH and NSF must understand how to maximize scientific return on federal research spending. And to help, researchers need to be able to analyze federal grantmaking data. Thoughtfully expanding access to this key evidence resource is a straightforward, low-cost way to grow the efficiency—and hence impact—of our federally backed national scientific enterprise.

Frequently Asked Questions
How does the NIH currently select research proposals for funding?

For an excellent discussion of this question, see Li (2017). Briefly, the NIH is organized into 27 Institutes and Centers (ICs), which typically correspond to disease areas or body systems. Each IC has an annual budget set by Congress. Research proposals are first evaluated by one of around 180 “study sections,” committees organized by scientific area or method. After being evaluated by the study sections, proposals are returned to their respective ICs. The highest-scoring proposals in each IC are funded, up to budget limits.
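The selection step described above can be sketched as a simple greedy allocation: rank proposals within each IC by score and fund best-first until the IC budget runs out. Everything below (IC names, scores, costs, budgets, and the higher-is-better scoring convention) is invented for illustration; real NIH payline logic is considerably more nuanced.

```python
from typing import NamedTuple

class Proposal(NamedTuple):
    title: str
    ic: str        # Institute/Center handling the proposal (hypothetical)
    score: float   # study-section score (higher = better in this sketch)
    cost: float    # requested budget, $ millions (invented)

def select_awards(proposals, ic_budgets):
    """Fund the best-scoring proposals within each IC until that IC's
    budget is exhausted; skip proposals that no longer fit."""
    remaining = dict(ic_budgets)
    funded = []
    for p in sorted(proposals, key=lambda p: p.score, reverse=True):
        if p.cost <= remaining.get(p.ic, 0.0):
            remaining[p.ic] -= p.cost
            funded.append(p.title)
    return funded

proposals = [
    Proposal("A", "NCI", score=31, cost=2.0),
    Proposal("B", "NCI", score=25, cost=2.5),
    Proposal("C", "NCI", score=12, cost=1.0),
    Proposal("D", "NIA", score=18, cost=3.0),
]
print(select_awards(proposals, {"NCI": 3.5, "NIA": 2.5}))  # → ['A', 'C']
```

Note how proposal B outscores C but goes unfunded because it no longer fits in NCI’s remaining budget, and D fails for NIA despite being its only proposal: exactly the kind of near-the-payline variation that the observational methods discussed earlier can exploit.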

How does the NSF currently select research proposals for funding?

Research proposals are typically submitted in response to announced funding opportunities, which are organized around different programs (topics). Each proposal is sent by the Program Officer to at least three independent reviewers who do not work at the NSF. These reviewers judge the proposal on its Intellectual Merit and Broader Impacts. The Program Officer then uses the independent reviews to make a funding recommendation to the Division Director, who makes the final award/decline decision. More details can be found on the NSF’s webpage.

What data on grant funding at the NIH and NSF are currently (publicly) available?

The NIH and NSF both provide data on approved proposals. These data can be found on the RePORTER site for the NIH and award search site for the NSF. However, these data do not provide any information on the rejected applications, nor do they provide information on the underlying scores of approved proposals.

Masks via Mail: Maintaining Critical COVID-19 Infrastructure for Future Public Health Threats

Summary

To protect against future infectious disease outbreaks, the Department of Health and Human Services (HHS) Coordination Operations and Response Element (H-CORE) should develop and maintain the capacity to regularly deliver N95 respirator masks to every home using a mail delivery system. H-CORE previously developed a mailing system to provide free, rapid antigen tests to homes across the U.S. in response to the COVID-19 pandemic. H-CORE can build upon this system to supply the American public with additional disease prevention equipment––notably face masks. H-CORE can helm this expanded mail-delivery system by (i) gathering technical expertise from partnering federal agencies, (ii) deciding which masks are appropriate for public use, (iii) pulling from a rotating face-mask inventory at the Strategic National Stockpile (SNS), and (iv) centralizing subsequent equipment shipping and delivery. In doing so, H-CORE will fortify the pandemic response infrastructure established during the COVID-19 pandemic, allowing the U.S. government to face future pathogens with preparedness and resilience.

Challenge and Opportunity

The infrastructure put in place to respond to COVID-19 should be maintained and improved to better prepare for and respond to the next pandemic. As the federal government thinks about the future of COVID-19 response programs, it should prioritize maintaining systems that can be flexibly used to address a variety of health threats. One critical capability to maintain is the ability to quickly deliver medical countermeasures across the US. This was already done to provide the American public with COVID-19 rapid tests, but additional medical countermeasures––such as N95 respirators––should also be included. 

N95s are an incredibly effective means of preventing deadly infectious disease spread. Wearing an N95 respirator reduces the odds of testing positive for COVID-19 by 83%, compared to 66% for surgical masks and 56% for cloth masks. The significant difference between N95 respirators and other face coverings means that N95 respirators can provide real public health benefits against a variety of biothreats, not just COVID-19. Adding N95 respirators to H-CORE’s mailing program would increase public access to a highly effective medical countermeasure that protects against a variety of harmful diseases. Providing equitable access to N95 masks can also protect the United States against other dangerous public health emergencies, not just pandemics. Additionally, N95s protect individuals from harmful, wildfire-smoke-derived airborne particles, providing another use case beyond protection against viruses. 

Beyond the benefit of expanding access to masks in particular, it is important to have an active public health mailing system that can be quickly scaled up to respond to emergencies. In times of need, this established mailing system could distribute a wide array of medical countermeasures, medicines, information, and personal protective equipment––including N95s. Thankfully, the agencies needed to coordinate this effort are already primed to do so: they have the momentum, expertise, and experience to convert existing COVID-19 response programs and pandemic preparedness investments into permanent health response infrastructure.

Plan of Action

The newly-elevated Administration for Strategic Preparedness and Response (ASPR) should house the N95 respirator mailing system, granting H-CORE key management and distribution responsibilities. Evolving out of the operational capacities built from Operation Warp Speed, H-CORE has demonstrated strong logistical capabilities in distributing COVID-19 vaccines, therapeutics, and at-home tests across the United States. H-CORE should continue operating some of these preparedness programs to increase public access to key medical countermeasures. At the same time, it should also maintain the flexibility to pivot and scale up these response programs as soon as the next public health emergency arises. 

H-CORE should bolster its free COVID-19 test mailing program and include the option to order one box of 10 free N95 respirator masks every quarter. 

H-CORE partnered with the U.S. Postal Service (USPS) to develop an unprecedented initiative: an online ordering system through which rapid COVID-19 tests are sent via mail to American households. ASPR should maintain its relationships with USPS and other shipping companies to distribute other needed medical supplies, like N95s. To ensure public comfort, a simple N95 ordering website could be designed to mimic the COVID-19 test ordering site.

An N95-distribution program has already been piloted and proven successful. Thanks to ASPR and the National Institute for Occupational Safety and Health (NIOSH), masks previously held in the SNS were made available to the public at select retail pharmacies. This program should be made permanent and expanded to maximize the convenience of obtaining medical countermeasures like masks, which will likely increase the chance that the general population will acquire and use them. Additionally, if supplies are sourced primarily from domestic mask manufacturers, this program can stabilize demand and incentivize further manufacturing within the United States. Keeping production at a steady base level will also make it easier to scale up quickly should America face another pandemic or other public health crisis.

H-CORE and ASPR should coordinate with the SNS to provide N95 respirators through a rotating inventory system.  

As evidenced by the 2009 H1N1 influenza pandemic and the COVID-19 pandemic, statically stockpiling large quantities of masks is not an effective way to prepare for the next bio-incident.

Congress has long recognized the need to shift the stockpiling status quo within HHS, including within the SNS. Recent draft legislation, including the Protecting Providers Everywhere (PPE) in America Act and the PREVENT Pandemics Act, has advocated for a rotating stock system, and the concept is also mentioned in the National Strategy for a Resilient Public Health Supply Chain. But while these documents endorse the concept, they offer few details on what the system would look like in practice or a timeline for its implementation.

Ultimately, the SNS should use a rotating inventory system in which stored masks are rotated out to other uses in the supply chain on a “first in, first out” basis. This will prevent N95s from being stored beyond their recommended shelf-life and encourage continual replenishment of the SNS’s mask stockpile.
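The “first in, first out” rotation described above amounts to a simple queue discipline: the oldest lot in storage is always the next one shipped out, and each shipment is offset by fresh restock. The sketch below is a toy model for illustration only, not any agency’s actual inventory system; the lot names and quantities are invented.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Lot:
    """One batch of masks received on a given date."""
    lot_id: str
    quantity: int

class RotatingStockpile:
    """FIFO rotation: oldest lots ship first, so no lot lingers
    past its shelf-life as long as restocking continues."""

    def __init__(self):
        self._lots = deque()  # leftmost entry is the oldest lot

    def restock(self, lot_id, quantity):
        # New lots join the back of the queue.
        self._lots.append(Lot(lot_id, quantity))

    def ship_oldest(self, quantity):
        # Draw masks from the front (oldest) lots first.
        shipped = 0
        while quantity > 0 and self._lots:
            lot = self._lots[0]
            take = min(lot.quantity, quantity)
            lot.quantity -= take
            shipped += take
            quantity -= take
            if lot.quantity == 0:
                self._lots.popleft()  # lot fully rotated out
        return shipped

    def oldest_lot_id(self):
        return self._lots[0].lot_id if self._lots else None

# Example: two lots enter; a mailing draws down the older lot first.
stock = RotatingStockpile()
stock.restock("2023-Q1", 100)
stock.restock("2023-Q2", 100)
stock.ship_oldest(120)          # exhausts 2023-Q1, dips into 2023-Q2
print(stock.oldest_lot_id())    # -> 2023-Q2
```

Because shipments always empty the front of the queue, the age of the oldest remaining lot is bounded by how quickly stock cycles through the mailing program rather than by a fixed expiration date.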

To make this new rotating inventory system possible, ASPR should pilot rotating inventory through the H-CORE mask mailing program while it decides if and how rotating inventory could be implemented in larger quantities (e.g., rotating stock out to Veterans Affairs, the Department of Defense, and other purchasers). To pilot a rotating inventory system, the Secretary of HHS may enter into contracts and cooperative agreements with vendors through the SNS contracting mechanisms, structuring the contracts to require a constant supply and restock capacity of the stated product in such quantities as the contract specifies. As a guide, the SNS can model these agreements on select pharmaceutical contracts, especially those that have stipulated similar rotating inventory systems (e.g., the radiological countermeasure Neupogen).

The N95 mail-delivery system will allow ASPR, H-CORE, and the SNS to test the rotating stock model in a way that avoids serious risk or negative consequences. The small quantity of N95s needed for the pilot program should not tax the SNS’s supply at large. After all, the aforementioned H-CORE/NIOSH mask-distribution programs are designed similarly to this pilot, and they do not disrupt the SNS supply for healthcare workers.

Conclusion

To be fully prepared for the next public health emergency, the United States must learn from its experience with COVID-19 and continue building the public health infrastructure that proved effective during the pandemic. Widespread distribution of COVID-19 rapid diagnostic tests is one such success story. The logistics and protocols that made this resource dispersal possible should be extended to other flexible medical countermeasures, like N95 respirators. After all, while the need for COVID-19 tests may wane over time, the relevance of N95 respirators will not.

HHS should therefore distribute N95 respirators to the general public through H-CORE to (i) maintain the existing mailing infrastructure and (ii) increase access to a medical countermeasure that effectively impedes transmission of many diseases. The masks for this effort should be sourced from the Strategic National Stockpile. This will not only prevent stock expiration but also pilot rotating inventory as a strategy for larger-scale integration into the SNS. Together, these actions will equip the public with medical countermeasures relevant to a variety of diseases and strengthen a critical distribution program that should be maintained for future pandemic response.

Frequently Asked Questions
What are medical countermeasures?

Medical countermeasures (MCMs) include both pharmaceutical interventions (such as vaccines, antimicrobials, and antivirals) and non-pharmaceutical interventions (such as ventilators, diagnostics, and personal protective equipment) that are used to prevent, mitigate, or treat the adverse health effects of a public health emergency. Examples of MCM deployment during the COVID-19 pandemic include the COVID-19 vaccines, therapeutics for hospitalized COVID-19 patients (e.g., antivirals and monoclonal antibodies), and personal protective equipment (e.g., respirators and gloves) deployed to healthcare providers and the public.

Why should the N95 mask delivery system be housed under HHS and managed through ASPR and H-CORE?

This proposal would build on capabilities already being executed under the Department of Health and Human Services’ Administration for Strategic Preparedness and Response (HHS ASPR). ASPR oversees both H-CORE and the Strategic National Stockpile (SNS) and was recently reclassified from a staff division to an operating division, a change that allows ASPR to better mobilize and respond to health-related emergencies. ASPR established H-CORE at the beginning of 2022 to create a permanent team responsible for coordinating medical countermeasures and strengthening preparedness for future pandemics. While H-CORE is currently focused on providing COVID-19 countermeasures, including vaccines, therapeutics, masks, and test kits, its longer-term mission is to augment capabilities within HHS to address emerging health threats. As such, its mission and expertise match those required to successfully launch an N95 mail-delivery system.

How many masks would be needed for this program?

Presently, 270 million masks have been made available to the U.S. population. It is estimated that this same number of masks would be enough for American households to receive 10 masks per quarter, assuming a 50% participation rate in the program.

How much will the N95 delivery system cost?

The total annual cost of this program is an estimated $280 million to purchase 270 million masks and facilitate shipping across the United States.

How should the N95 delivery system be funded?

There are several ways this initiative could be funded. Initial funding to purchase and mail COVID-19 tests to homes came from the American Rescue Plan. By passing the COVID Supplemental Appropriations Act, Congress could provide supplemental funds to maintain standing COVID-19 programs and help pivot them to address evolving and future health threats.


The FY2023 President’s Budget for HHS also provides ample funding for H-CORE, the SNS, and ASPR, meaning it could also provide alternative funding for an N95 mail-delivery system. Presently, the budget requests $133 million for H-CORE and mentions its role in making masks available nationwide. Additionally, $975 million has been allotted to the SNS, which includes coordination with HHS and maintenance of the stockpile. Furthermore, it petitions for ASPR to receive $12 billion to generally prepare for pandemics and other future biological threats (and here it also specifically recommends strong coordination with HHS agency efforts).

Why are N95 masks important?

N95 respirators have a number of benefits that make them a critical defense in a public health emergency. First, they are pathogen-agnostic, shelf-stable countermeasures that filter airborne particles very efficiently, meaning they can impede transmission of a variety of diseases, especially airborne and aerosolized ones, which are the most likely naturally occurring and intentional biothreats. Second, N95 respirators are useful beyond pandemic response, protecting against hazards such as wildfire smoke. Given these qualities and their long shelf-life, the ability to quickly and widely distribute N95s is a critical public health preparedness measure.

Why should the U.S. government fund increased N95 manufacturing capacity?

Domestic mask manufacturers have frequently experienced boom-and-bust cycles, as public demand for masks can change rapidly and without warning. This inconsistent market makes it difficult for manufacturers to invest in increased manufacturing capacity over the long term. One example is the company Prestige Ameritech, which invested over $1 million in new equipment and hired 150 new workers to produce masks in response to the 2009 swine flu outbreak. By the time production was ready, however, demand for masks had dropped and the company nearly went bankrupt. Given the overwhelmingly positive benefits of having mask manufacturing capacity available when needed, it is worthwhile for the government to provide some ongoing demand certainty.


Furthermore, making masks free and easily available to the general public could increase mask usage during the annual flu season and other periods of sickness. While personal protective equipment has decreased in cost since the peak of the pandemic, making it as accessible as possible will disproportionately increase access for low-income citizens and help ensure equitable access to protective medical countermeasures.

Can N95 respirators be deployed to the public if they are only approved for use in a healthcare setting?

It is true that N95s are not regulated outside of healthcare settings, but that should not dissuade public use. No federal agency is currently tasked with regulating respiratory protection for the public. The Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) National Institute for Occupational Safety and Health (NIOSH) currently have a Memorandum of Understanding (MOU) coordinating regulatory authority over N95 respirators for medical use. Neither the FDA nor NIOSH, though, has jurisdiction over mask use in a non-medical, non-occupational setting. Using an N95 respirator outside of a medical setting does not satisfy all of the regulatory requirements, like undergoing a fit-test to ensure a proper seal. However, using N95 respirators for everyday respiratory protection (i) provides better protection than no mask, a cloth mask, or a surgical mask, and (ii) realistically should not need to meet the same regulatory standards as medical use, since people are not regularly exposed to the same level of risk as medical professionals.

Who currently regulates N95 safety standards?

Presently, there is no central regulator for public respiratory protection. In fact, the National Academies of Sciences, Engineering, and Medicine recently issued a recommendation for Congress to “expeditiously establish a coordinating entity within the Department of Health and Human Services (HHS) with the necessary responsibility, authority, and resources (financial, personnel, and infrastructure) to provide a unified and authoritative source of information and effective oversight in the development, approval, and use of respiratory protective devices that can meet the needs of the public and protect the public health.”


Moving forward, NIOSH alone should regulate N95 use for the public, just as it does in occupational settings. The approval processes used by other regulators, like the FDA, are more restrictive than necessary for public use. The FDA’s standards for medical protection understandably need to be high in order to protect doctors, nurses, and other medical professionals against a wide variety of dangerous exposure situations. NIOSH can provide alternative regulation and guidance for the general public, who are unlikely to face similar circumstances.


Aside from federal agencies, professional scientific societies have also weighed in on N95 standards. The American Society for Testing and Materials (ASTM), for example, recently published standards for barrier face coverings not intended for medical use or currently regulated under NIOSH standards. While ASTM does not have any regulatory or enforcement authority, HHS could use its standards for protection, comfort, and usability as a starting point for developing guidelines for respirators suitable for public distribution and use.

Why use a rotating inventory system?

After the 2009 H1N1 influenza pandemic and the COVID-19 pandemic, it became evident that the SNS must change its stockpile management practices. The stockpile’s reserves of N95 respirators were not sufficiently replenished after the 2009 H1N1 pandemic, in large part due to the significant up-front cost of restocking supplies. During the early days of the COVID-19 response, many states received expired respirators and broken ventilators from the SNS. These incidents revealed a number of issues with the current stockpiling paradigm. Shifting to a rotating inventory system would prevent expiration issues, smooth out the costs of large periodic restocks, and help maintain a capable and responsive manufacturing base.

Strengthening Policy by Bringing Evidence to Life

Summary

In a 2021 memorandum, President Biden instructed all federal executive departments and agencies to “make evidence-based decisions guided by the best available science and data.” This policy is sound in theory but increasingly difficult to implement in practice. With millions of new scientific papers published every year, parsing and acting on research insights presents a formidable challenge.

A solution, and one that has proven successful in helping clinicians effectively treat COVID-19, is to take a “living” approach to evidence synthesis. Conventional systematic reviews, meta-analyses, and associated guidelines and standards are published as static products and are updated infrequently (e.g., every four to five years)—if at all. This approach is inefficient and produces evidence products that quickly go out of date. It also leads to research waste and poorly allocated research funding.

By contrast, emerging “Living Evidence” models treat knowledge synthesis as an ongoing endeavor. By combining (i) established, scientific methods of summarizing science with (ii) continuous workflows and technology-based solutions for information discovery and processing, Living Evidence approaches yield systematic reviews (and other evidence and guidance products) that are always current.

The recent launch of the White House Year of Evidence for Action provides a pivotal opportunity to harness the Living Evidence model to accelerate research translation and advance evidence-based policymaking. The federal government should consider a two-part strategy to embrace and promote Living Evidence. The first part of this strategy positions the U.S. government to lead by example by embedding Living Evidence within federal agencies. The second part focuses on supporting external actors in launching and maintaining Living Evidence resources for the public good.

Challenge and Opportunity

We live in a time of veritable “scientific overload”. The number of scientific papers in the world has surged exponentially over the past several decades (Figure 1), and millions of new scientific papers are published every year. Making sense of this deluge of documents presents a formidable challenge. For any given topic, experts have to (i) scour the scientific literature for studies on that topic, (ii) separate out low-quality (or even fraudulent) research, (iii) weigh and reconcile contradictory findings from different studies, and (iv) synthesize study results into a product that can usefully inform both societal decision-making and future scientific inquiry.

This process has evolved over several decades into a scientific method known as “systematic review” or “meta-analysis”. Systematic reviews and meta-analyses are detailed and credible, but often take over a year to produce and rapidly go out of date once published. Experts often compensate by drawing attention to the latest research in blog posts, op-eds, “narrative” reviews, informal memos, and the like. But while such “quick and dirty” scanning of the literature is timely, it lacks scientific rigor. Hence those relying on “the best available science” to make informed decisions must choose between summaries of science that are reliable or current…but not both.

The lack of trustworthy and up-to-date summaries of science constrains efforts, including efforts championed by the White House, to promote evidence-informed policymaking. It also leads to research waste when scientists conduct research that is duplicative and unnecessary, and degrades the efficiency of the scientific ecosystem when funders support research that does not address true knowledge gaps.

Figure 1

Total number of scientific papers published over time, according to the Microsoft Academic Graph (MAG) dataset. (Source: Herrmannova and Knoth, 2016)

The emerging Living Evidence paradigm solves these problems by treating knowledge synthesis as an ongoing rather than static endeavor. By combining (i) established, scientific methods of summarizing science with (ii) continuous workflows and technology-based solutions for information discovery and processing, Living Evidence approaches yield systematic reviews that are always up to date with the latest research. An opinion piece published in The New York Times called this approach “a quiet revolution to surface the best-available research and make it accessible for all.”

To take a Living Evidence approach, multidisciplinary teams of subject-matter experts and methods experts (e.g., information specialists and data scientists) first develop an evidence resource—such as a systematic review—using standard approaches. But the teams then commit to regular updates of the evidence resource at a frequency that makes sense for their end users (e.g., once a month). Using technologies such as natural-language processing and machine learning, the teams continually monitor online databases to identify new research. Any new research is rapidly incorporated into the evidence resource using established methods for high-quality evidence synthesis. Figure 2 illustrates how Living Evidence builds on and improves traditional approaches for evidence-informed development of guidelines, standards, and other policy instruments.
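The monitor-screen-incorporate cycle described above can be sketched as a simple update loop. This is an illustrative toy under stated assumptions: the `search` and `screen` functions below are invented stand-ins for real literature-database queries and NLP/ML-assisted screening, and the paper records are fabricated for the example.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LivingReview:
    """Toy model of a living systematic review: a baseline synthesis
    plus a scheduled surveillance-and-update cycle."""
    topic: str
    included: list = field(default_factory=list)
    last_search: date = date.min  # baseline review covers everything before this

    def update(self, search_fn, screen_fn, today):
        # 1. Monitor: find studies published since the last search.
        hits = search_fn(self.topic, self.last_search)
        # 2. Screen: keep only studies judged relevant (in practice,
        #    an NLP/ML triage step followed by human review).
        relevant = [paper for paper in hits if screen_fn(paper)]
        # 3. Incorporate: fold new studies into the evidence resource.
        self.included.extend(relevant)
        self.last_search = today
        return len(relevant)

# Hypothetical stand-ins for a bibliographic database and a screener.
PAPERS = [
    {"title": "Trial A", "published": date(2022, 3, 1), "relevant": True},
    {"title": "Commentary B", "published": date(2022, 3, 5), "relevant": False},
]

def search(topic, since):
    return [p for p in PAPERS if p["published"] > since]

def screen(paper):
    return paper["relevant"]

review = LivingReview(topic="stroke rehabilitation")
added = review.update(search, screen, today=date(2022, 4, 1))
print(added)  # -> 1 (only "Trial A" passes screening)
```

Running `update` on a schedule (say, monthly) is what keeps the synthesis “living”: each cycle only processes the increment of new literature, rather than redoing the full review.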

Figure 2

Illustration of how a Living Evidence approach to development of evidence-informed policies (such as clinical guidelines) is more current and reliable than traditional approaches. (Source: Author-developed graphic)

Living Evidence products are more trusted by stakeholders, enjoy greater engagement (up to a 300% increase in access/use, based on internal data from the Australian Stroke Foundation), and support improved translation of research into practice and policy. Living Evidence holds particular value for domains in which research evidence is emerging rapidly, current evidence is uncertain, and new research might change policy or practice. For example, Nature has credited Living Evidence with “help[ing] chart a route out” of the worst stages of the COVID-19 pandemic. The World Health Organization (WHO) has since committed to using the Living Evidence approach as the organization’s “main platform” for knowledge synthesis and guideline development across all health issues. 

Yet Living Evidence approaches remain underutilized in most domains. Many scientists are unaware of them, and the minority who are familiar often lack the tools and incentives to carry out Living Evidence projects directly. The result is an “evidence to action” pipeline far leakier than it needs to be, even as entities like government agencies need credible and up-to-date evidence to efficiently and effectively translate knowledge into impact.

It is time to change the status quo. The 2019 Foundations for Evidence-Based Policymaking Act (“Evidence Act”) advances “a vision for a nation that relies on evidence and data to make decisions at all levels of government.” The Biden Administration’s “Year of Evidence” push has generated significant momentum around evidence-informed policymaking. Demonstrated successes of Living Evidence approaches with respect to COVID-19 have sparked interest in these approaches specifically. The time is ripe for the federal government to position Living Evidence as the “gold standard” of evidence products—and the United States as a leader in knowledge discovery and synthesis.

Plan of Action

The federal government should consider a two-part strategy to embrace and promote Living Evidence. The first part of this strategy positions the U.S. government to lead by example by embedding Living Evidence within federal agencies. The second part focuses on supporting external actors in launching and maintaining Living Evidence resources for the public good. 

Part 1. Embedding Living Evidence within federal agencies

Federal science agencies are well positioned to carry out Living Evidence approaches directly. Living Evidence requires “a sustained commitment for the period that the review remains living.” Federal agencies can support the continuous workflows and multidisciplinary project teams needed for excellent Living Evidence products.

In addition, Living Evidence projects can be very powerful mechanisms for building effective, multi-stakeholder partnerships that last—a key objective for a federal government seeking to bolster the U.S. scientific enterprise. A recent example is Wellcome Trust’s decision to fund suites of living systematic reviews in mental health as a foundational investment in its new mental-health strategy, recognizing this as an important opportunity to build a global research community around a shared knowledge source. 

Greater interagency coordination and external collaboration will facilitate implementation of Living Evidence across government. As such, President Biden should issue an Executive Order establishing a Living Evidence Interagency Policy Committee (LEIPC) modeled on the effective Interagency Arctic Research Policy Committee (IARPC). The LEIPC would be chartered as an Interagency Working Group of the National Science and Technology Council (NSTC) Committee on Science and Technology Enterprise, and chaired by the Director of the White House Office of Science and Technology Policy (OSTP; or their delegate). Membership would comprise representatives from federal science agencies (including agencies that currently create and maintain evidence clearinghouses and other agencies deeply invested in evidence-informed decision making), as well as non-governmental experts with deep experience in the practice of Living Evidence and/or associated capabilities (e.g., information science, machine learning).

Supporting federal implementation of Living Evidence

Widely accepted guidance for living systematic reviews (LSRs), one type of Living Evidence product, has been published. The LEIPC, working closely with OSTP, the White House Office of Management and Budget (OMB), and the federal Evaluation Officer Council (EOC), should adapt this guidance for the U.S. federal context, resulting in an informational resource for federal agencies seeking to launch or fund Living Evidence projects. The guidance should also be used to update the systematic-review processes used by federal agencies and organizations contributing to national evidence clearinghouses.

Once the federally tailored guidance has been developed, the White House should direct federal agencies to consider and pursue opportunities to embed Living Evidence within their programs and operations. The policy directive could take the form of a Presidential Memorandum, a joint management memo from the heads of OSTP and OMB, or similar. This directive would (i) emphasize the national benefits that Living Evidence could deliver, and (ii) provide agencies with high-level justification for using discretionary funding on Living Evidence projects and for making decisions based on Living Evidence insights.

Identifying priority areas and opportunities for federally managed Living Evidence projects

The LEIPC—again working closely with OSTP, OMB, and the EOC—should survey the federal government for opportunities to deploy Living Evidence internally. Box 1 provides examples of opportunities that the LEIPC could consider.

The product of this exercise should be a report that describes each of the opportunities identified, and recommends priority projects to pursue. In developing its priority list, the LEIPC should account for both the likely impact of a potential Living Evidence project as well as the near-term feasibility of that project. While the report could outline visions for ambitious Living Evidence undertakings that would require a significant time investment to realize fully (e.g., transitioning the entire National Climate Assessment into a frequently updated “living” mode), it should also scope projects that could be completed within two years and serve as pilots/proofs of concept. Lessons learned from the pilots could ultimately inform a national strategy for incorporating Living Evidence into federal government more systematically. Successful pilots could continue and grow beyond the end of the two-year period, as appropriate.

Fostering greater collaboration between government and external stakeholders

The LEIPC should create an online “LEIPC Collaborations” platform that connects researchers, practitioners, and other stakeholders both inside and outside government. The platform would emulate IARPC Collaborations, which has built out a community of more than 3,000 members and dozens of communities of practice dedicated to the holistic advancement of Arctic science.

LEIPC Collaborations could deliver the same participatory opportunities and benefits for members of the evidence community, facilitating holistic advancement of Living Evidence.

Part 2. Make it easier for scientists and researchers to develop LSRs

Many government efforts could be supported by internal Living Evidence initiatives, but not every valuable Living Evidence effort should be conducted by government. Many useful Living Evidence programs will require deep domain knowledge and specialized skills that teams of scientists and researchers working outside of government are best positioned to deliver.

But experts interested in pursuing Living Evidence efforts face two major difficulties. The first is securing funding. Very little research funding is awarded for the sole purpose of conducting systematic reviews and other types of evidence syntheses. The funding that is available is typically not commensurate with the resource and personnel needs of a high-quality synthesis. Living Evidence demands efficient knowledge discovery and the involvement of multidisciplinary teams possessing overlapping skill sets. Yet federal research grants are often structured in a way that precludes principal investigators from hiring research software engineers or from founding co-led research groups.

The second is misaligned incentives. Systematic reviews and other types of evidence syntheses are often not recognized as “true” research outputs by funding agencies or university tenure committees (that is, they are often not given the same weight in research metrics), despite (i) utilizing well-established scientific methodologies involving detailed protocols and advanced data and statistical techniques, and (ii) resulting in new knowledge. The result is that talented experts are discouraged from investing their time in projects that could contribute significant new insights and dramatically improve the efficiency and impact of our nation’s research enterprise.

To begin addressing these problems, the two biggest STEM-funding agencies—NIH and NSF—should consider the following actions:

  1. Perform a landscape analysis of federal funding for evidence synthesis. Rigorously documenting the funding opportunities available (or lack thereof) for researchers wishing to pursue evidence synthesis will help NIH and NSF determine where to focus potential new opportunities. The landscape analysis should consider currently available funding opportunities for systematic, scoping, and rapid reviews, and could also include surveys and focus groups to assess the appetite in the research community for pursuing additional evidence-synthesis activities if supported.
  2. Establish new grant opportunities designed to support Living Evidence projects. The goal of these grant opportunities would be to deliver definitive and always up-to-date summaries of research evidence and associated data in specified topics. The opportunities could align with particular research focuses (for instance, a living systematic review on tissue-electronic interfacing could facilitate progress on bionic limb development under NSF’s current “Enhancing Opportunities for Persons with Disabilities” Convergence Accelerator track). The opportunities could also be topic-agnostic, but require applicants to justify a proposed project by demonstrating that (i) the research evidence is emerging rapidly, (ii) current evidence is uncertain, and (iii) new research might materially change policy or practice.
  3. Increase support for career research staff in academia. Although contributors to Living Evidence projects can cycle in and out (analogous to turnover in large research collaboratives), such projects benefit from longevity in a portion of the team. With this core team in place, Living Evidence projects are excellent avenues for graduate students to build core research skills, including in research study design. 
  4. Leverage prestigious existing grant programs and awards to incentivize work on Living Evidence. For instance, NSF could encourage early-career faculty to propose LSRs in applications for CAREER grants.
  5. Recognize evidence syntheses as research outputs. In all assessments of scientific track record (particularly research-funding schemes), systematic reviews and other types of rigorous evidence synthesis should be recognized as research outputs equivalent to “primary” research. 


Conclusion

Policymaking can only be meaningfully informed by evidence if underpinning systems for evidence synthesis are robust. The Biden administration’s Year of Evidence for Action provides a pivotal opportunity to pursue concrete actions that strengthen use of science for the betterment of the American people. Federal investment in Living Evidence is one such action. 

Living Evidence has emerged as a powerful mechanism for translating scientific discoveries into policy and practice. The Living Evidence approach is being rapidly embraced by international actors, and the United States has an opportunity to position itself as a leader. A federal initiative on Living Evidence will contribute additional energy and momentum to the Year of Evidence for Action, ensure that our nation does not fall behind on evidence-informed policymaking, and arm federal agencies with the most current and best-available scientific evidence as they pursue their statutory missions.

Frequently Asked Questions
Which sectors and scientific fields can use Living Evidence?
The Living Evidence model can be applied to any sector or scientific field. While the Living Evidence model has so far been most widely applied to the health sector, Living Evidence initiatives are also underway in other fields, such as education and climate sciences. Living Evidence is domain-agnostic: it is simply an approach that builds on existing, rigorous evidence-synthesis methods with a novel workflow of frequent and rapid updating.
What is needed to run a successful Living Evidence project?
It does not take long for teams to develop sufficient experience and expertise to apply the Living Evidence model. The key to a successful Living Evidence project is a team that possesses experience in conventional evidence synthesis, strong project-management skills, an orientation towards innovation and experimentation, and investment in building stakeholder engagement.
How much does Living Evidence cost?
As with evidence synthesis in general, cost depends on topic scope and the complexity of the evidence being appraised. Budgeting for Living Evidence projects should distinguish the higher cost of conducting an initial “baseline” systematic review from the lower cost of maintaining the project thereafter. Teams initiating a Living Evidence project for the first time should also budget for the inevitable experimentation and training required.
Do Living Evidence initiatives require recurrent funding?
No. Living Evidence initiatives are analogous to other significant scientific programs that may extend over many years, but receive funding in discrete, time-bound project periods with clear deliverables and the opportunity to apply for continuation funding. 


Living Evidence projects do require funding for enough time to complete the initial “baseline” systematic review (typically 3–12 months, depending on scope and complexity), transition to maintenance (“living”) mode, and continue in living mode for sufficient time (usually about 6–12 months) for all involved to become familiar with maintaining and using the living resource. Hence Living Evidence projects work best when fully funded for a minimum of two years.
If there is support for funding beyond this minimum period, there are operational advantages to securing the follow-on funding before the previous funding period concludes. If follow-on funding is not immediately available, Living Evidence resources can simply revert to a conventional static form, returning to living mode if and when follow-on funding becomes available.

Is Living Evidence sustainable?
Living Evidence is rapidly gaining momentum as organizations conclude that the conventional model of evidence synthesis is no longer sustainable because the volume of research that must be reviewed and synthesized for each update has grown beyond the capacity of typical project teams. Organizations that transition their evidence resources into “living” mode typically find the dynamic synthesis model to be more consistent, more feasible, easier to manage, and easier to plan for and resource. If the conventional model of intermittent synthesis is like climbing a series of mountains, the Living Evidence approach is like hiking up to and then walking across a plateau.
How can organizations that are already struggling to develop and update conventional evidence resources take on a Living Evidence project?
New initiatives usually need specific resourcing; Living Evidence is no different. The best approach is to identify a champion within the organization who has an innovation orientation and sufficient authority to effect change. The champion plays a key role in building organizational buy-in, particularly from senior leaders, key influencers within the main evidence program, and major partners, stakeholders, and end users. Ultimately, the champion (or their surrogate) should be empowered and resourced to establish 1–3 Living Evidence pilots running alongside the organization’s existing evidence activities. Risk can be reduced by starting small and building a “minimum viable product” Living Evidence resource (i.e., by finding a topic area that is relatively modest in scope, of importance to stakeholders, and characterized by evidence uncertainty as well as relatively rapid movement in the relevant research field). Funding should be structured to enable experimentation and iteration, and then to move quickly to scale up, increasing the scope of evidence moving into living mode as organizational and stakeholder experience and support build.
Living Evidence sounds neverending. Wouldn’t that lead to burnout in the project team?
One of the advantages of the Living Evidence model is that the project team can gradually evolve over time (members can join and leave as their interests and circumstances change). This is analogous to the evolution of an ongoing scientific network or research collaborative. In contrast, the spikes in workload required for intermittent updates of conventional evidence products often lead to burnout and loss of institutional memory. Furthermore, teams working on Living Evidence are often motivated by participation in an innovative approach to evidence and pride in contributing to a definitive, high-quality, and highly impactful scientific initiative.
How is Living Evidence disseminated?

While dissemination of conventional evidence products involves sharing several dozen key messages in a once-in-several-years communications push, dissemination of Living Evidence amounts to a regular cycle of “what’s new” updates (typically one to two key insights). Living Evidence dissemination feeds become known and trusted by end users, inspiring confidence that end users can “keep up” with the implications of new research. Publication of Living Evidence can take many forms. Typically, the core evidence resource is housed on an organizational website that can be easily and frequently updated, sometimes with the ability for users to access previous versions of the resource. Living Evidence may also be published as articles in academic journals, either as intermittent overviews of the evidence resource with links back to the main Living Evidence summaries or (more ambitiously) as a series of frequently updated, logically linked versions of an article. Multiple academic journals are innovating to better support “living” publications.

If Living Evidence products are continually updated, doesn’t that confuse end users with constantly changing conclusions?
Living Evidence requires continual monitoring for new research, as well as frequent and rapid incorporation of new research into existing evidence products. The volume of research identified and incorporated can vary from dozens of studies each month to a few each year, depending on the topic scope and research activity.


Even across broad topics in fast-moving research fields, though, the overall findings and conclusions of Living Evidence products change infrequently, since the threshold for changing a conclusion drawn from a whole body of evidence is high. The largest Living Evidence projects in existence yield only about one or two new major findings or recommendations per update. Furthermore, any good evidence-synthesis product will contextualize its conclusions and recommendations with an indication of the confidence warranted by the underlying evidence.

What are the implications of Living Evidence for stakeholder engagement?
Living Evidence projects, due to their persistent nature, are great opportunities for building partnerships with stakeholders. Stakeholders tend to be energized and engaged in an innovative project that gives them, their staff, and their constituencies a tractable mechanism by which to engage with the “current state of science”. In addition, the ongoing nature of a Living Evidence project means that project partnerships are always active. Stakeholders are continually engaged in meaningful, collaborative discussions and activities around the current evidence. Finally, this ongoing, always-active nature of Living Evidence projects creates “accumulative” partnerships that gradually broaden and deepen over time.
What are the equity implications of taking a Living Evidence approach?
Living Evidence resources make the latest science available to all. Conventionally, the lack of high-quality summaries of science has meant the latest science is discovered and adopted by those closest to centers of excellence and expertise. Rapid incorporation of the latest science into Living Evidence resources—as well as the wide promotion and dissemination of that science—means that the immediate benefits of science can be shared much more broadly, contributing to equity of access to science and its benefits.
What are the implications of Living Evidence for knowledge translation?
The activities that use research outputs and evidence resources (such as Living Evidence) to change practice and policy are often referred to as “knowledge translation”. These activities are substantial and often multifaceted interventions that identify and address the complex structural, organizational, and cultural barriers that impede knowledge use. 


Living Evidence has the potential to accelerate knowledge translation: not because of any changes to the knowledge-translation enterprise, but because Living Evidence identifies earlier the high-certainty evidence that underpins knowledge-translation activities.

Living Evidence may also enhance knowledge translation in two ways. First, Living Evidence is a better evidence product and has been shown to increase trust, engagement, and intention to use among stakeholders. Second, as mentioned above, Living Evidence creates opportunities for deep and effective partnerships. Together, these advantages could position Living Evidence to yield a more effective “enabling environment” for knowledge translation.

Does Living Evidence require use of technologies like machine learning?
Technologies such as natural language processing, machine learning, and citizen science (crowdsourcing), as well as efforts to build common data structures (and create Findable, Accessible, Interoperable, and Reusable (FAIR) data), are advancing alongside Living Evidence. These technologies are often described as “enablers” of Living Evidence. While such technologies are commonly used and developed in Living Evidence projects, they are not essential. Nevertheless, over the longer term, such technologies will likely be indispensable for creating sustainable systems that make sense of science.

Creating a Digital Service for the Planet

Summary

Challenge and Opportunity

The Biden administration—through directives such as Executive Order 14008 on Tackling the Climate Crisis at Home and Abroad and President Biden’s Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking, as well as through initiatives such as Justice40 and America the Beautiful (30×30)—has laid the blueprint for a data-driven environmental agenda. 

However, the data to advance this agenda are held and managed by multiple agencies, making them difficult to standardize, share, and use to their full potential. For example, water data are collected by 25 federal entities across 57 data platforms and 462 different data types. Permitting for wetlands, forest fuel treatments, and other important natural-resource management tasks still involves a significant amount of manual data entry, and protocols for handling relevant data vary by region or district. Staff at environmental agencies have privately noted that it can take weeks or months to receive necessary data from colleagues in other agencies, and that they have trouble knowing what data exist at other agencies. Accelerating the success and breadth of environmental initiatives requires digitized, timely, and accessible information for planning and execution of agency strategies.

The state of federal environmental data today echoes the state of public-health data in 2014, when President Obama recognized that the Department of Health and Human Services lacked the technical skill sets and capacity needed to stand up HealthCare.gov. The Obama administration responded by creating the U.S. Digital Service (USDS), which provides federal agencies with on-demand access to the technical expertise they need to design, procure, and deploy technology for the public good. Over the past eight years, USDS has developed a scalable and replicable model of working across government agencies. Projects that USDS has been involved in—like improving federal procurement and hiring processes, deploying HealthCare.gov, and modernizing administrative tasks for veterans and immigrants—have saved agencies such as the Department of Veterans Affairs millions of dollars.

But USDS lacks the specialized capacity, skills, experience, and specific directives needed to fully meet the shared digital-infrastructure needs of environmental agencies. The Climate and Economic Justice Screening Tool (CEJST) illustrates both how crucial digital-service capacity is for tackling the nation’s environmental priorities and why a Digital Service for the Planet (DSP) is needed. While USDS was instrumental in getting the tool off the ground, several issues with the launch point to a lack of specialized environmental capabilities and expertise within USDS. Many known environmental-justice issues—including wildfire, drought, and flooding—were not reflected in the tool’s first iteration. In addition, the CEJST should have been published in July 2021, but the beta version was not released until February 2022. A DSP familiar with environmental data would have started with a stronger foundation, helping it anticipate and incorporate such key environmental concerns, and might have been able to deliver the tool on a tighter timeline.

There is hope in this challenge. Because many environmental programs across multiple federal agencies have overlapping data and technology needs, a centralized and dedicated team focused on addressing those needs could significantly and cost-effectively advance the data and technology capacities of environmental agencies.

Plan of Action

To best position federal agencies to meet environmental goals, the Biden administration should establish a “Digital Service for the Planet” (DSP). The DSP would build on the successes of USDS to provide environmental agencies with support across three key areas:

  1. Strategic planning and procurement. Scoping, designing, and procuring technology solutions for programmatic goals. For example, a DSP could help the Fish and Wildlife Service (FWS) accelerate updates to the National Wetlands Inventory, which are currently estimated to take 10 years and cost $20 million.
  2. Technical development. Implementing targeted technical-development activities to achieve mission-related goals in collaboration with agency staff. For example, a DSP could help improve the accessibility and utility of government tools on which the public relies heavily, such as the Army Corps system that tracks mitigation banks (the Regulatory In-lieu fee and Bank Information Tracking System, or RIBITS).
  3. Cross-agency coordination on digital infrastructure. Facilitating data inventory and sharing, and development of the databases, tools, and technological processes that make cross-agency efforts possible. A DSP could be a helpful partner for facilitating information sharing among agencies that monitor interrelated events, environments, or problems, including droughts, wildfires, and algal blooms. 

The DSP could be established either as a new branch of USDS, or as a new and separate but parallel entity housed within the White House Office of Management and Budget. The former option would enable DSP to leverage the accumulated knowledge and existing structures of USDS. The latter option would enable DSP to be established with a more focused mandate, and would also provide a clear entry point for federal agencies seeking data and technology support specific to environmental issues.

Regardless of the organizational structure selected, DSP should include the essential elements that have helped USDS succeed—per the following recommendations.

Recommendation 1. The DSP should emulate the USDS’s staffing model and position within the Executive Office of the President (EOP).

The USDS hires employees on short-term contracts, with each contract term lasting between six months and four years. This contract-based model enables USDS to attract high-level technologists, product designers, and programmers who are interested in public service, but not necessarily committed to careers in government. USDS’s staffing model also ensures that the Service does not take over core agency capacities, but rather is deployed to design and procure tech solutions that agencies will ultimately operate in-house (i.e., without USDS involvement). USDS’s position within the EOP makes USDS an attractive place for top-level talent to work, gives staff access to high-level government officials, and enables the Service to work flexibly across agencies.

Recommendation 2. Staff the DSP with specialists who have prior experience working on environmental projects.

Working on data and technology issues in environmental contexts requires specialized skill sets and experience. For example, geospatial data and analysis are fundamental to environmental protection and conservation, but they have not been a focal point of USDS hiring. In addition, DSP staff fluent in the specialized terminology used in environmental fields (such as water management) will be better able to communicate with the many subject-matter experts and data stewards working in environmental agencies. 

Recommendation 3. Place interagency collaboration at the core of the DSP mission.

Most USDS projects focus on a single federal agency, but environmental initiatives—and the data and tech needs they present—almost always involve multiple agencies. Major national challenges, including flood-risk management, harmful algal blooms, and environmental justice, all demand an integrated approach to realize cross-agency benefits. For example, EPA-funded green stormwater infrastructure could reduce flood risk for housing units subsidized by the Department of Housing and Urban Development. DSP should be explicitly tasked with devising approaches for tackling complex data and technology issues that cut across agencies. Fulfilling this mandate may require DSP to bring on additional expertise in core competencies such as data sharing and integration.

Recommendation 4. Actively promote the DSP to relevant federal agencies.

Despite USDS’s eight-year existence, many staff members at agencies involved in environmental initiatives know little about the Service and what it can do for them. To avoid underutilization due to lack of awareness, the DSP’s launch should include an outreach campaign targeted at key agencies, including but not limited to the U.S. Army Corps of Engineers (USACE), the Department of Energy (DOE), the Department of the Interior (DOI), the Environmental Protection Agency (EPA), the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Department of Agriculture (USDA), and the U.S. Global Change Research Program (USGCRP).

Conclusion

A new Digital Service for the Planet could accelerate progress on environmental and natural-resource challenges through better use of data and technology. USDS has shown that a relatively small and flexible team can have a profound and lasting effect on how agencies operate, save taxpayer money, and encourage new ways of thinking about longstanding problems. However, current capacity at USDS is limited and not specifically tailored to the needs of environmental agencies. On issues ranging from water management to environmental justice, ensuring better use of technology and data will yield benefits for generations to come. Establishing a DSP is an important step toward making the federal government a better buyer, partner, and consumer of the data, technology, and innovations necessary to support the country’s conservation, water, and stewardship priorities.

Frequently Asked Questions
How would the DSP differ from the U.S. Digital Service?

The DSP would build on the successful USDS model, but would have two distinguishing characteristics. First, the DSP would employ staff experienced in using or managing environmental data and possessing special expertise in geospatial technologies, remote sensing, and other environmentally relevant tech capabilities. Second, DSP would have an explicit mandate to develop processes for tackling data and technology issues that frequently cut across agencies. For example, the Internet of Water found that at least 25 different federal entities collect water data, while the USGCRP has identified at least 217 examples of earth observation efforts spanning many agencies. USDS is not designed to work with so many agencies at once on a single project—but DSP would be.

Would establishing the DSP prohibit agencies from independently improving their data and tech practices? 

Not in most cases. The DSP would focus on meeting data and technology needs shared by multiple agencies. Agencies would still be free—and encouraged!—to pursue agency-specific data- and tech-improvement projects independently.


Indeed, a hope would be that by showcasing the value of digital services for environmental projects on a cross-agency basis, the DSP would inspire individual agencies to establish their own digital-service teams. Precedent for this evolution exists: USDS provided initial resources to solve digital challenges for HealthCare.gov and the Department of Veterans Affairs, and the Department of Veterans Affairs and Department of Defense have since established their own internal digital-service teams. However, even with agency-based digital-service teams, there will always be a need for a team with a cross-agency view, especially given that so many environmental problems and solutions extend well beyond the borders of a single agency. Digital-service teams at multiple levels can be complementary, focusing on different project scopes and groups of users. For example, agency-specific digital-service teams would be much better positioned to sustain agency-specific components of an effort established by DSP.

How much would this proposal cost?

We propose that the DSP start with a mid-sized team of twenty to thirty full-time-equivalent employees (FTEs) and a budget of around $8 million. These personnel and financial allocations are in line with those of USDS. DSP could be scaled up over time if needed, just as USDS grew from approximately 12 FTEs in fiscal year (FY) 2014 to over 200 FTEs in FY 2022. The long-term target size of the DSP team should be informed by the uptake and success of DSP-led work.

Why would agencies want a DSP? Why would they see it as beneficial?

From our conversations with agency staff, we (the authors) have heard time and again that agencies see immense value in a DSP. We have also heard that two scenarios often inhibit adoption of environmental data and technology. The first scenario is that environmental-agency staff see the value in pursuing a technology solution to make their program more effective, but lack the authority or resources to implement the idea, or are unaware of the avenues available to do so. A DSP could help such staff design and implement modern solutions to realize their vision and coordinate with important stakeholders to facilitate the process.


The second scenario is that environmental-agency staff are trained experts in environmental science, but not in evaluating technology solutions. As such, they are poorly equipped to evaluate the integrity of proposed solutions from external vendors. If they end up trialing a solution that is a poor fit, they may become risk-averse to technology at large. In this scenario, there is tremendous value in having a dedicated team of experts within the government available to help agencies source the appropriate technology or technologies for their programmatic goals.

Expanding Pathways for Career Research Scientists in Academia

Summary

The U.S. university research enterprise is plagued by an odd bug: it encourages experts in science, technology, engineering, and math (STEM) to leave it at the very moment they become recognized as experts. People who pursue advanced degrees in STEM are often compelled by deep interest in research. But upon graduation from master’s, Ph.D., or postdoctoral programs, these research-oriented individuals face a difficult choice: largely cede hands-on involvement in research to pursue faculty positions (which increasingly demand that a majority of time be spent on managerial responsibilities, such as applying for grants); give up the higher pay and prestige of the tenure track in order to continue “doing the science” via lower-status staff positions (e.g., lab manager, research software engineer); or leave the academic sector altogether. 

Many choose the last option. And when that happens at scale, it harms the broader U.S. scientific enterprise by (i) decreasing federal returns on investment in training STEM researchers, and (ii) slowing scientific progress by creating a dearth of experienced personnel conducting basic research in university labs and mentoring the next generation of researchers. The solution is to strengthen and elevate the role of the career research scientist—the highly trained senior research-group member who is hands-on and in the lab every day—in the university ecosystem. This is, fundamentally, a fairly straightforward workforce-pipeline issue that federal STEM-funding agencies have the power to address. The National Institutes of Health (NIH) and the National Science Foundation (NSF)—two of the largest sources of academic research funding—could begin by hosting high-level discussions around the problem: specifically, through an NSF-led workshop and an NIH-led task force. In parallel, the two agencies can launch immediately tractable efforts to begin making headway on the problem. NSF, for instance, could increase visibility and funding for research software engineers, while NSF and/or NIH could consider providing grants to support “co-founded” research labs jointly led by an established professor or principal investigator (PI) working alongside an experienced career research scientist.

The collective goal of these activities is to infuse technical expertise into the day-to-day ideation and execution of science (especially basic research), thereby accelerating scientific progress and helping the United States retain world scientific leadership.

Challenge and Opportunity

The scientific status quo in the United States is increasingly diverting STEM experts away from direct research opportunities at universities. STEM graduate students interested in hands-on research have few attractive career opportunities in academia: those working as staff scientists, lab managers, research software engineers, and the like forgo the higher pay and status of the tenure track, while those working as faculty members find themselves encumbered by tasks that are largely unrelated to research. 

Making it difficult for STEM experts to pursue hands-on research in university settings harms the broader U.S. scientific enterprise in two ways. First, the federal government disburses huge amounts of money every year—via fellowship funding, research grants, tuition support, and other avenues—to help train early-career STEM researchers. This expenditure is warranted because, as the Association of American Universities explains, “There is broad consensus that university research is a long-term national investment in the future.” This investment hinges on university contributions to basic research; universities and colleges account for just 13% of overall U.S. research and development (R&D) activity, but nearly half (48%) of basic research. Limited career opportunities for talented STEM researchers to continue “doing the science” in academic settings therefore limit our national returns on investment in these researchers.

Box 1. Productivity benefits of senior researchers in software-driven fields.
Cutting-edge research in nearly all STEM fields increasingly depends on software. Indeed, NSF observes that software is “directly responsible for increased scientific productivity and significant enhancement of researchers’ capabilities.” Problematically, there is minimal support within academia for development and ongoing maintenance of software. It is all too common for a promising research project at a university lab to wither when the graduate student who wrote the code upon which the project depends finishes their degree and leaves.

The field of deep learning (a branch of artificial intelligence (AI) and machine learning) underscores the value of research software. Progress in deep learning was slow and stuttering until the development of user-friendly software tools in the mid-2010s: a development spurred mostly by private-sector investment. The result has been an explosion of productivity in deep learning. Even now, top AI research teams cite software-engineering talent as a critical input upon which their scientific output depends. But while research software engineers are some of the most in-demand and valuable team members in the private sector, career positions for research software engineers are uncommon at academic institutions. How much potential scientific discovery are U.S. university labs failing to realize as a result of this underinvestment?

Second, attrition of STEM talent from academia slows the pace of U.S. scientific progress because most hands-on research activities are conducted by graduate students rather than more experienced personnel. Yet senior researchers are far more scientifically productive. With years of experience under their belt, senior researchers possess tacit knowledge of how to effectively get research done in a field, can help a team avoid repeating mistakes, and can provide the technical mentorship needed for graduate students to acquire research skills quickly and well. And with graduate students and postdocs typically remaining with a research group for only a few years, career research scientists also provide important continuity across projects. The productivity boosts that senior researchers can deliver are especially well established for software-driven fields (see Box 1).

The absence of attractive job opportunities for career research scientists at most academic institutions is an anomaly. Such opportunities are prevalent in the private sector, at national labs (e.g., those run by the NIH and the Department of Energy) and other government institutions, and in select well-endowed university labs that enjoy more discretionary spending ability. As the dominant funder of university research in the United States, the federal government has massive leverage over the structure of research labs. With some small changes in grant-funding incentives, federal agencies can address this anomaly and bring more senior research scientists into the academic research system. 

Plan of Action

Federal STEM-funding agencies — led by NSF and NIH, as the two largest sources of federal funding for academic research — should explore and pursue strategies for changing grant-funding incentives in ways that strengthen and elevate the role of the career research scientist in academia. We split our recommendations into two parts. 

The first part focuses on encouraging discussion. The problem of limited career options for trained STEM professionals who want to engage in hands-on research in the academic sector currently flies beneath the radar of many extremely knowledgeable stakeholders inside and outside of the federal government. Bringing these stakeholders together could surface actionable suggestions on how to retain talented research scientists in academia. The second part proposes two specific projects to make headway on the problem: (i) further support for research software engineers and (ii) a pilot program supporting co-founded research labs. While the recommendations below are targeted at NSF and NIH, other federal STEM-funding agencies (e.g., the Departments of Energy and Defense) can and should consider similar actions. 

Part 1. Identify needs, opportunities, and options for federal actions to support and incentivize career research scientists.

Shifting academic employment towards a model more welcoming to career research scientists will require a mix of specific new programs and small and large changes to existing funding structures. However, it is not yet clear which reforms should be prioritized. Our first set of suggestions is designed to start the necessary discussion.

Specifically, NSF should start by convening key community members at a workshop (modeled on previous NSF-sponsored workshops, such as the workshop on a National Network of Research Institutes [NNRI]) focused on how the agency can encourage creation of additional career research scientist positions at universities. The workshop should also (i) discuss strategies for publicizing and encouraging outstanding STEM talent to pursue such positions, (ii) identify barriers that discourage universities from creating positions for career research scientists, and (iii) brainstorm solutions to these barriers. Workshop participants should include representatives from federal agencies that sponsor national labs as well as industry sectors (software, biotech, etc.) that conduct extensive R&D, as these entities are more experienced employers of career research scientists. The workshop should address the following questions:

The primary audience for the workshop will be NSF leadership and policymakers. The output of the workshop should be a report suggesting a clear, actionable path forward for those stakeholders to pursue.

NIH should pursue an analogous fact-finding effort, possibly structured as a working group of the Advisory Committee to the Director. This working group would identify strategies for incentivizing labs to hire professional staff members, including expert lab technicians, professional biostatisticians, and research software engineers (RSEs), and would address questions similar to those explored in the NSF workshop. It would ultimately recommend to the NIH Director actions that the agency can take to expand the roles of career research scientists in the academic sector.

Part 2. Launch two pilot projects to begin expanding opportunities for career research scientists.

Pilot 1. Create a new NSF initiative to solicit and fund requests for research software engineer (RSE) support. 

Research software engineers (RSEs) build and maintain research software, and train scientists to use that software. Incentivizing the creation of long-term RSE positions at universities will increase scientific productivity and build the infrastructure for sustained scientific progress in the academic sector. Though a wide range of STEM disciplines could benefit from RSE involvement, NSF’s Computer and Information Science and Engineering (CISE) Directorate is a good place to start expanding support for RSEs in academic projects. 

CISE has previously invested in nascent support structures for professional staff in software and computing fields. The CISE Research Initiation Initiative (CRII) was created to build research independence among early-career researchers working in CISE-related fields by funding graduate-student appointments. Much CRII-funded work involves producing — and in turn, depends on — shared community software. Similarly, the Campus Research Computing Consortium (CaRCC) and RCD Nexus are NSF-supported programs focused on creating guidelines and resources for campus research computing operations and infrastructure. Through these two programs, NSF is helping universities build a foundation of (i) software production and (ii) computing hardware and infrastructure needed to support that software. 

However, effective RSEs are crucial for progress in scientific fields outside of CISE’s domain. For example, one of this memo’s authors has personal experience with NSF-funded geosciences research. PIs working in this field are desperate for funding to hire RSEs, but do not have access to funding for that purpose. Instead, they depend almost entirely on graduate students.

As a component of the workshop recommended above, NSF should highlight other research areas hamstrung by an acute need for RSEs. In addition, CISE should create a follow-on CISE Software Infrastructure Initiative (CSII) that solicits and funds requests from pre-tenure academic researchers in a variety of fields for RSE support. Requests should explain how the requested RSE would (i) catalyze cutting-edge research and (ii) maintain critical community open-source scientific software. Because academia severely lacks strong mentorship in software engineering, a specific goal of CSII funding should be to support at least a 1:3 ratio of RSEs to graduate students in funded labs. Creative evaluation mechanisms will be needed to assess the success of CSII, but the ultimate goal is clear: a community of university researchers productively using software created and supported by RSEs hired through CSII funding. 

Pilot 2. Provide grants to support “co-founded” research labs jointly led by an established professor or principal investigator (PI) working alongside an experienced career research scientist.

Academic PIs (typically faculty) normally lead their labs and research groups alone. This arrangement contributes to high rates of burnout, which in turn can undermine research success. In other domains, starting an ambitious new project or company with a co-founder makes the endeavor more likely to succeed while being less stressful and isolating, because a co-founder can provide a complementary set of skills. For example, the startup incubator Y Combinator is well known for wanting founding teams to pair a visionary CEO and manager with a builder-and-designer CTO. By contrast, academic PIs are expected to excel at every aspect of running a modern scientific lab. Developing mechanisms that help scientists come together and benefit from complementary skill sets should be a high priority for science-funding agencies.

We recommend that NSF and/or NIH create a pilot grant program to fund co-founded research labs at universities. Formally co-founded research groups have been successful across scientific domains (e.g., the AbuGoot Lab at MIT and the Carpenter-Singh Lab at the Broad Institute), but remain quite rare. Federal grants for co-founded research labs would build on this proof of concept by competitively awarding 5–7 years of salary and equipment funding to support a lab jointly run by an early-career PI and a career research scientist. A key anticipated benefit of this grant program is increased retention of outstanding researchers in positions that enable them to keep “doing the science.” Currently, the most talented STEM researchers become faculty members or leave academia altogether, because career research scientist positions simply cannot offer competitive levels of compensation and prestige. Creating a new, high-profile, grant-funded opportunity for STEM talent to remain in hands-on university lab positions could help shift the status quo. Creating a pathway for co-founded and co-led research labs would also help PIs avoid isolation and burnout while building more robust, healthy, and successful research teams.

Conclusion

Many scientific breakthroughs have required massive funding and national coordination. This is not one of them. All that is needed is to let expert research scientists do the hands-on work that they’ve been trained to do. The scientific status quo prevents our nation’s basic research enterprise from achieving its full potential, and from harnessing that potential for the common good. Strengthening and elevating the role of career research scientists in the academic sector will empower existing STEM talent to drive scientific progress forward.

Frequently Asked Questions
Are there places where research scientists are common?

Yes. The tech sector is a good example. Multiple tech companies have developed senior individual contributor (IC) career paths. These IC career paths allow people to grow their influence while remaining mostly in a hands-on technical role. The most common role of a senior software engineering IC is that of the “tech lead”, guiding the technical decision making and execution of a team. Other paths might involve prototyping and architecting a critical new system or diving in and solving an emergency problem. For more details on this kind of career, look at the Staff Engineer book and accompanying discussion.

Why is now the time for federal STEM-funding agencies to increase support for career research scientists?

The United States has long been the international leader in scientific progress, but that position is being threatened as more countries develop the human capital and infrastructure to compete in a knowledge-oriented economy. In an era where humankind faces mounting existential risks requiring scientific innovation, maintaining U.S. scientific leadership is more important than ever. This requires retaining high-level scientific talent in hands-on, basic research activities. But that goal is undermined by the current structure of employment in American academic science.

Which other federal agencies fund scientific research, and could consider actions similar to those proposed in this memo for NSF and NIH?

Key federal STEM-funding agencies that could also consider ways to support and elevate career research scientist positions include the Departments of Agriculture, Defense, and Energy, as well as the National Aeronautics and Space Administration (NASA).

Regulating Use of Mobile Sentry Devices by U.S. Customs and Border Protection

Summary

Robotic and automated systems have the potential to remove humans from dangerous situations, but their current intended use as aids or replacements for human officers conducting border patrols raises ethical concerns if not regulated to ensure that this use “promot[es] the safety of the officer/agent and the public” (emphasis added). U.S. Customs and Border Protection (CBP) should update its use-of-force policy to cover the use of robotic and other autonomous systems for CBP-specific applications that differ from the military applications assumed in existing regulations. The most relevant existing regulation, Department of Defense Directive 3000.09, governs how semi-autonomous weapons may be used to engage with enemy combatants in the context of war. This use case is quite different from mobile sentry duty, which may include interactions with civilians (whether U.S. citizens or migrants). With robotic and automated systems about to come into regular use at CBP, the agency should proactively issue regulations to forestall adverse effects—specifically, by only permitting use of these systems in ways that presume all encountered humans to be non-combatants. 

Challenge and Opportunity

CBP is currently developing mobile sentry devices as a new technology to force-multiply its presence at the border. Mobile sentry devices, such as legged and flying robots, have the potential to reduce deaths at the border by making it easier to locate and provide aid to migrants in distress. According to an American Civil Liberties Union (ACLU) report, 22% of migrant deaths between 2010 and 2021 that involved an on-duty CBP agent or officer were caused by medical distress that began before the agent or officer arrived on the scene. However, the eventual use cases, rules of engagement, and functionalities of these robots are unclear. If not properly regulated, mobile sentry devices could also be used to harm or threaten people at the border—thereby contributing to the 44% of deaths that occurred as a direct result of vehicular or foot pursuit by a CBP agent. Regulations on mobile sentry device use—rather than merely acquisition—are needed because even originally unarmed devices can be weaponized after purchase, and even devices that remain unarmed can injure people with a limb or propeller. 

Existing Department of Homeland Security (DHS) regulations governing autonomous systems seek to minimize technological bias in artificially intelligent risk-assessment systems. Existing military regulations seek to minimize the risks of misused or misunderstood capabilities in autonomous systems. However, no existing federal regulations govern how uncrewed vehicles, whether remotely controlled or autonomous, may be used by CBP. The answer is not as simple as extending military regulations to CBP: military regulations governing autonomous systems assume that the robots in question are armed and interacting with enemy combatants, an assumption that does not apply to most, if not all, possible CBP use cases.

With CBP already testing robotic dogs for deployment on the southwestern border, the need for tailored regulation is pressing. Recent backlash over the New York Police Department’s testing of similar autonomous systems makes this topic even more timely. While the robots used by CBP are currently unarmed, the same company that developed the robots being tested by CBP is working with another company to mount weapons on them. The rapid innovation and manufacturing of these systems requires that policies governing their use be implemented before CBP has fully incorporated such systems into its workflows, and before the companies that build these systems have formed a lobby powerful enough to resist appropriate oversight. 

Plan of Action

CBP should immediately update its Use of Force policy to include restrictions on use of force by mobile sentry devices. Specifically, CBP should add a chapter to the policy with the following language:

These regulations should go into effect before mobile sentry devices are moved from the testing phase to the deployment phase. Related new technology, whether it increases capabilities for surveillance or autonomous mobility, should undergo review by a committee that includes representatives from the National Use of Force Review Board, migrant rights groups, and citizens living along the border. This review should mirror the process laid out in the Community Control over Police Surveillance project, which has already been successfully implemented in multiple cities.

Conclusion

U.S. Customs and Border Protection (CBP) is developing an application for legged robots as mobile sentry devices at the southwest border. However, the use cases, functionality, and rules of engagement for these robots remain unclear. New regulations are needed to forestall adverse effects of autonomous robots used by the federal government for non-military applications, such as those envisioned by CBP. These regulations should specify that mobile sentry devices can only be used as humanitarian aids, and must use de-escalation methods to indicate that they are not threatening. Regulations should further mandate that mobile sentry devices maintain clear distance from human targets, that use of force by mobile sentry devices is never considered “reasonable,” and that mobile sentry devices may never be used to pursue, detain, or arrest humans. Such regulations will help ensure that the legged robots currently being tested as mobile sentry devices by CBP—as well as any future mobile sentry devices—are used ethically and in line with CBP’s goals, alleviating concerns for migrant advocates and citizens along the border.

Frequently Asked Questions
What is the purpose of regulating CBP use of autonomous robots as mobile sentry devices rather than purchasing of autonomous robots?

Regulations on purchasing are not sufficient to prevent mobile sentry device technology from being weaponized after it is purchased. However, DHS could certainly also consider updating its acquisition regulations to include clauses resulting in fines when mobile sentry devices acquired by the CBP are not used for humanitarian purposes.

Why is Department of Defense (DOD) Directive 3000.09 not sufficient to regulate the use of force by all government agencies?

DOD Directive 3000.09 regulates the use of autonomous weapons systems in the context of war. For an autonomous, semi-autonomous, or remotely controlled system deployed as a weapon on an active battlefield, this regulation makes sense. But the applications of robotic and automated systems currently being developed by DHS are oriented towards mobile sentry duty along stretches of American land where civilians are likely to be found. This sentry duty is likely to be performed by uncrewed ground robots following GPS breadcrumb trails on predetermined, regular patrols along the border. Under Directive 3000.09, the use of a robot to kill or harm a person during a routine patrol along the border would not be a violation as long as a human had “meaningful control” over the robot at the time. The upshot is that mobile sentry devices used by CBP should be subject to stricter regulations than the Directive provides.

What standards do robotics companies have on the use of their technologies?

Most companies selling legged robots in the United States have explicit end-user policies prohibiting the use of their machines to harm or intimidate humans or animals. Some companies selling quadcopter drones have similar policies. But these policies lack any enforcement mechanism. As such, there is a regulatory gap that the federal government must fill.

Is updating its Use of Force policy the only way for CBP to regulate its use of mobile sentry devices?

No, but it is an immediately actionable strategy. An alternative—albeit more time-consuming—option would be for CBP to form a committee comprising representatives from the National Use of Force Review Board, the military, migrant-rights activist groups, and ethics experts to develop a directive for CBP’s use of mobile sentry devices. This directive should be modeled after DOD Directive 3000.09, which regulates the use of lethal autonomous weapons systems by the military. Because the autonomous systems covered by DOD Directive 3000.09 are assumed to be interacting with enemy combatants, while CBP’s jurisdiction consists mostly of civilians, the CBP directive should be considerably more stringent than Directive 3000.09.

Would the policies proposed in this memo vary with the degree of autonomy possessed by the robot in question?

The policies proposed in this memo govern what mobile sentry devices are and are not permitted to do, regardless of the extent to which humans are involved in device operation and/or the degree of autonomy possessed by the technology in question. The policies proposed in this memo could therefore be applied consistently as the technology continues to be developed. AI is always changing and improving, and by creating policies that are tech-agnostic, CBP can avoid updating regulations as mobile sentry device technology evolves.

CLimate Improvements through Modern Biotechnology (CLIMB) — A National Center for Bioengineering Solutions to Climate Change and Environmental Challenges

Summary

Tackling pressing environmental challenges — such as climate change, biodiversity loss, environmental toxins and pollution — requires bold, novel approaches that can act at the scale and expediency needed to stop irreversible damage. Environmental biotechnology can provide viable and effective solutions. The America COMPETES Act, if passed, would establish a National Engineering Biology Research and Development Initiative. To lead the way in innovative environmental protection, a center should be created within this initiative that focuses on applying biotechnology and bioengineering to environmental challenges. The CLimate Improvements through Modern Biotechnology (CLIMB) Center will fast-track our nation’s ability to meet domestic and international decarbonization goals, remediate contaminated habitats, detect toxins and pathogens, and deliver on environmental-justice goals. 

The CLIMB Center would (i) provide competitive grant funding across three key tracks — bioremediation, biomonitoring, and carbon capture — to catalyze comprehensive environmental-biotechnology research; (ii) house a bioethics council to develop and update guidelines for safe, equitable environmental-biotechnology use; (iii) manage testbeds to efficiently prototype environmental-biotechnology solutions; and (iv) facilitate public-private partnerships to help transition solutions from prototype to commercial scale. Investing in the development of environmental biotechnology through the CLIMB Center will advance U.S. leadership in biotechnology and environmental stewardship, while helping the Biden-Harris Administration deliver on its climate and environmental-justice goals. 

Challenge and Opportunity

The rapidly advancing field of biotechnology has considerable potential to aid the fight against climate change and other pressing environmental challenges. Fast and inexpensive genetic sequencing of bacterial populations, for instance, allows researchers to identify genes that enable microorganisms to degrade pollutants and synthesize toxins. Existing tools like CRISPR, as well as up-and-coming techniques such as retron-library recombineering, allow researchers to design microorganisms that break down pollutants more efficiently or capture more carbon. Biotechnology as a sector has been growing rapidly over the past two decades, with the global market estimated to be worth nearly $3.5 trillion by 2030. These and numerous other biotechnological advances are already transforming sectors like medicine (which comprises nearly 50% of the biotechnology sector), but have to date been underutilized in the fight for a more sustainable world. 

One reason why biotechnology and bioengineering approaches have not been widely applied to advance climate and environmental goals is that returns on investment are too uncertain, too delayed, or too small to motivate private capital — even if solving pressing environmental issues through biotechnology would deliver massive societal benefits. The federal government can act to address this market failure by creating a designated environmental-biotechnology research center as part of the National Engineering Biology Research and Development Initiative (America COMPETES act, Sec. 10403). Doing so will help the Biden-Harris Administration achieve its ambitious targets for climate action and environmental justice.

Plan of Action

The America COMPETES Act would establish a National Engineering Biology Research and Development Initiative “to establish new research directions and technology goals, improve interagency coordination and planning processes, drive technology transfer to the private sector, and help ensure optimal returns on the Federal investment.” The Initiative is set to be funded through agency contributions and White House Office of Science and Technology Policy (OSTP) budget requests. The America COMPETES Act also calls for creation of undesignated research centers within the Initiative. We propose creating such a center focused on environmental-biotechnology research: the CLimate Improvements through Modern Biotechnology (CLIMB) Center. The Center would be housed under the new National Science Foundation (NSF) Directorate for Technology, Innovation and Partnerships and co-led by the NSF Directorate for Biological Sciences. The Center would take a multipronged approach to support biotechnological and bioengineering solutions to environmental and climate challenges and to enable rapid technology deployment. 

We propose the Center be funded with an initial commitment of $60 million, with continuing funds of $300 million over five years. The main contributing federal agency research offices would be determined by OSTP but should at minimum include NSF; the Departments of Agriculture, Defense, and Energy (USDA, DOD, and DOE); the Environmental Protection Agency (EPA); the National Oceanic and Atmospheric Administration (NOAA); and the U.S. Geological Survey (USGS).  

Specifically, the CLIMB Center would: 

  1. Provide competitive grant funding across three key tracks — bioremediation, biomonitoring, and carbon capture — to catalyze comprehensive environmental-biotechnology research.
  2. House a bioethics council to develop and update guidelines for safe, equitable environmental-biotechnology use.
  3. Manage testbeds to efficiently prototype environmental-biotechnology solutions. 
  4. Facilitate public-private partnerships to help transition solutions from prototype to commercial scale.

More detail on each of these components is provided below.

Component 1: Provide competitive grant funding across key tracks to catalyze comprehensive environmental biotechnology research.

The CLIMB Center will competitively fund research proposals related to (i) bioremediation, (ii) biomonitoring, and (iii) carbon capture. These three research tracks were chosen to span approaches to environmental problems, from prevention and monitoring to large-scale remediation. Within these tracks, the Center’s research portfolio will span the entire technology-development pathway, from early-stage research to market-ready applications.

Track 1: Bioremediation

Environmental pollutants are detrimental to ecosystems and human health. While the Biden-Harris Administration has taken strides to prevent the release of pollutants such as per- and polyfluoroalkyl substances (PFAS), many pollutants that have already been released into the environment persist for years or even decades. Bioremediation is the use of biological processes to degrade contaminants within the environment. It is either done within a contaminated site (in-situ bioremediation) or away from it (ex-situ). Traditional in-situ bioremediation is primarily accomplished by bioaugmentation (addition of pollutant-degrading microbes) or by biostimulation (supplying oxygen or nutrients to stimulate the growth of pollutant-degrading microbes that are already present). While these approaches work, they are costly, time-consuming, and cannot be done at large spatial scales. 

Environmental biotechnology can enhance the ability of microbes to degrade contaminants quickly and at scale. Environmental-biotechnology approaches have produced bacteria that are better able to break down toxic chemicals, decompose plastic waste, and process wastewater. But the potential of environmental biotechnology to improve bioremediation is still largely untapped, as the enabling technologies and the regulatory regimes needed for widespread use are still being developed. CLIMB Center research grants could support the early discovery phase to identify more gene targets for bioremediation, as well as efforts to test more developed bioremediation technologies for scalability.

Track 2: Biomonitoring

Optimizing responses to environmental challenges requires collecting data on pollutant levels, toxin prevalence, the spread of invasive species, and much more. Conventional approaches to environmental monitoring (like mass spectrometry or DNA amplification) require specialized equipment, are low-throughput, and need highly trained personnel. In contrast, biosensors—devices that use biological molecules to detect compounds of interest—provide rapid, cost-effective, and user-friendly alternatives. Because of these characteristics, biosensors enable users to sample more frequently and across larger spatial scales, resulting in more accurate datasets and enhancing our ability to respond. Detection of DNA or RNA is key for identifying pathogens, invasive species, and toxin-producing organisms. Standard DNA- and RNA-detection techniques like polymerase chain reaction (PCR) require specialized equipment and are slow. By contrast, biosensors can detect minuscule amounts of DNA and RNA in minutes (rather than hours) and without the need for DNA/RNA amplification. SHERLOCK and DETECTR are two examples of highly successful, marketed tools used for diagnostic applications such as detecting SARS-CoV-2 and for ecological purposes such as distinguishing invasive fish species from similar-looking native species. Moving forward, these technologies could be repurposed for other environmental applications, such as monitoring for algal toxins in water used for drinking, recreation, agriculture, or aquaculture. Furthermore, while existing biosensors can detect DNA and RNA, detecting compounds like pesticides, DNA-damaging compounds, and heavy metals requires a different class of biosensor. CLIMB Center research grants could support development of new biosensors as well as modification of existing biomonitoring tools for new applications.  

Track 3: Carbon capture

Rising atmospheric levels of greenhouse gases like carbon dioxide are driving irreversible climate change. The problem has become so bad that it is no longer sufficient to merely reduce future emissions—limiting average global warming below 2°C by 2100 will require achieving negative emissions through capture and removal of atmospheric carbon. A number of carbon-capture approaches are currently being developed. These range from engineered approaches such as direct air capture, chemical weathering, and geologic sequestration to biological approaches such as reforestation, soil amendment, algal cultivation, and ocean fertilization.  

Environmental-biotechnology approaches such as synthetic biology (“designed biology”) can vastly increase the amount of carbon captured by natural processes. For instance, plants and crops can be engineered to produce larger root biomass that sequesters more carbon into the soil, or to store more carbon in harder-to-break-down molecules such as lignin, suberin, or sporopollenin instead of more easily metabolized sugars and cellulose. Alternatively, carbon-capture efficiency can be improved by modifying enzymes in the photosynthetic pathway or by limiting photorespiration through synthetic biology. Microalgae in particular hold great promise for enhanced carbon capture: they can be bioengineered not only to capture more carbon but also to produce a greater density of lipids that can be used for biofuel. The potential for synthetic biology and other environmental-biotechnology approaches to enhance carbon capture is vast, largely unexplored, and certainly undercommercialized. CLIMB Center research grants could quickly propel such approaches forward. 

Component 2: House a bioethics council to develop and update guidelines for safe, equitable environmental-biotechnology use.

The ethical, ecological, and social implications of environmental biotechnology must be carefully considered and proactively addressed to avoid unintended damage and to ensure that benefits are distributed equitably. As such, the CLIMB Center should assemble a bioethics council comprising representatives from:

The bioethics council will identify key ethical and equity issues surrounding emerging environmental biotechnologies. The council will then develop guidelines to ensure transparency of research to the public, engagement of key stakeholders, and safe and equitable technology deployment. These guidelines will ensure that there is a framework for the use of field-ready environmental-biotechnology devices, and that risk assessment is built consistently into regulatory-approval processes. The council’s findings and guidelines will be reported to the National Engineering Biology Research and Development Initiative’s interagency governance committee, which will work with federal and state regulatory agencies to incorporate the guidance and streamline regulation and oversight of environmental-biotechnology products. 

Component 3. Manage testbeds to efficiently prototype environmental-biotechnology solutions. 

The “valley of death” separating early-stage research and prototyping from commercialization is a well-known bottleneck hampering innovation. This bottleneck could certainly inhibit innovation in environmental biotechnology, given that environmental-biotechnology tools are often intended for use in complex natural environments that are difficult to replicate in a lab. The CLIMB Center should serve as a centralized node connecting researchers with testing facilities and test sites where environmental biotechnologies can be properly validated and risk-assessed. Numerous federal facilities could be leveraged as environmental-biotechnology testbeds, including: 

The CLIMB Center could also work with industry, state, and local partners to establish other environmental-biotechnology testbeds. Access to these testbeds could be provided to researchers and technology developers as follow-on opportunities to CLIMB Center research grants and/or through stand-alone testing programs managed by the CLIMB Center. 

Component 4: Facilitate public-private partnerships to help transition solutions from prototype to commercial scale.

Public-private partnerships have been highly successful in advancing biotechnology for medicine. Operation Warp Speed, to cite one recent and salient example, enabled research, development, testing, and distribution of vaccines against SARS-CoV-2 at unprecedented speed. Public-private partnerships could play a similarly key role in advancing the efficient deployment of market-ready environmental-biotechnology devices. To this end, the CLIMB Center can reduce barriers to negotiating partnerships between environmental engineers and biotechnology manufacturers. For example, the CLIMB Center can develop templates for Memoranda of Understanding (MOUs) and Collaborative Research Agreements (CRAs) to facilitate the initial establishment of partnerships, as well as help connect interested parties. The CLIMB Center could also facilitate access for both smaller companies and researchers to existing government infrastructure needed to deploy these technologies. For example, an established public-private partnership team could have access to government-managed gene and protein libraries, microbial strain collections, sequencing platforms, computing power, and other specialized equipment. The Center could further negotiate with companies to identify resources (equipment, safety data, and access to employee experts) they are willing to provide. Finally, the Center could identify and fast-track opportunities where the federal government would be uniquely suited to serve as an end user of biotechnology products. For instance, in the bioremediation space, the EPA, whose purview includes management and cleanup of Superfund sites, would benefit immensely from novel, safe, and effective tools to quickly address pollution and restore habitats.

Conclusion

Environmental and climate challenges are some of the most pressing problems facing society today. Fortunately, advances in biotechnology that enable manipulation, acceleration, and improvement of natural processes offer powerful tools to tackle these challenges. The federal government can accelerate capabilities and applications of environmental biotechnology by establishing the CLimate Improvements through Modern Biotechnology (CLIMB) Center. This center, established as part of the National Engineering Biology Research and Development Initiative, will be dedicated to advancing research, development, and commercialization of environmental biotechnology. CLIMB Center research grants will focus on advances in bioremediation, biomonitoring, and biologically assisted carbon capture, while other CLIMB Center activities will scale and commercialize emerging environmental biotechnologies safely, responsibly, and equitably. Overall, the CLIMB Center will further solidify U.S. leadership in biotechnology while helping the Biden-Harris Administration meet its ambitious climate, energy, and environmental-justice goals. 

Frequently Asked Questions
Why should the federal government take the lead in environmental biotechnology solutions?

Environmental biotechnology can help address wide-reaching, interdisciplinary issues with huge benefits for society. Many of the applications for environmental biotechnology are within realms where the federal government is an interested or responsible party. For instance, bioremediation largely falls within governmental purview. Creating regulatory guidelines in parallel with the development of these new technologies will enable an expedited rollout. Furthermore, environmental-biotechnology approaches are still novel, and using them on a wide scale in our natural environments will require careful handling, testing, and regulation to prevent unintended harm. Here again, the federal government can play a key role in validating and testing technologies before they are approved for wide-scale use.


Finally, the largest benefits from environmental biotechnology will be societal. The development of such technology should hence be driven largely by its potential to improve environmental quality and address environmental injustices, even when doing so is not profitable. As such, federal investments are better suited than private investments to help develop and scale these technologies, especially during early stages when returns are too small, too uncertain, and too future-oriented.

How do we mitigate security risks of bioengineered products?

Bioengineered products already exist and are in use, and bioengineering innovations and technology will continue to grow over the next century. Rather than forgoing these tools and lagging behind other nations that will continue to develop them, it is better to build a robust regulatory framework that addresses the critical ethical and safety concerns surrounding their use. Importantly, each bioengineered product will present its own set of risks and challenges. For instance, a bacterial species that has been genetically engineered to metabolize a toxin is very different from an enzyme or DNA probe that could be used as a biosensor. The bacteria are living, can reproduce, and can impact other organisms around them, especially when released into the environment. In contrast, the biosensor probe would contain biological parts (not a living organism) and would only exist in a device. It is thus critical to ensure that every biotechnology product, with its unique characteristics, is properly tested, validated, and designed to minimize its environmental impact and maximize societal benefits. The CLIMB Center will greatly enhance the safety of environmental-biotechnology products by facilitating access to testbeds and the scientific infrastructure necessary to quantify these risk-benefit trade-offs.

How would the CLIMB Center address the Biden-Harris Administration’s goals for environmental justice?

The Biden-Harris Administration has recognized the vast disparities in environmental quality and exposure to contaminants that exist across communities in the United States. Communities of color are more likely to be exposed to environmental hazards and bear the burden of climate change-related events. For example, the closer a community is to a Superfund site—a site deemed contaminated enough to warrant federal oversight—the higher its proportion of Black families and the lower its proportion of White families. To address these disparities, the Administration issued Executive Order 14008 to advance environmental justice efforts. Through this order, President Biden created an Environmental Justice Advisory Council and launched the Justice40 initiative, which mandates that 40% of the benefits from climate investments be delivered to underserved communities. The Justice40 initiative includes priorities such as the “remediation and reduction of legacy pollution, and the development of critical clean water infrastructure.” The Executive Order also calls for the creation of a “community notification program to monitor and provide real-time data to the public on current environmental pollution…in frontline and fenceline communities — places with the most significant exposure to such pollution.” Environmental biotechnology offers an incredible opportunity to advance these goals by enhancing water treatment and bioremediation and enabling rapid and efficient monitoring of environmental contaminants.

How would the CLIMB Center address the Biden-Harris Administration’s goals for climate change?

President Biden has set targets for a 50–52% reduction (relative to 2005 levels) in net greenhouse-gas pollution by the year 2030, and has directed federal government operations to reach 100% carbon-pollution-free electricity by 2030 (Executive Order 14057). It is well established that meeting such climate goals and limiting global warming to less than 2°C will require negative emissions technologies (carbon capture) in addition to reducing the amount of emissions created by energy and other sectors. Carbon-capture technologies will need to be widely available, cost-effective, and scalable. Environmental biotechnology can help address these needs by enhancing our capacity for biological carbon capture through the use of organisms such as microalgae and macroalgae, which can even serve the dual role of producing biofuels, feedstock, and other products in a carbon-neutral or carbon-negative way. The CLIMB Center can establish the United States as the global leader in advancing both biotechnology and the many untapped environmental and climate solutions it can offer.

What are the current federal funding mechanisms available for the research and development of bioengineered environmental solutions?

There are multiple avenues for funding foundational research and development in bioengineering. Federal agencies and offices that currently fund bioengineering with an environmental focus include (but are not necessarily limited to):

  • DOE’s Office of Science research programs, ARPA-E, and Bioenergy Technologies Office

  • EPA’s Office of Research and Development, Science to Achieve Results (STAR) Program

  • National Science Foundation’s Biological Sciences and Engineering Directorates

  • USDA’s National Institute of Food and Agriculture, Biotechnology Risk Assessment Research Grants Program

  • NOAA’s Office of Ocean Exploration and Research

  • NASA’s Space Technology Mission Directorate

  • The National Institutes of Health’s National Institute of Environmental Health Sciences and National Institute of Biomedical Imaging and Bioengineering

  • DOD’s Defense Advanced Research Projects Agency (DARPA) Biological Technologies Office


Research funding provided by these offices often carries a biomedical focus. The research and development funding provided by the CLIMB Center would seek to build upon these efforts and help coordinate directed research toward environmental-biotechnology applications.

How could biosensors inform management and policy decisions?

Compared to conventional analytical techniques, biosensors are fast, cost-effective, easy to use, and largely portable. However, biosensors are not always poised to replace conventional techniques. In many cases, regulatory bodies have approved analytical techniques that can be used for compliance. Novel biosensors are rarely included in the suite of approved techniques, even though biosensors can complement conventional techniques—for example, by allowing regulators to rapidly screen more samples to prioritize which require further processing using approved conventional methods. Moreover, conventional methods provide only snapshot measurements, potentially missing critical time periods during which toxins, contaminants, or pathogens go unnoticed. Biosensors, on the other hand, could be used to continuously monitor a given area. For example, algae can accumulate (bloom) and produce potent toxins that accumulate in seafood. To protect human health, seafood is tested using analytical chemical approaches (direct measurement of toxins) or biological assays (health monitoring in exposed laboratory animals). This requires regulators to decide when it is best to sample. However, if a biosensor were deployed in a monitoring array out in the ocean or made available to people who collect the seafood, it could serve as an early-detection system for the presence of these toxins. This application will become especially important moving forward, since climate change has altered the geographic distribution and seasonality of these algal blooms, making it harder to forecast when it is best to measure seawater and seafood for these toxins.

How do we ensure that benefits from environmental biotechnologies extend equitably to historically excluded populations?

Communities of color are more likely to live near Superfund sites, be disproportionately exposed to pollutants, and bear the heaviest burdens from the effects of climate change. These communities have also been disproportionately affected by unethical environmental and medical-research practices. It is imperative that novel tools designed to improve environmental outcomes benefit these communities and do not cause unintended harm. Guidelines established by the CLIMB Center’s bioethics council coupled with evaluation of environmental biotechnologies in realistic testbeds will help ensure that this is the case.