Not Accessible: Federal Policies Unnecessarily Complicate Funding to Support Differently Abled Researchers. We Can Change That.
Persons with disabilities (PWDs) are considered the largest minority in the nation and in the world. There are existing policies and procedures from agencies, directorates, or funding programs that provide support for Accessibility and Accommodations (A&A) in federally funded research efforts. Unfortunately, these policies and procedures all have different requirements, processes, deadlines, and restrictions. This lack of standardization can make it difficult to acquire the necessary support for PWDs by placing the onus on them or their Principal Investigators (PIs) to navigate complex and unique application processes for the same types of support.
This memo proposes the development of a standardized, streamlined, rolling, post-award support mechanism to provide access and accommodations for PWDs as they conduct research and disseminate their work through conferences and convenings. The best-case scenario is one wherein a PI or their institution can simply submit the identifying information for the award that has been made and then make a direct request for the support needed for a given PWD to work on the project. In a multi-year award, such a request should be possible at any time within the award period.
This could be implemented by a single, streamlined policy adopted by all agencies, with the process handled internally, or by a new cross-agency process under the Office of Science and Technology Policy (OSTP) or the Office of Management and Budget (OMB) that handles requests for accessibility and accommodations at federally funded research sites and at federally funded convenings. An alternative to a single streamlined policy across these agencies might be a new section in the uniform guidance for federal funding agencies, also known as 2 CFR 200.
This memo focuses on Federal Open Science funding programs to illustrate the challenges in getting A&A funding requests supported. We have also taken an informal look at agencies outside of science and technology funding and found similar challenges across federal grantmaking in the Arts and Humanities, Social Services, and Foreign Relations and Aid entities. Similar issues likely exist in private philanthropy as well.
Challenge and Opportunity
Deaf/hard-of-hearing (DHH), Blind/low-vision (BLV), and other differently abled academicians, senior personnel, students, and post-doctoral fellows engaged in federally funded research face challenges in acquiring accommodations for accessibility. These include, but are not limited to:
- Human-provided ASL-English interpreting and interview transcription services for DHH and non-DHH participants. While some applications of artificial intelligence (AI) show promise on the transcription side, AI-provided ASL interpretation still has a long way to go compared with human interpreters.
- Visual and Pro-tactile interpreting/descriptive services for the BLV participants
- Adaptive lab equipment and computing peripherals
- Accessibility support or remediation for physical sites
Having these services available is crucial for promoting an inclusive research environment on a larger scale.
Moving to a common, post-award process:
- Allows the PI and the reviewers more time and space to focus on the core research efforts being described in the initial proposal
- Removes any chance of the proposal being taken out of consideration due to its higher costs in comparison to similar proposals in the pool
- Creates a standard, replicable pathway for seeking accommodations once the overall proposal has been funded. This is especially true if the support comes from a single process across all federal funding programs rather than within each agency.
- Allows for flexibility in accommodations. Needs vary from person to person and case to case. For example, in the case of workplace accommodations for DHH team members, one full-time researcher may request full-time ASL interpretation on-site, while another might prefer to work primarily through digital text channels, requiring ASL interpretation only for staff meetings and other group activities.
- Potentially reduces the federal financial and human resources currently expended in supporting such requests by eliminating duplication of effort across agencies or, at minimum, by streamlining processes within agencies.
Such a process might follow the steps below. The example uses the National Science Foundation (NSF), but the same or a similar process could be implemented within any agency:
- PI receives notification of grant award from NSF. PI identifies the need for A&A services at the start of, or at any time during, the grant period
- PI (or SRS staff) submits a request for A&A funding support to NSF. The request includes the NSF program name and award number, the specifics of the requested A&A support, a budget justification, and three vendor quotes (if needed)
- Use of funds is authorized, and funding is released to the PI's institution; acquisition follows the institution's standard purchasing or contracting procedures
- PI submits receipts/paid vendor invoices to the funding body
- PI cites and documents the use of funds in the annual report, or equivalent, to NSF
Current Policies and Practices
Pre-Award Funding
Principal Investigators (PIs) who request A&A support for themselves or for other members of the research team are sometimes required to apply for it in their initial grant proposals. This approach has several flaws.
First and foremost, this funding process reduces the direct application of research dollars for these PIs and their teams compared to other researchers in the same program. Simply put, if two applicants are applying for a $100,000 grant, and one needs to fund $10,000 worth of accommodations, services, and equipment out of the award, that applicant has $10,000 less to pursue the proposed research activities. This essentially creates a "10% A&A tax" on the overall research funding request.
Lived Experience Example
In a real-world example, the author and his colleague, the late Dr. Mel Chua, were awarded a $60,000, one-year grant to conduct a qualitative research case study as part of the Ford Foundation Critical Digital Infrastructure Research cohort. As Dr. Chua was Deaf, the PIs pointed out to Ford that $10,000 worth of support services would be needed to cover costs for:
- American Sign Language (ASL) interpreters during the qualitative interviews and advisory committee meetings
- Transcription of the interviews
- ASL Interpreting for conference dissemination and collection of comments at formal and informal meetings during those conferences
We communicated the fact that spending general research award money on those services would reduce the research work the funds were awarded to support. The Ford Foundation understood and provided an additional $10,000 as post-award funding to cover those services. Ford did not inform the PIs as to whether that support came from another directed set of funds for A&A support or from discretionary dollars within the foundation.
Second, pre-award funding can limit a funded project's ability to work with or hire PWDs as co-PIs, students, or other personnel if they were not already part of the original grant proposal. For example, suppose a research project is initially awarded funding for four years without A&A support, and a promising team member who is a PWD and would require such support appears on the scene in year three. In this case, the PIs must then:
- Reallocate research dollars meant for other uses within the grant to support A&A;
- Find other funding to support those needs within their institution;
- Navigate the varied post-award support landscape, sometimes going so far as to write an entirely new full proposal with a significant review timeline, to try to get support (if this happens off cycle, the funding might not arrive until the last few months of the fourth year); or
- Not hire the person in question because they cannot provide the needed A&A.
Post-Award Funding
Some agencies have programs for post-award supplemental funding that address the challenges described above. While well-intentioned, many are complicated and often have different timelines, requirements, and restrictions. In some cases, a single supplemental funding source may address all aspects of diversity, equity, and inclusion as well as A&A; the needs and costs in the first three categories differ significantly from those in the last. Some post-award pools come from the same agency's annual program-wide allocation, and if those funds have been largely expended on the initial awards for the solicitation, there may be little or no money left to support post-award funding for needed accommodations. The table below briefly illustrates the range of variability across a subset of representative supplemental funding programs; links in the top row of the table provide access to the complete program information. Beyond the programs in this table, more extensive lists of NSF and NIH offerings are provided by those agencies. One example is the NSF Dear Colleague Letter Persons with Disabilities – STEM Engagement and Access.
Ideally these policies and procedures, and others like them, would be replaced by a common, post-award process. PIs or their institutions would simply submit the identifying information on the grant that had been awarded and the needs for Accommodations and Accessibility to support team members with disabilities at any time during the grant period.
Plan of Action
OSTP, possibly through a National Science and Technology Council interagency working group process, should conduct an internal review of the A&A policies and procedures for grant programs at federal agencies that fund scientific research. This could be led by OSTP directly or, under its auspices, by either NSF or the National Institutes of Health (NIH). Participants would include relevant personnel from DOE, DOD, NASA, USDA, EPA, NOAA, NIST, and HHS, at minimum. The goal should be to draft a single, streamlined post-award policy and process for all federal grant programs, or a new section in the uniform guidance for federal funding agencies.
There should be an analysis of the percentage, size, and number of awards currently being made to support A&A in research funding grant programs. It is not clear how the various funding ranges and caps listed in the table above were determined, or whether they meet the needs. One goal of this analysis would be to determine how well current needs within and across agencies are being met and what future needs might be.
A second goal would be to look at the level of duplication of effort and scope of manpower savings that might be attained by moving to a single, streamlined policy. This might be a coordinated process between OMB and OSTP or a separate one done by OMB. No matter how it is coordinated, an understanding of these issues should inform whatever new policies or new additions to 2 CFR 200 would emerge.
A third goal of this evaluation could be to consider if the support for A&A post-award funding might best be served by a single entity across all federal grants, consolidating the personnel expertise and policy and process recommendations in one place. It would be a significant change, and could require an act of Congress to achieve, but from the point of view of the authors it might be the most efficient way to serve grantees who are PWDs.
Once the initial reviews described above, or a similar process, are completed, the next step should be a convening of stakeholders outside the federal government to provide input on the streamlined draft policy. These stakeholder entities could include, but should not be limited to, the National Association of the Deaf, the American Foundation for the Blind, the American Association of People with Disabilities, and the American Diabetes Association. One goal of that convening should be a discussion, and decision, as to whether a period of public comment should also be put in place before the new policy is adopted.
Conclusion
The above plan of action should be pursued so that more PWDs will be able to participate, or have their participation improved, in federally funded research. A policy like the one described above lays the groundwork and provides a more level playing field for Open Science to become more accessible and accommodating. It also opens the door to streamlined processes, reduced duplication of effort, and greater efficiency within the engine of federal science support.
Acknowledgments
The roots of this effort began when the authors, Dr. Mel Chua and Stephen Jacobs, received funding for their research as part of the first Critical Digital Infrastructure research cohort and were able to negotiate for accessibility support services outside their award. Those who provided input on the position paper this memo was based on are:
- Dr. Mel Chua, Independent Researcher
- Dr. Liz Hare, Quantitative Geneticist, Dog Genetics LLC
- Dr. Christopher Kurz, Professor and Director of Mathematics and Science Language and Learning Lab, National Technical Institute for the Deaf
- Luticha Andre-Doucette, Catalyst Consulting
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Based on the percentage of PWDs in the general population, conference funders should assume that some of their presenters or attendees will need accommodations. Funding from federal agencies should be made available to provide an initial minimum level of support for necessary A&A. Event organizers should be able to apply for additional support above the minimum level if needed, provided participant requests are made within a stated time before the event. For example, a stipulated deadline of six weeks before the event to request supplemental accommodations would allow organizers to acquire what is needed within thirty days of the event.
Yes, in several ways. In general, most of the support needed for conferences is service provision rather than hardware/software procurement. However, understanding the breadth and depth of issues surrounding human services support is complex and outside the experience of most PIs running a conference in their own scientific discipline.
Consider again the example of DHH researchers attending a conference. A conference might default to providing a team of two interpreters during conference sessions, as two per hour is the standard. Should a group of DHH researchers attend the conference and wish to go to different sessions or meetings during the same convening, the organizers may not have provided enough interpreters to support those opportunities.
When interpretation is provided for formal sessions only, DHH attendees are excluded from a key piece of these events: conversations outside of scheduled sessions. This applies to both formally planned and spontaneous conversations, which might occur before, during, or after official sessions, during a meal offsite, and so on. Ideally, interpreters would be provided for these as well.
These issues, and others related to other groups of PWDs, are beyond the experience of most PIs who have received event funding.
There are some federal agency guides for addressing interpreting and other concerns, such as the "Guide to Developing a Language Access Plan" from the Centers for Medicare & Medicaid Services (CMS). These are often written to address the needs of full-time employees on site in office settings; they generally cover cases a conference convener does not need and may not address the convener's specific use case. What the average conference chair and their logistics committee may need is a simply stated set of guidelines addressing the short-term needs of their event. Additionally, a directory of where to hire providers with the appropriate skill sets and domain knowledge to meet the needs of PWDs attending their events would be an incredible aid to all concerned.
The policy review process outlined above should include research to determine a base level of A&A support for conferences. Reviewers might recommend a preferred federal guide to these resources or identify an existing one.
Promoting Fairness in Medical Innovation
There is a crisis within healthcare technology research and development, wherein certain groups due to their age, gender, or race and ethnicity are under-researched in preclinical studies, under-represented in clinical trials, misunderstood by clinical practitioners, and harmed by biased medical technology. These issues in turn contribute to costly disparities in healthcare outcomes, leading to losses of $93 billion a year in excess medical-care costs, $42 billion a year in lost productivity, and $175 billion a year due to premature deaths. With the rise of artificial intelligence (AI) in healthcare, there’s a risk of encoding and recreating existing biases at scale.
The next Administration and Congress must act to address bias in medical technology at the development, testing and regulation, and market-deployment and evaluation phases. This will require coordinated effort across multiple agencies. In the development phase, science funding agencies should enforce mandatory subgroup analysis for diverse populations, expand funding for under-resourced research areas, and deploy targeted market-shaping mechanisms to incentivize fair technology. In the testing and regulation phase, the FDA should raise the threshold for evaluation of medical technologies and algorithms and expand data-auditing processes. In the market-deployment and evaluation phases, infrastructure should be developed to perform impact assessments of deployed technologies and government procurement should incentivize technologies that improve health outcomes.
Challenge and Opportunity
Bias is regrettably endemic in medical innovation. Drugs are incorrectly dosed to people assigned female at birth due to historical exclusion of women from clinical trials. Medical algorithms make healthcare decisions based on biased health data, clinically disputed race-based corrections, and/or model choices that exacerbate healthcare disparities. Much medical equipment is not accessible, thus violating the Americans with Disabilities Act. And drugs, devices, and algorithms are not designed with the lifespan in mind, impacting both children and the elderly. Biased studies, technology, and equipment inevitably produce disparate outcomes in U.S. healthcare.
The problem of bias in medical innovation manifests in multiple ways: cutting across technological sectors in clinical trials, pervading the commercialization pipeline, and impeding equitable access to critical healthcare advances.
Bias in medical innovation starts with clinical research and trials
The 1993 National Institutes of Health (NIH) Revitalization Act required federally funded clinical studies to (i) include women and racial minorities as participants, and (ii) break down results by sex and race or ethnicity. As of 2019, the NIH also requires inclusion of participants across the lifespan, including children and older adults. Yet a 2019 study found that only 13.4% of NIH-funded trials performed the mandatory subgroup analysis, and challenges in meeting diversity targets continue into 2024. Moreover, industry-funded studies, an increasing share of the total, are not subject to Revitalization Act mandates for subgroup analysis; as a result, these studies frequently fail to report differences in outcomes by patient population. New requirements for Diversity Action Plans (DAPs), mandated under the 2023 Food and Drug Omnibus Reform Act, will ensure drug and device sponsors consider enrollment of diverse populations in clinical trials. Yet the FDA can still approve drugs and devices that are not in compliance with their proposed DAPs, raising concerns about weak enforcement.
The resulting disparities in clinical-trial representation are stark: African Americans represent 12% of the U.S. population but only 5% of clinical-trial participants, Hispanics make up 16% of the population but only 1% of clinical trial participants, and sex distribution in some trials is 67% male. Finally, many medical technologies approved prior to 1993 have not been reassessed for potential bias. One outcome of such inequitable representation is evident in drug dosing protocols: sex-aware prescribing guidelines exist for only a third of all drugs.
Bias in medical innovation is further perpetuated by weak regulation
Algorithms
Regulation of medical algorithms varies based on end application, as defined in the 21st Century Cures Act. Only algorithms that (i) acquire and analyze medical data and (ii) could have adverse outcomes are subject to FDA regulation. Thus, clinical decision-support software (CDS) is not regulated even though these technologies make important clinical decisions in 90% of U.S. hospitals. The FDA has taken steps to clarify which CDS must be considered a medical device, although these actions have been heavily criticized by industry. Finally, the lack of regulatory frameworks for generative AI tools is leading to proliferation without oversight.
Even when a medical algorithm is regulated, regulation may occur through relatively permissive de novo pathways and 510(k) pathways. A de novo pathway is used for novel devices determined to be low to moderate risk, and thus subject to a lower burden of proof with respect to safety and equity. A 510(k) pathway can be used to approve a medical device exhibiting “substantial equivalence” to a previously approved device, i.e., it has the same intended use and/or same technological features. Different technical features can be approved so long as there are no questions raised around safety and effectiveness.
Medical algorithms approved through de novo pathways can be used as predicates for approval of devices through 510(k) pathways. Moreover, a device approved through a 510(k) pathway can remain on the market even if its predicate device was recalled. Widespread use of 510(k) approval pathways has generated a "collapsing building" phenomenon, wherein many technologies currently in use are based on failed predecessors. Indeed, 97% of devices recalled between 2008 and 2017 were approved via 510(k) clearance.
While DAP implementation will likely improve these numbers, of the 692 AI/ML-enabled medical devices, only 3.6% reported race or ethnicity, 18.4% reported age, and only 0.9% included any socioeconomic information. Further, less than half performed detailed analysis of algorithmic performance, and only 9% included information on post-market studies, raising the risk of algorithmic bias following approval and broad commercialization.
Even more alarming is evidence showing that machine learning can further entrench medical inequities. Because machine learning medical algorithms are powered by data from past medical decision-making, which is rife with human error, these algorithms can perpetuate racial, gender, and economic bias. Even algorithms demonstrated to be ‘unbiased’ at the time of approval can evolve in biased ways over time, with little to no oversight from the FDA. As technological innovation progresses, especially generative AI tools, an intentional focus on this problem will be required.
Medical devices
Currently, the Medical Device User Fee Act requires the FDA to consider the least burdensome appropriate means for manufacturers to demonstrate the effectiveness of a medical device or to demonstrate a device’s substantial equivalence. This requirement was reinforced by the 21st Century Cures Act, which also designated a category for “breakthrough devices” subject to far less-stringent data requirements. Such legislation shifts the burden of clinical data collection to physicians and researchers, who might discover bias years after FDA approval. This legislation also makes it difficult to require assessments on the differential impacts of technology.
Like medical algorithms, many medical devices are approved through 510(k) exemptions or de novo pathways. The FDA has taken steps since 2018 to increase requirements for 510(k) approval and ensure that Class III (high-risk) medical devices are subject to rigorous pre-market approval, but problems posed by equivalence and limited diversity requirements remain.
Finally, while DAPs will be required for many devices seeking FDA approval, the recommended number of patients in device testing is shockingly low. For example, only 10 people are currently required in a study of any new pulse oximeter's efficacy, and only 2 of those people need to be "darkly pigmented". This requirement (i) does not have the statistical power necessary to detect differences between demographic groups, and (ii) does not represent the composition of the U.S. population. The standard is currently under revision after immense external pressure. FDA-wide, there are no recommended guidelines for addressing human differences in device design, such as pigmentation, body size, age, and pre-existing conditions.
Pharmaceuticals
The 1993 Revitalization Act governs only clinical trials for pharmaceuticals; it makes no recommendations for adequate sex or genetic diversity in preclinical research. As a result, a disproportionately high number of male animals are used in research, and only 5% of cell lines used for pharmaceutical research are of African descent. Programs like All of Us, an effort to build diverse health databases through data collection, are promising steps toward improving equity and representation in pharmaceutical research and development (R&D). But stronger enforcement is needed to ensure that preclinical data (which informs function in clinical trials) reflects the diversity of our nation.
Biases in medical innovation are not tracked post-regulatory approval
FDA-regulated medical technologies appear trustworthy to clinicians, as approval signals safety and effectiveness. So when errors or biases occur (if they are even noticed), the practitioner may blame the patient's lifestyle rather than the technology used for assessment. This in turn leads to worse clinical outcomes.
Bias in pulse oximetry is the perfect case study of a well-trusted technology leading to significant patient harm. During the COVID-19 pandemic, many clinicians and patients were using oximeter technology for the first time and were not trained to spot factors, like melanin in the skin, that cause inaccurate measurements and impact patient care. Issues were largely not attributed to the device, leading to underreporting of adverse events to the FDA, which is already a problem due to the voluntary nature of adverse-event reporting.
Even when problems are ultimately identified, the federal government is slow to respond. The pulse oximeter's limitations in monitoring oxygenation levels across diverse skin tones were identified as early as the 1990s. Thirty-four years later, despite repeated follow-up studies indicating biases, no manufacturer has incorporated skin-tone-adjusted calibration algorithms into pulse oximeters. It took the large Sjoding study, and the media coverage it garnered around delayed care and unnecessary deaths, for the FDA to issue a safety communication and begin reviewing the regulation.
Other areas of HHS are stepping up to address issues of bias in deployed technologies. A new rule from the HHS Office for Civil Rights (OCR) on Section 1557 of the Affordable Care Act requires covered providers and institutions (i.e., any receiving federal funding) to identify their use of patient care decision support tools that directly measure race, color, national origin, sex, age, or disability, and to make reasonable efforts to mitigate the risk of discrimination from their use of these tools. Implementation of this rule will depend on OCR's enforcement, but it provides another route to address bias in algorithmic tools.
Differential access to medical innovation is a form of bias
Americans face wildly different levels of access to new medical innovations. Because many new innovations carry high price points, these drugs, devices, and algorithms are out of the price range of many patients, smaller healthcare institutions, and federally funded healthcare service providers, including the Veterans Health Administration, federally qualified health centers, and the Indian Health Service. Emerging care-delivery strategies might not be covered by Medicare and Medicaid, meaning that patients insured through CMS cannot access the most cutting-edge treatments. Finally, the shift to digital health, spurred by COVID-19, has compromised access to healthcare in rural communities without reliable broadband access.
The Advanced Research Projects Agency for Health (ARPA-H) has committed to having all programs and projects consider equity in their design. To fulfill this commitment, action is needed to ensure that medical technologies are developed fairly, tested with rigor, deployed safely, and made affordable and accessible to everyone.
Plan of Action
The next Administration should launch "Healthcare Innovation for All Americans" (HIAA), a whole-of-government initiative to improve health outcomes by ensuring Americans have access to bias-free medical technologies. Through a comprehensive approach that addresses bias in all medical technology sectors, at all stages of the commercialization pipeline, and in all geographies, the initiative will strive to ensure the medical-innovation ecosystem works for all. HIAA should be a joint mandate of the Department of Health and Human Services (HHS) and the Office of Science and Technology Policy (OSTP) to work with federal agencies on priorities of equity, non-discrimination per Section 1557 of the Affordable Care Act, and increased access to medical innovation; initiative leadership should sit at both HHS and OSTP.
This initiative will require involvement of multiple federal agencies, as summarized in the table below. Additional detail is provided in the subsequent sections describing how the federal government can mitigate bias in the development phase; testing, regulation, and approval phases; and market deployment and evaluation phases.
Three guiding principles should underlie the initiative:
- Equity and non-discrimination should drive action. Actions should seek to improve the health of those who have been historically excluded from medical research and development. We should design standards that repair past exclusion and prevent future exclusion.
- Coordination and cooperation are necessary. The executive and legislative branches must collaborate to address the full scope of the problem of bias in medical technology, from federal processes to new regulations. Legislative leadership should task the Government Accountability Office (GAO) to engage in ongoing assessment of progress towards the goal of achieving bias-free and fair medical innovation.
- Transparent, evidence-based decision making is paramount. There is abundant peer-reviewed literature examining bias in drugs, devices, and algorithms used in healthcare settings; this literature should form the basis of a non-discrimination approach to medical innovation. Gaps in the evidence should be filled through targeted research funding. Moreover, as algorithms become ubiquitous in medicine, every effort should be made to ensure that these algorithms are trained on data representative of those experiencing a given healthcare condition.
Addressing bias at the development phase
The following actions should be taken to address bias in medical technology at the innovation phase:
- Enforce parity in government-funded research. For clinical research, NIH should examine the widespread lack of adherence to regulations requiring that government-funded clinical trials report the sex, race or ethnicity, and age breakdown of trial participants. Funding should be reevaluated for non-compliant trials. For preclinical research, NIH should require sex parity in animal models and representation of diverse cell lines in federally funded studies.
- Deploy funding to address research gaps. Where data sources for historically marginalized people are lacking, such as for women’s cardiovascular health, NIH should deploy strategic, targeted funding programs to fill these knowledge gaps. This could build on efforts like the Initiative on Women’s Health Research. Increased funding should include resources for underrepresented groups to participate in research and clinical trials through building capacity in community organizations. Results should be added to a publicly available database so they can be accessed by designers of new technologies. Funding programs should also be created to fill gaps in technology, such as in diagnostics and treatments for high-prevalence and high-burden uterine diseases like endometriosis (found in 10% of reproductive-aged people with uteruses).
- Invest in research into healthcare algorithms and databases. Given the explosion of algorithms in healthcare decision-making, NIH and NSF should launch a new research program focused on the study, evaluation, and application of algorithms in healthcare delivery, and on how artificial intelligence and machine learning (AI/ML) can exacerbate healthcare inequities. The initial request for proposals should focus on design strategies for medical algorithms that mitigate bias from data or model choices.
- Task ARPA-H with developing metrics for equitable medical technology development. ARPA-H should prioritize developing a set of procedures and metrics for equitable development of medical technology. Once developed, these processes should be rapidly deployed across ARPA-H, as well as published for potential adoption by additional federal agencies, industry, and other stakeholders. ARPA-H could also collaborate with NIST and ASTP on relevant standards setting. For instance, NIST has developed an AI Risk Management Framework, and the ONC engages in setting standards that achieve equity by design. CMS could use resultant standards for Medicare and Medicaid reimbursements.
- Leverage procurement as a demand-signal for medical technologies that work for diverse populations. As the nation’s largest healthcare system, the Veterans Health Administration (VHA) can generate demand-signals for bias-free medical technologies through its procurement processes and market-shaping mechanisms. For example, the VA could put out a call for a pulse oximeter that works equally well across the entire range of human skin pigmentation and offer contracts for the winning technology.
Addressing bias at the testing, regulation, and approval phases
The following actions should be taken to address bias in medical innovation at the testing, regulation, and approval phases:
- Raise the threshold for FDA evaluation of devices and algorithms. Equivalency necessary to receive 510(k) clearance should be narrowed. For algorithms, this would involve consideration of whether the datasets for machine learning tactics used by the new device and its predicate are similar. For devices (including those that use algorithms), this would require tightening the definition of “same intended use” (currently defined as a technology having the same functionality as one previously approved by the FDA) as well as eliminating the approval of new devices with “different technological characteristics” (the application of one technology to a new area of treatment in which that technology is untested).
- Evaluate FDA’s guidance on specific technology groups for equity. Requirements for demonstrating the safety of a given drug, medical device, or algorithm should have the statistical power necessary to detect differences between demographic groups and should represent all end-users of the technology.
- Establish a data bank for auditing medical algorithms. The newly established Office of Digital Transformation within the FDA should create a “data bank” of healthcare images and datasets representative of the U.S. population, which could be done in partnership with the All of Us program. Medical technology developers could use the data bank to assess the performance of medical algorithms across patient populations. Regulators could use the data bank to ground claims made by those submitting a technology for FDA approval.
- Allow data submitted to the FDA to be examined by the broader scientific community. Currently, data submitted to the FDA as part of its regulatory-approval process is kept as a trade secret and not released pre-authorization to researchers. Releasing the data via an FDA-invited “peer review” step in the regulation of high-risk technologies, like automated decision-making algorithms, Class III medical devices, and drugs, will ensure that additional, external rigor is applied to the technologies that could cause the most harm due to potential biases.
- Establish an enforceable AI Bill of Rights. The federal government and Congress should create enforceable protections around the uses of artificial intelligence (AI) identified by OSTP. Federally funded healthcare centers, like facilities that are part of the Veterans Health Administration, could refuse to buy software or technology products that violate this “AI Bill of Rights” through changes to the Federal Acquisition Regulation (FAR).
Addressing bias at the market deployment and evaluation phases
- Strengthen reporting mechanisms at the FDA. Healthcare providers, who are often closest to the deployment of medical technologies, should be made mandatory reporters to the FDA of all witnessed adverse events related to bias in medical technology. In addition, the FDA should require the inclusion of unique device identifiers (UDIs) in adverse-response reporting. Using this data, Congress should create a national and publicly accessible registry that uses UDIs to track post-market medical outcomes and safety.
- Require impact assessments of deployed technologies. Congress must establish systems of accountability for medical technologies, like algorithms, that can evolve over time. Such work could be done by passing the Algorithmic Accountability Act which would require companies that create “high-risk automated decision systems” to conduct impact assessments reviewed by the FTC as frequently as necessary.
- Assess disparities in patient outcomes to direct technical auditing. AHRQ should be given the funding needed to fully investigate patient-outcome disparities that could be caused by biases in medical technology, such as its investigation into the impacts of healthcare algorithms on racial and ethnic disparities. The results of this research should be used to identify technologies that the FDA should audit post-market for efficacy or the FTC should investigate. CMS and its accrediting agencies can monitor these technologies and assess whether they should receive Medicare and Medicaid funding.
- Review reimbursement guidelines that are dependent on medical technologies with known bias. CMS should review its national coverage determinations for technologies, like pulse oximetry, that are known to perform differently across populations. For example, pulse oximeters can be used to determine home oxygen therapy provision, thus potentially excluding patients with darker skin pigmentation from receiving this benefit.
- Train physicians to identify bias in medical technologies and identify new areas of specialization. ED should work with medical schools to develop curricula that train physicians to identify potential sources of bias in medical technologies and ensure that physicians understand how to report adverse events to the FDA. In addition, ED should consider working with the American Medical Association to create new medical specialties that work at the intersection of technology and care delivery.
- Ensure that technologies developed by ARPA-H have an enforceable access plan. ARPA-H will produce cutting-edge technologies that must be made accessible to all Americans. ARPA-H should collaborate with the Center for Medicare and Medicaid Innovation to develop strategies for equitable delivery of these new technologies. A cost-effective deployment strategy must be identified to serve federally funded healthcare institutions like Veterans Health Administration hospitals and clinics, federally qualified health centers, and the Indian Health Service.
- Create a fund to support digital health technology infrastructure in rural hospitals. To capitalize on the $65 billion expansion of broadband access allocated in the Bipartisan Infrastructure Bill, HRSA should deploy strategic funding to federally qualified health centers and rural health clinics to support digital health strategies — such as telehealth and mobile health monitoring — and patient education for technology adoption.
A comprehensive road map is needed
The GAO should conduct a comprehensive investigation of “black box” medical technologies utilizing algorithms that are not transparent to end users, medical providers, and patients. The investigation should inform a national strategic plan for equity and non-discrimination in medical innovation that relies heavily on algorithmic decision-making. The plan should include identification of noteworthy medical technologies leading to differential healthcare outcomes, creation of enforceable regulatory standards, development of new sources of research funding to address knowledge gaps, development of enforcement mechanisms for bias reporting, and ongoing assessment of equity goals.
Timeline for action
Realizing HIAA will require mobilization of federal funding, introduction of regulation and legislation, and coordination of stakeholders from federal agencies, industry, healthcare providers, and researchers around a common goal of mitigating bias in medical technology. Such an initiative will be a multi-year undertaking and require funding to enact R&D expenditures, expand data capacity, assess enforcement impacts, create educational materials, and deploy personnel to staff all the above.
Near-term steps that can be taken to launch HIAA include issuing a public request for information, gathering stakeholders, engaging the public and relevant communities in conversation, and preparing a report outlining the roadmap to accomplishing the policies outlined in this memo.
Conclusion
Medical innovation is central to the delivery of high-quality healthcare in the United States. Ensuring equitable healthcare for all Americans requires ensuring that medical innovation is equitable across all sectors, phases, and geographies. Through a bold and comprehensive initiative, the next Administration can ensure that our nation continues leading the world in medical innovation while crafting a future where healthcare delivery works for all.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
HIAA will be successful when medical policies, projects, and technologies yield equitable health care access, treatment, and outcomes. For instance, success would yield the following outcomes:
- Representation in preclinical and clinical research equivalent to the incidence of a studied condition in the general population.
- Research on a disease condition funded equally per affected patient.
- Existence of data for all populations facing a given disease condition.
- Medical algorithms that have equal efficacy across subgroup populations.
- Technologies that work equally well in testing as they do when deployed to the market.
- Healthcare technologies made available and affordable to all care facilities.
Regulation alone cannot close the disparity gap. There are notable gaps in preclinical and clinical research data for women, people of color, and other historically underrepresented groups that need to be filled. There are also historical biases encoded in AI/ML decision-making algorithms that need to be studied and rectified. In addition, the FDA’s role is to serve as a safety check on new technologies — the agency has limited oversight over technologies once they are out on the market due to the voluntary nature of adverse reporting mechanisms. This means that agencies like the FTC and CMS need to be mobilized to audit high-risk technologies once they reach the market. Eliminating bias in medical technology is only possible through coordination and cooperation of federal agencies with each other as well as with partners in the medical device industry, the pharmaceutical industry, academic research, and medical care delivery.
A significant focus of the medical device and pharmaceutical industries is reducing the time to market for new medical devices and drugs. Imposing additional requirements for subgroup analysis and equitable use as part of the approval process could work against this objective. On the other hand, ensuring equitable use during the development and approval stages of commercialization will ultimately be less costly than dealing with a future recall or a loss of Medicare or Medicaid eligibility if discriminatory outcomes are discovered.
Healthcare disparities exist in every state in America and are costing billions a year in economic growth. Some of the most vulnerable people live in rural areas, where they are less likely to receive high-quality care because new medical technologies are too costly for rural hospitals and for the federally qualified health centers that serve one in five rural residents. Furthermore, during continued use, a biased device creates adverse healthcare outcomes that cost taxpayers money, and a technology functioning poorly due to bias can be expensive to replace. Ensuring technology works as expected is an economic imperative: it leads to more effective healthcare and thus healthier people.
U.S. Energy Security Compacts: Enhancing American Leadership and Influence with Global Energy Investment
This policy proposal was incubated at the Energy for Growth Hub and workshopped at FAS in May 2024.
Increasingly, U.S. national security priorities depend heavily on bolstering the energy security of key allies, including developing and emerging economies. But U.S. capacity to deliver this investment is hamstrung by critical gaps in approach, capability, and tools.
The new administration should work with Congress to give the Millennium Challenge Corporation (MCC) the mandate and capacity to lead the U.S. interagency in implementing ‘Energy Security Compacts’, bilateral packages of investment and support for allies whose energy security is closely tied to core U.S. priorities. This would require minor amendments to the Millennium Challenge Act of 2003 to add a fourth business line to MCC’s Compact operations and grant the agency authority to coordinate an interagency working group contributing complementary tools and resources.
This proposal presents an opportunity to deliver on global energy security, an issue with broad appeal and major national security benefits. This initiative would strengthen economic partnerships with allies overseas, who consistently rank energy security as a top priority; enhance U.S. influence and credibility in advancing global infrastructure; and expand growing markets for U.S. energy technology. This proposal is built on the foundations and successes of MCC, a signature achievement of the G.W. Bush administration, and is informed by lessons learned from other initiatives launched by previous presidents of both parties.
Challenge and Opportunity
More than ever before, U.S. national security depends on bolstering the energy security of key allies. Core examples include:
- Securing physical energy assets: In countries under immediate or potential military threat, the U.S. may seek to secure vulnerable critical energy infrastructure, restore energy services to local populations, and build a foundation for long-term restoration.
- Countering dependence on geostrategic competitors: U.S. allies’ reliance on geostrategic competitors for energy supply or technologies poses short- and long-term threats to national security. Russia is building large nuclear reactors in major economies including Turkey, Egypt, India, and Bangladesh; has signed agreements to supply nuclear technology to at least 40 countries; and has agreed to provide training and technical assistance to at least another 14. Targeted U.S. support, investment, and commercial diplomacy can head off such dependence by expanding competition.
- Driving economic growth and enduring diplomatic relationships: Many developing and emerging economies face severe challenges in providing reliable, affordable electricity to their populations. This hampers basic livelihoods; constrains economic activity, job creation, and internet access; and contributes to deteriorating economic conditions driving instability and unrest. Of all the constraints analyses conducted by MCC since its creation, roughly half identified energy as a country’s top economic constraint. As emerging economies grow, their economic stability has an expanding influence over global economic performance and security. In coming decades, they will require vast increases in reliable energy to grow their manufacturing and service industries and employ rapidly growing populations. U.S. investment can provide the foundation for market-driven growth and enduring diplomatic partnerships.
- Diversifying supply chains: Many crucial technologies depend on minerals sourced from developing economies without reliable electricity. For example, Zambia accounts for about 4% of global copper supply and would like to scale up production. But recurring droughts have shuttered the country’s major hydropower plant and led to electricity outages, making it difficult for mining operations to continue or grow. Scaling up the mining and processing of key minerals in developing economies will require investment in improving power supply.
The U.S. needs a mechanism that enables quick, efficient, and effective investment and policy responses to the specific concerns facing key allies. Currently, U.S. capacity to deliver such support is hamstrung by key gaps in approach, capabilities, and tools. The most salient challenges include:
A project-by-project approach limits systemic impact: U.S. overseas investment agencies including the Development Finance Corporation (DFC), the U.S. Trade and Development Agency (USTDA), and the Export-Import Bank (EXIM) are built to advance individual commercial energy transactions across many different countries. This approach has value–but is insufficient in cases where the goal is to secure a particular country’s entire energy system by building strong, competitive markets. That will require approaching the energy sector as a complex and interconnected system, rather than a set of stand-alone transactions.
Diffusion of tools across the interagency hinders coordination. The U.S. has powerful tools to support energy security–including through direct investment, policy support, and technical and commercial assistance–but they are spread across at least nine different agencies. Optimizing deployment will require efficient coordination, incentives for collaboration, and less fragmented engagement with private partners.
Insufficient leverage to incentivize reforms weakens accountability. Ultimately, energy security depends heavily on decisions made by the partner country’s government. In many cases, governments need to make tough decisions and advance key reforms before the U.S. can help crowd in private capital. Many U.S. agencies provide technical assistance to strengthen policy and regulatory frameworks but lack concrete mechanisms to incentivize these reforms or make U.S. funding contingent on progress.
Limited tools for supporting vital enabling public infrastructure block out private investment. The most challenging bottleneck to modernizing and strengthening a power sector is often not financing new power generation (which can easily attract private investment under the right conditions), but supporting critical enabling infrastructure including grid networks. In most emerging markets, these are public assets, wholly or partially state-owned. However, most U.S. energy finance tools are designed to support only private sector-led investments, effectively restricting them to the generation sector, which already attracts far more capital than transmission or distribution.
To succeed, an energy security investment mechanism should:
- Enable investment highly tailored to the specific needs and priorities of partners;
- Provide support across the entire energy sector value chain, strengthening markets to enable greater direct investment by DFC and the private sector;
- Co-invest with partner countries in shared priorities, with strong accountability mechanisms.
Plan of Action
The new administration should work with Congress to give the Millennium Challenge Corporation the mandate to implement ‘Energy Security Compacts’ (ESCs) addressing the primary constraints to energy security in specific countries, and to coordinate the rest of the interagency in contributing relevant tools and resources. This proposal builds on and reflects key lessons learned from previous efforts by administrations of both parties.
Each Energy Security Compact would include the following:
- A process led by MCC and the National Security Council (NSC) to identify priority countries.
- An analysis jointly conducted by MCC and the partner country on the key constraints to energy security.
- Negotiation, led by MCC with support from NSC, of a multi-year Energy Security Compact, anchored by MCC support for a specific set of investments and reforms, and complemented by relevant contributions from the interagency. The Energy Security Compact would define agency-specific responsibilities and include clear objectives and measurable targets.
- Implementation of the Energy Security Compact, led by MCC and NSC. To manage this process, MCC and NSC would co-lead an Interagency Working Group comprising representatives from all relevant agencies.
- Results reporting, based on MCC’s top-ranked reporting process, to the National Security Council and Congress.
This would require the following congressional actions:
- Amend the Millennium Challenge Act of 2003: Grant MCC the expanded mandate to deploy Energy Security Compacts as a fourth business line. This should include language applying more flexible eligibility criteria to ESCs, and broadening the set of countries in which MCC can operate when implementing an ESC. Give MCC the mandate to co-lead an interagency working group with NSC.
- Plus up MCC Appropriation: ESCs can be launched as a pilot project in a few markets. But ultimately, the model’s success and impact will depend on MCC appropriations, including for direct investment and dedicated staff. MCC has a track record of outstanding transparency in evaluating its programs and reporting results.
- Strengthen DFC through reauthorization. The ultimate success of ESCs hinges on DFC’s ability to deploy more capital in the energy sector. DFC’s congressional authorization expires in September 2025, presenting an opportunity to enhance the agency’s reach and impact in energy security. Key recommendations for reauthorization include: 1) Addressing the equity scoring challenge; and 2) Raising DFC’s maximum contingent liability to $100 billion.
- Budget. The initiative could operate under various budget scenarios. The model is specifically designed to be scalable, based on the number of countries with which the U.S. wants to engage. It prioritizes efficiency by drawing on existing appropriations and authorities, by focusing U.S. resources on the highest priority countries and challenges, and by better coordinating the deployment of various U.S. tools.
This proposal draws heavily on the successes and struggles of initiatives from previous administrations of both parties. The most important lessons include:
- From MCC: The Compact model works. Multi-year Compact agreements are an effective way to ensure country buy-in, leadership, and accountability through the joint negotiation process and the establishment of clear goals and metrics. Compacts are also an effective mechanism to support hard infrastructure because they provide multi-year resources.
- From MCC: Investments should be based on rigorous analysis. MCC’s Constraints Analyses identify the most important constraints to economic growth in a given country. That same rigor should be applied to energy security, ensuring that U.S. investments target the highest impact projects, including those with the greatest positive impact on crowding in additional private sector capital.
- From Power Africa: Interagency coordination can work. Coordinating implementation across U.S. agencies is a chronic challenge. But it is essential to ESCs–and to successful energy investment more broadly. The ESC proposal draws on lessons learned from the Power Africa Coordinator’s Office. Specifically, joint leadership with the NSC focuses effort and ensures alignment with broader strategic priorities. A mechanism to easily transfer funds from the Coordinator’s Office to other agencies incentivizes collaboration, and enables the U.S. to respond more quickly to unanticipated needs. And finally, staffing the office with individuals seconded from relevant agencies ensures that staff understand the available tools, how they can be deployed effectively, and how (and with whom) to work to ensure success. Legislative language creating a Coordinator’s Office for ESCs can be modeled on language in the Electrify Africa Act of 2015, which created Power Africa’s interagency working group.
Conclusion
The new administration should work with Congress to empower the Millennium Challenge Corporation to lead the U.S. interagency in crafting ‘Energy Security Compacts’. This effort would provide the U.S. with the capability to coordinate direct investment in the energy security of a partner country and contribute to U.S. national priorities including diversifying energy supply chains, investing in the economic stability and performance of rapidly growing markets, and supporting allies with energy systems under direct threat.
MCC’s model already includes multi-year Compacts targeting major constraints to economic growth. The agency already has in place the structure and skills to implement Energy Security Compacts, including a strong track record of successful investment across many energy sector compacts. MCC enjoys a strong bipartisan reputation and consistently ranks as the world’s most transparent bilateral development donor. Finally, MCC is unique among U.S. agencies in being able to put large-scale grant capital into public infrastructure, a crucial tool for energy sector support–particularly in emerging and developing economies. Co-leading the design and implementation of ESCs with the NSC will ensure that MCC’s technical skills and experience are balanced with NSC’s view on strategic and diplomatic goals.
This proposal supports existing proposed legislative changes to increase MCC’s impact by expanding the set of countries eligible for support. The Millennium Challenge Act of 2003 currently defines the candidate country pool in a way that MCC has determined prevents it from “considering numerous middle-income countries that face substantial threats to their economic development paths and ability to reduce poverty.” Expanding that country pool would increase the potential for impact. Secondly, the country selection process for ESCs should be amended to include strategic considerations and to enable participation by the NSC.
America’s Teachers Innovate: A National Talent Surge for Teaching in the AI Era
Thanks to Melissa Moritz, Patricia Saenz-Armstrong, and Meghan Grady for their input on this memo.
Teaching our young children to be productive and engaged participants in our society and economy is, alongside national defense, the most essential job in our country. Yet the competitiveness and appeal of teaching in the United States has plummeted over the past decade. At least 55,000 teaching positions went unfilled this year, with long-term shortages projected to double to 100,000 annually. Moreover, teachers report little confidence in their ability to teach the critical digital skills needed for an AI-enabled future, and little confidence in the profession at large. Efforts in economic peer countries such as Canada or China demonstrate that reversing this trend is feasible. The new Administration should announce a national talent surge to identify, scale, and recruit into innovative teacher preparation models, expand teacher leadership opportunities, and boost the profession’s prestige. “America’s Teachers Innovate” is an eight-part executive action plan to be coordinated by the White House Office of Science and Technology Policy (OSTP), with implementation support through GSA’s Challenge.Gov and accompanied by new competitive priorities in existing National Science Foundation (NSF), Department of Education (ED), Department of Labor (DoL), and Department of Defense Education Activity (DoDEA) programs.
Challenge and Opportunity
Artificial Intelligence may add an estimated $2.6 trillion to $4.4 trillion annually to the global economy. Yet, if the U.S. is not able to give its population the proper training to leverage these technologies effectively, the U.S. may witness a majority of this wealth flow to other countries over the next few decades while American workers are automated from, rather than empowered by, AI deployment within their sectors. The students who gain the digital, data, and AI foundations to work in tandem with these systems – currently only 5% of graduating high school students in the U.S. – will fare better in a modern job market than the majority who lack them. Among both countries and communities, the AI skills gap will supercharge existing digital divides and dramatically compound economic inequality.
China, India, Germany, Canada, and the U.K. have all made investments to dramatically reshape the student experience for the world of AI and train teachers to educate a modern, digitally-prepared workforce. While the U.S. made early research & development investments in computer science and data science education through the National Science Foundation, we have no teacher workforce ready to implement these innovations in curriculum or educational technology. The number of individuals completing a teacher preparation program has fallen 25% over the past decade; long-term forecasts suggest shortages of at least 100,000 teachers annually; teachers themselves are discouraging others from joining the profession (especially in STEM); and preparing to teach digital skills such as computer science was the least popular option for prospective educators to pursue. In 2022, even Harvard discontinued its Undergraduate Teacher Education Program completely, citing low interest and enrollment numbers. There is still consistent evidence that young people and even current professionals remain interested in teaching as a possible career, but only if we create the conditions to translate that interest into action. U.S. policymakers have a narrow window to leverage the strong interest in AI to energize the education workforce and ensure our future graduates are globally competitive for the digital frontier.
Plan of Action
America’s teaching profession needs a coordinated national strategy to reverse decades of decline and concurrently reinvigorate the sector for a new (and digital) industrial revolution now moving at an exponential pace. Key levers for this work include expanding the number of leadership opportunities for educators; identifying and scaling successful evidence-based models such as UTeach, residency-based programs, or National Writing Project’s peer-to-peer training sites; scaling registered apprenticeship programs or Grow Your Own programs along with the nation’s largest teacher colleges; and leveraging the platform of the President to boost recognition and prestige of the teaching profession.
The White House Office of Science and Technology Policy (OSTP) should coordinate a set of Executive Actions within the first 100 days of the next administration, including:
Recommendation 1. Launch a Grand Challenge for AI-Era Teacher Preparation
Create a national challenge via www.Challenge.Gov to identify the most innovative teacher recruitment, preparation, and training programs to prepare and retain educators for teaching in the era of AI. Challenge requirements should be minimal and flexible to encourage innovation, but could include the creation of teacher leadership opportunities, peer-network sites for professionals, and digital classroom resource exchanges. A challenge prompt could replicate the model of 100Kin10 or even leverage the existing network.
Recommendation 2. Update Areas of National Need
To enable existing scholarship programs to support AI readiness, the U.S. Department of Education should add "Artificial Intelligence," "Data Science," and "Machine Learning" to the GAANN Areas of National Need under the Computer Science and Mathematics categories, expanding eligibility for master's-level scholarships for teachers to pursue additional study in these critical areas. The number of higher education programs in data science education has increased significantly in the past five years, alongside a small but growing number of emerging artificial intelligence programs.
Recommendation 3. Expand and Simplify Key Programs for Technology-Focused Training
The President should direct the U.S. Secretary of Education, the National Science Foundation Director, and the Department of Defense Education Activity Director to add "Artificial Intelligence, Data Science, Computer Science" as competitive priorities where appropriate for existing grant or support programs that directly influence the national direction of teacher training and preparation, including the Teacher Quality Partnerships program (ED), SEED (ED), the Hawkins Program (ED), the STEM Corps (NSF), the Robert Noyce Scholarship Program (NSF), the DoDEA Professional Learning Division, and the Apprenticeship Building America grants from the U.S. Department of Labor. These terms could be added under prior "STEM" competitive priorities, such as those established for "Computer Science" by the STEM Education Acts of 2014 and 2015, and framed under "Digital Frontier Technologies."
Additionally, the U.S. Department of Education should increase the funding allocations available to proposals supported at ESSA evidence Tier 4 ("Demonstrates Rationale"), to expand the flexibility of existing grant programs to accommodate emerging technology proposals. Because AI systems update quickly, few applicants have the opportunity to conduct rigorous evaluation studies or randomized controlled trials (RCTs) within the timespan of an ED grant program application window.
Additionally, the National Science Foundation should relaunch the 2014 Application Burden Taskforce to identify the greatest barriers in NSF application processes, update digital review infrastructure, review or modernize application criteria to recognize present-day technology realities, and set a 2-year deadline for recommendations to be implemented agency-wide. This ensures earlier-stage projects and non-traditional applicants (e.g. nonprofits, local education agencies, individual schools) can realistically pursue NSF funding. Recommendations may include a “tiered” approach for requirements based on grant size or applying institution.
Recommendation 4. Convene 100 Teacher Prep Programs for Action
The White House Office of Science & Technology Policy (OSTP) should host a national convening of nationally representative colleges of education and teacher preparation programs to 1) catalyze modernization efforts of program experiences and training content, and 2) develop recruitment strategies to revitalize interest in the teaching profession. A White House summit would help call attention to falling enrollment in teacher preparation programs; highlight innovative training models to recruit and retrain additional graduates; and create a deadline for states, districts, and private philanthropy to invest in teacher preparation programs. By leveraging the convening power of the White House, the Administration could make a profound impact on the teacher preparation ecosystem.
The administration should also consider announcing additional incentives or planning grants for regional or state-level teams in 1) catalyzing K-12 educator Registered Apprenticeship Program (RAPs) applications to the Department of Labor and 2) enabling teacher preparation program modernization for incorporating introductory computer science, data science, artificial intelligence, cybersecurity, and other “digital frontier skills,” via the grant programs in Recommendation 3 or via expanded eligibility for the Higher Education Act.
Recommendation 5. Launch a Digital “White House Data Science Fair”
Despite a bipartisan commitment to continue the annual White House Science Fair, the tradition ended in 2017. OSTP and the Committee on STEM Education (CoSTEM) should resume the White House Science Fair and add a national "White House Data Science Fair," a digital rendition of the Fair for the AI era. K-12 and undergraduate student teams would have the opportunity to submit creative or customized applications of AI tools, machine-learning projects (similar to Kaggle competitions), applications of robotics, and data analysis projects centered on their own communities or global problems (climate change, global poverty, housing, etc.), under the mentorship of K-12 teachers. Similar to the original White House Science Fair, this recognition could draw from existing student competitions that have arisen over the past few years, including in Cleveland, Seattle, and nationally via AP courses and out-of-school contexts. Partner federal agencies should be encouraged to contribute their own educational resources and datasets through FC-STEM coordination, enabling students to work on a variety of topics across domains or interests (e.g. NASA, the U.S. Census Bureau, the Bureau of Labor Statistics, etc.).
Recommendation 6. Announce a National Teacher Talent Surge at the State of the Union
The President should launch a national teacher talent surge under the banner of “America’s Teachers Innovate,” a multi-agency communications campaign to reinvigorate the teaching profession and increase the number of teachers completing undergraduate or graduate degrees each year by 100,000. This announcement would follow the First 100 Days in office, allowing Recommendations 1-5 to be implemented and/or planned. The “America’s Teachers Innovate” campaign would include:
A national commitments campaign for investing in the future of American teaching, facilitated by the White House, involving State Education Agencies (SEAs) and Governors, the 100 largest school districts, industry, and philanthropy. Many U.S. education organizations are ready to take action. Commitments could include targeted scholarships to incentivize students to enter the profession, new grant programs for summer professional learning, and restructuring teacher payroll to become salaried annual jobs instead of nine-month compensation (see Discover Bank: “Surviving the Summer Paycheck Gap”).
Expansion of the Presidential Awards for Excellence in Mathematics and Science Teaching (PAEMST) program to include Data Science, Cybersecurity, AI, and other emerging technology areas, or a renaming of the program for wider eligibility across today's STEM umbrella. Additionally, the PAEMST program should resume in-person award ceremonies, which were discontinued during COVID disruptions and have since been replaced by press releases only. Several national STEM organizations and teacher associations have requested that these events return.
Student loan relief through the Teacher Loan Forgiveness (TLF) program for teachers who commit to five or more years in the classroom. New research suggests the lifetime return of a college degree for education majors is near zero, higher only than a degree in Fine Arts. The administration should, via executive order, add "computer science, data science, and artificial intelligence" to the list of subject areas in which "highly qualified teachers" can receive up to $17,500 of loan forgiveness.
An annual recruitment drive at college campus job fairs, facilitated directly under the banner of the White House Office of Science & Technology Policy (OSTP), to help grow awareness on the aforementioned programs directly with undergraduate students at formative career choice-points.
Recommendation 7. Direct IES and BLS to Support Teacher Shortage Forecasting Infrastructure
The IES Commissioner and BLS Commissioner should 1) establish a special joint task force to better link existing federal data across agencies and enable cross-state collaboration on the teacher workforce, 2) support state capacity-building for interoperable teacher workforce data systems through competitive grant priorities in the State Longitudinal Data Systems (SLDS) program at IES and the Apprenticeship Building America (ABA) program (Category 1 grants), and 3) recommend a review criterion on education workforce data and forecasting in future EDA Tech Hub phases. The vast majority of states do not currently have adequate data systems in place to track total demand (teacher vacancies), likely supply (teachers completing preparation programs), and retention and mobility (teachers leaving the profession or relocating) based on near- or real-time information. Creating the estimates cited in this memo was challenging and subject to uncertainty. Without this visibility into the nuances of teacher supply, demand, and retention, school systems cannot accurately forecast and strategically fill classrooms.
Recommendation 8. Direct the NSF to Expand Focus on Translating Evidence on AI Teaching to Schools and Districts.
The NSF Discovery Research PreK-12 Program Resource Center on Transformative Education Research and Translation (DRK-12 RC) program is intended to select intellectual partners as NSF seeks to enhance the overall influence and reach of the DRK-12 Program's research and development investments. The DRK-12 RC program could be utilized to work with multi-sector constituencies to accelerate the identification and scaling of evidence-based practices for AI, data science, computer science, and other emerging tech fields. Currently, the program is anticipated to make only a single DRK-12 RC award; it should be scaled to establish at least three centers, one each for AI, data science, and computer science, to ensure digitally powered STEM education for all students.
Conclusion
China was #1 in the most recent Global Teacher Status Index, which measures the prestige, respect, and attractiveness of the teaching profession in a given country; meanwhile, the United States ranked just below Panama. The speed of AI means educational investments made by other countries have an exponential impact, and any misstep can place the United States far behind – if we aren't already. Emerging digital threats from other major powers, the increasing fluidity of talent and labor, and a remote-work economy make our education system the primary lever to keep America competitive in a fast-changing global environment. The time is ripe for a new Nation at Risk-level effort, if not an action on the scale of the original National Defense Education Act of 1958 or the more recent America COMPETES Act. The next administration should take decisive action to rebuild our country's teacher workforce and prepare our students for a future that may look very different from our current one.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
This memo was developed in partnership with the Alliance for Learning Innovation, a coalition dedicated to advocating for building a better research and development infrastructure in education for the benefit of all students. Read more education R&D memos developed in partnership with ALI here.
Approximately 100,000 more per year. The U.S. has 3.2 million public school teachers and 0.5 million private school teachers (NCES, 2022). According to U.S. Department of Education data, 8% of public and 12% of private school teachers exit the profession each year (-316,000), a number that has remained relatively steady since 2012, while long-term estimates of re-entry continue to hover near 20% (+63,000). Unfortunately, the number of new teachers completing either traditional or alternative preparation programs has steadily declined over the past decade to roughly 159,000 per year. As a result of this gap, active vacancies continue to increase each year, and more than 270,000 educators are now cumulatively underqualified for their current roles, presumably filling in for absences caused by the widening gap. These predictions were made as early as 2016 (p. 2) and now have seemingly become a reality. Absent any changes, the total shortage of vacant or underqualified teaching positions could reach a deficit of between 700,000 and 1,000,000 by 2035.
The above shortage estimate assumes a base of 50,000 vacancies and 270,000 underqualified teachers as of the most recent available data, and a net flow of -94,000 (entries minus exits annually, including re-entrants) in 2023-2024. The range incorporates uncertainty around a slight (3%-5%) annual improvement in preparation from continued growth of alternative licensure pathways such as Grow Your Own or apprenticeship programs through 2035. For the exit rate, the most conservative estimates suggest 5%, while the highest reach 50%; however, assembled state-level data suggests a 7.9% exit rate, similar to the NCES estimate (8%). Population forecasts for K-12 students (individuals aged 14-17) imply slight declines by 2035, based on U.S. Census estimates. Taken together, more optimistic assumptions result in a net cumulative shortage closer to -700,000 teachers, while worst-case estimates may exceed -1,000,000.
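The arithmetic above can be sketched as a simple cohort-flow model. The code below is an illustration only: the parameter values (exit rates, re-entry share, completion growth) are assumptions drawn loosely from the figures in this memo, not the authors' actual forecasting methodology, and the outputs are ballpark magnitudes rather than the memo's published range.

```python
# Illustrative cohort-flow sketch of the cumulative teacher shortage.
# Parameters are assumptions loosely based on the memo's figures.

DEMAND = 3_700_000          # 3.2M public + 0.5M private teaching positions
BASE_SHORTAGE = 320_000     # ~50k vacancies + ~270k underqualified (2024)

def project_shortage(exit_rate, completions, completion_growth,
                     reentry_share=0.20, years=11):
    """Project the cumulative shortage from 2024 to 2035.

    Each year, a share of the qualified workforce exits, a fraction of
    leavers re-enters, and new preparation-program completers join.
    """
    workforce = DEMAND - BASE_SHORTAGE
    for _ in range(years):
        exits = exit_rate * workforce
        reentries = reentry_share * exits
        workforce += completions + reentries - exits
        completions *= 1 + completion_growth   # preparation pipeline trend
    return round(DEMAND - workforce)

# Optimistic: 7.9% exit rate, completions recover ~4% per year
optimistic = project_shortage(0.079, 159_000, 0.04)
# Pessimistic: 8% exit rate, completions stay flat
pessimistic = project_shortage(0.08, 159_000, 0.0)

print(f"2035 shortage, optimistic:  {optimistic:,}")
print(f"2035 shortage, pessimistic: {pessimistic:,}")
```

Even this toy model shows how sensitive the 2035 figure is to small changes in the exit rate and the preparation pipeline, which is why the memo reports a wide range rather than a point estimate.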
Early versions of AI-powered tutoring have significant promise but have not yet lived up to expectations. Automated tutors have produced frustrating experiences for users, led students to perform worse on tests than peers who used no outside support, and have yet to successfully integrate some school subject areas (such as mathematics). We should expect AI tools to improve over time and become more additive for learning specific concepts, including repetitive or generalizable tasks requiring frequent practice, such as sentence writing or paragraph structure, which has the potential to make classroom time more useful and higher-impact. However, AI will struggle to replace other critical classroom needs inherent to school-aged children, including classroom behavioral management, social motivation to learn, mentorship relationships, facilitating collaboration between students for project-based learning, and improving quality of work beyond accuracy or pre-prompted, rubric-based scoring. Teachers consistently report student interest as a top barrier to continued learning; digital curriculum and AI automation may sustain that interest for a short period, but cannot do so for the full duration of a student's K-12 experience.
These proposed executive actions complement a bipartisan legislative proposal, "A National Training Program for AI-Ready Students," which would invest in a national network of training sites for in-service teachers, provide grant dollars to support the expansion of teacher preparation programs, and help reset teacher payroll structure from 9 months to 12 months. Either proposal can be implemented independently of the other, but the two are stronger together.
Using Title 1 to Unlock Equity-Focused Innovation for Students
Congress should approve a new allowable use of Title I spending that specifically enables and encourages school districts to use funds for activities that support and drive equity-focused innovation. The persistent equity gap between wealthy and poor students in our country, and the continuing challenges caused by the pandemic, demand new, more effective strategies to help the students who are most underserved by our public education system.
Efforts focused on the distribution of all education funding, and Title I in particular, have focused on ensuring that funds flow to students and districts with the highest need. Given the persistence of achievement and opportunity gaps across race, class, and socioeconomic status, there is still work to be done on this front. Further, rapidly developing technologies such as artificial intelligence and immersive technologies are opening up new possibilities for students and teachers. However, these solutions are not enough. Realizing the full potential of funding streams and emerging technologies to transform student outcomes requires new solutions designed alongside the communities they are intended to serve.
To finally close the equity gap, districts must invest in developing, evaluating, and implementing new solutions to meet the needs of students and families today and in a rapidly changing future. Using Title I funding to create a continuous, improvement-oriented research and development (R&D) infrastructure supporting innovations at scale will generate the systemic changes needed to reach the students in highest need of new, creative, and more effective solutions to support their learning.
Challenge and Opportunity
Billions of dollars of federal funding have been distributed to school districts since the authorization of Title I federal funding under the Elementary and Secondary Education Act (ESEA), introduced in 1965 (later reauthorized under the Every Student Succeeds Act [ESSA]). In 2023 alone, Congress approved $18.4 billion in Title I funding. This funding is designed to provide targeted resources to school districts to ensure that students from low-income families can meet rigorous academic standards and have access to post-secondary opportunities. ESEA was authorized during the height of the Civil Rights Movement with the intent of addressing the two primary goals of (1) ensuring traditionally disadvantaged students were better served in an effort to create more equitable public education, and (2) addressing the funding disparities created by differences in local property taxes, the predominant source of education funding in most districts. These dual purposes were ultimately aimed at ensuring that a student’s zip code did not define their destiny.
The passing of ESEA was a watershed moment. Prior to its authorization, education policy was left mostly up to states and localities. In authorizing ESEA, the federal government launched ongoing involvement in public education and initiated a focus on principles of equity in education.
Further, research shows that school spending matters: increased funding has been found to be associated with higher levels of student achievement. However, despite the increased spending for students from low-income families via Title I, the literature on outcomes of Title I funding is mixed. The limited impact of Title I funds on outcomes may be a result of municipalities using Title I funding to supplant or fill gaps in their overall funding and programs, instead of using it as an additive funding stream meant to equalize funding between poorer and richer districts. Additionally, while a taxonomy of options exists to bring rigor and research to how districts use Title I funding, the narrow set of options has not yielded the intended outcomes at scale. For instance, studies have repeatedly shown that school turnaround efforts have proven particularly stubborn and have not shown the hoped-for outcomes.
The equity gap that ESEA was created to address has not been erased. There is still a persistent achievement gap between high- and low-income students in the nation. The emergence of COVID in 2020 uprooted the public education system, and its impact on student learning, as measured by test scores, is profound. Students lost ground across all focus areas and grades. Now, in the post-pandemic era, students have continued to lose ground. The "COVID Generation" of students is behind where it should be, and many students are disengaged or questioning the value of their public education. Chronic absenteeism is increasing across all grades, races, and incomes. These challenges create an imperative for schools and districts to deepen their understanding of the interests and needs of students and families. Rapid technological advancements in the education market are changing what is possible and available to students, while also raising important questions around ethics, student agency, and equitable access to technology. It is a moment of immense potential in public education.
Title I funds are a key mechanism for addressing the array of challenges in education, ranging from equity to the fast-paced advancements in technology transforming the field. In its current form, Title I allocation occurs via four distribution criteria. The majority of funding is allocated via basic grants that are determined entirely by individual student income eligibility. The other three criteria allocate funding based on the concentration of student financial need within a district. Those looking to rethink allocation often argue for considering impact per dollar allocated rather than need alone, essentially accounting for the cost of living and services in an area to understand how far additional funding will stretch, in order to more accurately equalize funding. It is essential that Title I be redesigned beyond redoing the distribution formula. The money allocated must be spent differently—more creatively, innovatively, and wisely—in order to ensure that the needs of the most vulnerable students are finally met.
Plan of Action
Title I needs a new allowable spending category that specifically enables and encourages districts to use funds for activities that drive equity-focused innovation. Making room for innovation grounded in equity is particularly important at this moment: equity has always mattered, but there are now tools to better understand it and to implement systems that address it. As school districts continue to recover from pandemic-related disruptions, explore new edtech learning options, and prepare an increasingly diverse population of students for the future, they must be encouraged to drive the creation of better solutions for students. Adding a spending category signals the value the federal government sees in innovating for equity. Some of the spending options highlighted below are feasible under the current Title I language; by tying these options specifically to innovation, district leadership will have more flexibility to spend on programs that foster equity-driven innovation and create space for the new solutions needed to improve outcomes for students.
Innovation, in this context, is any systemic change that brings new services, tools, or ways of working into school districts that improve the learning opportunities and experience for students. Equity-focused innovation refers to innovation efforts that are specifically focused on improving equity within school systems. It is a solution-finding process to meet the needs of students and families. Innovation can be new, technology-driven tools for students, teachers, or others who support student learning. But innovation is not limited to technology. Allowing Title I funding to be used for activities that support and foster equity-driven innovation could also include:
- Improving data systems and usage: Ensure that school districts have agile data systems equipped to identify student weaknesses and determine the effectiveness of solutions. As more solutions come to market and are developed internally, both AI and otherwise, school systems will be able to better serve students qualifying for Title I funding if they can meaningfully assess what is and is not working and use that information to guide strategy and decision-making.
- Leadership development: Support the research and development, futurist, and equitable design skills of systems to enable leaders to guide innovation from within districts alongside students and families.
- Testing new solutions: Title I funding currently can be spent primarily on evidence-based programs; enabling the use of funding for innovative pilots that have community support would provide space to discover more effective solutions.
- Incentivizing systemic district innovation: School districts could use funding to support the creation of innovation offices within their administration structure that are tasked with developing an innovation agenda rooted in district and student needs and spearheading solutions.
- Building networks for change: District leaders charged with creating and sustaining new learning models, school models, and programs often do so in isolation. Allowing districts to fund the creation of new programs and support existing organizations that bring together school system innovators and researchers to capture and share best practices, promising new solutions, and lessons learned from testing can lead to better adoption and scale of promising new models. There are already networks that exist, for instance, the Regional Education Laboratory Program. Funding could be used to support these existing networks or to develop new networks specifically tailored to meet the needs of leaders driving these innovations.
Expanding Title I funding to make room for innovative ideas and solutions within school systems has the potential to unlock new, more effective solutions that will help close equity gaps, but spending available education funds on unproven ideas can be risky. It is essential that the Department of Education issues carefully constructed guardrails to allow ample space for new solutions to emerge and scale, while also protecting students and ensuring their educational needs are still met. These guardrails and design principles would ensure that funds are spent in impactful ways that support innovation and building an evidence base. Examples of guardrails for a school system spending Title I funding on innovation could include:
- Innovation agenda: There should be a clearly articulated, publicly available innovation agenda that lays out how needs are being identified using quantitative and qualitative data and research, the methods of how innovations are being developed and selected, the goals of the innovation and how the work will grow (or not) based on clearly defined metrics of success.
- Clear research & development process: New ideas, tools, and ways of working must come into the district with a clear R&D process that begins with student and community needs and then regularly interrogates what is and is not working, tries to understand the why behind what is working, and expands promising practices.
- Pilot size limits: Unproven and innovative ideas should begin as pilots in order to ensure they are tested, evaluated, and proven before being used more broadly.
- Timeline requirements for results: New innovation funded via Title I funding should have a limited timeline during which the system needs to show improvement and evidence of impact.
- Clear outcomes that the innovation is aiming for: Innovation is not about something new for the sake of something new. Innovation funding via Title I funding must be linked to specific outcomes that will help achieve the overarching programmatic goal of increasing educational equity in our country.
While creating an authorized funding category for equity-focused innovation through Title I would have the most widespread impact, other ways to drive equitable innovation should also be pursued in the short term, such as through the new Comprehensive Center (CC), set to open in fall 2024, which will focus on equitable funding. The center should prioritize developing the skills district leaders need to enable and drive equity-driven innovation.
Conclusion
Investment in innovation through Title I funding can feel high risk compared to the more comfortable route of spending only on proven solutions. However, many traditional spending approaches are not currently working at scale. Investing in innovation creates the space to find solutions that actually work for students—especially those who are farthest from opportunity and whom Title I funding is intended to support. Despite the perceived risk, investing in innovation is not a high-risk path when coupled with a clear sense of community need, guardrails to promote responsible R&D and piloting processes, predetermined outcome goals, and the data systems to support transparency on progress. Large-scale federal investment in creating space for innovation through Title I funding—an already well-known mode of district funding not currently realizing its desired impact—will create solutions within public education that give students the opportunities they need and deserve.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
This memo was developed in partnership with the Alliance for Learning Innovation, a coalition dedicated to advocating for building a better research and development infrastructure in education for the benefit of all students. Read more education R&D memos developed in partnership with ALI here.
Establish Data Standards To Protect Newborn DNA Privacy by Developing Data Storage Standards for Newborn Screening Samples
Newborn screening is performed on millions of babies in the U.S. every year to test for rare genetic diseases and, when necessary, allow for early treatment. While newborn screening is mandated by the federal government, each state runs its own screening program. Importantly, individual states manage how newborn screening data is stored and, potentially, accessed and used in the future. While such data is often used for quality assurance testing and clinical research, there have been instances of law enforcement subpoenaing newborn screening data for use in criminal investigations. For example, New Jersey used newborn screening data to investigate a decades-old sexual assault. This raises major concerns about the overall transparency of data use and privacy in the newborn screening process.
The incoming administration should encourage states to develop data handling standards for newborn screening data. Specifically, these standards should address how long data is stored and who can access it. This can be accomplished by directing the Department of Health and Human Services' (HHS) Federal Advisory Committee on Heritable Disorders in Newborns and Children (ACHDNC) to provide recommendations that clearly communicate data use and privacy measures to state health departments. The incoming administration should also encourage the development of educational materials that explain these privacy concerns to parents, and create funding opportunities to incentivize both measures.
Challenge and Opportunity
Newborn screening is a universal practice across the United States. Blood samples are taken from infants only a few days old to test for a variety of genetic diseases such as phenylketonuria, which can cause intellectual disability that can be prevented through changes in diet—if it is caught early enough. These blood samples can be used for both metabolic and genetic tests, depending on which disease is being tested for and how it is detected. Phenylketonuria, for example, is detected by high levels of a molecule called phenylalanine in the blood, while spinal muscular atrophy (SMA) is detected by changes in the genetic sequence of the gene associated with SMA. Newborn screening is an essential practice that identifies a wide range of severe diseases before symptoms occur, and three babies out of every 1,000 are identified with a genetic condition.
While newborn screening is required by the federal government, each state determines which panel of diseases is tested. The Department of Health and Human Services established the Advisory Committee on Heritable Disorders in Newborns and Children (ACHDNC), which regularly updates a Recommended Uniform Screening Panel (RUSP) of conditions. For example, SMA was added to the RUSP in 2018, and all 50 states have now added SMA to their screening panels. Much of the effort both to nominate conditions to the federal RUSP and to encourage individual states to adopt SMA testing was led by patient advocacy groups such as CureSMA, and such groups play a significant role in the addition of future conditions. Similar efforts are underway for Krabbe disease, which was added to the RUSP in 2024 and is currently screened for in only twelve states, a number that may increase in the coming years as more states consider adding it to their panels. State advisory boards review new disease nominations and, alongside a condition’s status on the RUSP, often consider its prevalence, the availability of treatments, and the cost-effectiveness of screening. Regardless of which tests are performed, every state participates in newborn screening. Importantly, newborn screening does not require affirmative consent from parents—some states offer opt-out options, generally for religious reasons, but 98% of infants are screened.
Mandatory newborn screening programs have led health departments across the country to obtain genetic data from nearly every child in the country for decades. With recent developments in genetic sequencing technologies, this means that, theoretically, newborn screening data could be repurposed for other functions. In 2022, the New Jersey Office of the Public Defender filed a lawsuit against the New Jersey Department of Health for complying with a subpoena to provide newborn screening data to the police as part of a sexual assault investigation. Specifically, law enforcement subpoenaed the blood sample of a suspect’s child, which they used to perform new DNA analysis to match DNA crime scene evidence. The lawsuit reveals that the New Jersey Department of Health has retained newborn screening blood spots for over twenty years; that the data obtained from the subpoena was used to bring criminal charges for a crime committed in 1996; and that the Office of the Public Defender was not provided information about how many similar subpoenas had been complied with in the past.
This case highlights the broader issue of newborn screening data as the United States’ “hidden national DNA database.” Law enforcement has potential access to decades of samples, never intended for law enforcement use, that can be subjected to genetic analysis. Police in other states, like California, have also sought access to newborn screening databases for investigational purposes. In California, the state health department keeps samples indefinitely; not only is this practice undisclosed to parents, but the department no longer provides an opt-out option. As law enforcement agencies across states begin to understand the magnitude of data that can be found in these databases, it is becoming clear that health department policies for regulating access to these data are lacking.
Using genetic data in law enforcement has become increasingly common. The practice of “investigative genetic genealogy,” or IGG, in which law enforcement accesses genetic data from publicly available databases for use in criminal investigations, has made national headlines in recent years. These databases contain genetic data that consumers of direct-to-consumer genetic testing services, such as 23andMe, have voluntarily uploaded in order to share their data more widely. IGG presents its own privacy concerns, but it is important to recognize the voluntary nature of both (a) participating in direct-to-consumer testing and (b) uploading the results to a third-party website. Newborn screening, on the other hand, is not an optional practice.
Proponents of IGG argue that using genetic data is highly effective not only at catching killers—and doing so more quickly than without DNA data—but also at exonerating the innocent. However, these benefits do not outweigh the major issues of privacy and transparency, or the fact that this approach potentially violates the Fourth Amendment’s protection against unreasonable searches and seizures—especially when newborn screening data is incorporated into these approaches. A previous court case found a hospital in violation of the Fourth Amendment for providing law enforcement with warrantless drug screening results from pregnant women, even though the women were under the impression they were receiving diagnostic tests. The Supreme Court argued that the hospital’s actions break down public trust in the health system, as patients have a “reasonable expectation of privacy” regarding their test results. While subpoenaing newborn screening data may not currently violate any legal procedure, allowing law enforcement access to these data for use in future investigations, particularly without informing the individuals or parents involved, may also erode trust in the health system. This may lead parents, when given the option, to opt out of newborn screening more often, leaving more genetic and metabolic disorders in newborns undiagnosed and causing major health problems in the future. In addition, with many scientists advocating for whole-genome sequencing of newborns—instead of sequencing only a panel of genes commonly identified as disease-causing—the amount of potentially available genetic data could be staggering. As a result, the incoming administration needs to take action to address the lack of transparent policies regarding newborn screening data in order to maintain its success as a public health measure.
Plan of Action
Current genetic privacy legislation
The landscape of genetic privacy legislation is currently somewhat patchwork. At the federal level, the most relevant legislation includes (1) the Genetic Information Nondiscrimination Act (GINA), (2) the Affordable Care Act (ACA), and (3) the Health Insurance Portability and Accountability Act (HIPAA). GINA specifically prohibits genetic discrimination in health insurance and in the workplace. This means that health insurers cannot deny coverage based on genetic data, and employers cannot make hiring, firing, or promotion decisions based on genetic data. The ACA strengthens GINA’s stipulation against genetic discrimination in health insurance by mandating that any health insurance issuer must provide coverage to whomever applies, as well as including genetic information on the list of factors that cannot be considered when determining coverage or premium costs. HIPAA additionally regulates genetic data gathered in a healthcare setting, which includes newborn screening data, but HIPAA-protected information can be shared at the request of a court order or subpoena. The FBI developed an interim policy regarding all types of forensic genetic genealogy—often used with direct-to-consumer genetic tests but also potentially applicable to newborn screening—which states the criteria required for investigators to use this approach. These criteria include the requirement that the case be an unsolved violent crime. In addition, the interim policy states that investigative agencies must identify themselves as law enforcement—a previous case was solved by accessing genetic databases without disclosing this information to the database—and that any collected data must be destroyed upon conclusion of the case.
Additionally, many states have laws that strengthen genetic privacy protections beyond federal regulations. Maryland passed a bill that regulates the use of genetic data in criminal investigations—specifically, it requires that law enforcement obtain informed consent from non-suspects before using their DNA in investigations. Other recent state regulations that address law enforcement access to genetic data include Montana’s, which requires government agencies to obtain a warrant to access genetic data, and Tennessee’s, which explicitly allows law enforcement to access genetic data as long as they obtain a warrant or subpoena. Importantly, many of these laws are geared more toward genetic data from direct-to-consumer testing and do not directly apply to newborn screening. Like federal legislation, state genetic privacy legislation largely lacks policies addressing the use of newborn screening data by law enforcement.
Beyond genetic privacy legislation, each state has its own newborn screening policies, and these vary dramatically. For example, a court in Minnesota found that nonconsensual storage of newborn screening data for use outside of screening purposes violated the state’s genetic privacy law, which allows genetic information to be distributed only with an individual’s written consent; as a result, Minnesota destroyed its newborn screening samples. Other states have no legislation at all. States can also have laws addressing other, non-law enforcement uses of newborn screening data, research being another major one.
Policy Recommendations
The incoming administration should address the lack of transparency in newborn screening data management by implementing the following recommendations:
Direct the ACHDNC to develop national recommendations detailing standards for newborn genetic screening sample and data handling.
These standards should include:
Standards for what the data can be used for outside of newborn screening, and by whom. Newborn screening data is used in additional ways outside of law enforcement; it can also be used for quality assurance to help ensure tests are working properly, to help develop new tests, and in clinical trials. There are compelling arguments for these uses; for clinical research, for example, this data can contribute towards research studying the disease the child may have been diagnosed with. However, for the sake of transparency, policy should state specifically what newborn genetic data can and cannot be used for, and who is allowed access to the data under these circumstances. For instance, Michigan has a program called the Michigan BioTrust, which takes the leftover, de-identified newborn screening samples for use in research towards understanding disease. Parents can choose to opt in or out at the time of screening, and parents—as well as children, upon turning 18—can change their mind and have their data removed later if they so choose. Regardless of state decisions on whether law enforcement should be able to access their newborn screening data, clearly stating what the data can be used for overall is paramount for parents to understand what happens to their children’s samples.
The length of time that blood samples and genetic data can be stored in state databases, and when, if ever, the data will be destroyed. As detailed in the lawsuit, New Jersey had been storing samples for over twenty years, though parents were not aware of this fact until the lawsuit was filed; potentially in response, beginning in November 2024, New Jersey will destroy blood spots older than two years. Similarly, Delaware stores blood spot samples for three years before destroying them. While there is no definitive answer as to the best timeline for saving samples, establishing a clear timeline for how long samples can be stored in each state will improve data handling transparency.
What say, if any, parents have in what is done with their child’s samples and data. In Texas, after a lawsuit determined that storing newborn screening samples without consent was against the law, parents gained the right to request that their child’s samples be destroyed. Developing policies that allow parents—or children themselves, once they become adults—to have a say in what happens to samples after screening is completed would give individuals control over their data without disincentivizing testing.
Partner with state advisory boards to develop educational materials for parents detailing ACHDNC recommendations and state-specific policy.
While newborn screening is mandated, there is variable information available to parents regarding what is done with the data. For example, Michigan has an extensive Q&A page on their Department of Health website addressing many major newborn screening-related questions, including a section addressing what is done with samples after screening is complete. In contrast, West Virginia’s Q&A page does not address what happens to the samples after testing. Not only would developing standard policies for data handling be beneficial, but improving the dissemination of such information to parents would increase overall transparency and improve trust in the system. The incoming administration should work closely with state advisory boards to improve the communication of newly-developed data handling standards to parents and other relevant parties.
Incentivize development of plans by providing grant opportunities to state health departments to support newborn screening programs.
Currently, newborn screening programs receive no direct federal funding; however, costs include operating costs, testing equipment, and personnel on top of the tests themselves. In general, newborn screening is paid for through a fee for the tests, often covered by the parents’ health insurance, the State Children’s Health Insurance Program, or Medicaid. However, grants such as the NBS Co-Propel have been awarded to states in the past for improving their newborn screening programs, such as supporting long-term follow-up on patients who receive positive test results. The Co-Propel grant was administered through the Maternal & Child Health Bureau (MCHB) of Health and Human Services; the incoming administration could recommend that MCHB initiate a new funding opportunity for states to develop data storage standards and/or educational materials for families, encouraging the adoption of these standards.
Conclusion
Newborn genetic screening is an essential public health measure that saves thousands of lives each year by identifying diseases in newborns that can either be prevented early or treated immediately, rather than waiting for severe symptoms to develop. However, with the advent of new genetic technologies and the burgeoning use of newborn genetic screening data in law enforcement investigations, major privacy and transparency issues are becoming known to parents, potentially putting trust in the newborn screening process at risk. This could reduce willingness to participate in these programs, leading to an inability to quickly diagnose many preventable or treatable conditions. The incoming administration should encourage state health departments to develop clear and well-communicated data storage standards for newborn screening samples in order to address these concerns moving forward.
Newborn screening is performed by pricking a newborn’s heel to obtain a blood sample, or “blood spot,” within two days of being born. These blood spot samples are used for both metabolic tests and genetic tests. Metabolic tests measure different molecules in the blood that might signal a disease, such as high levels of an amino acid called phenylalanine, which in healthy amounts is used by our bodies to make proteins and in high amounts can cause phenylketonuria. Genetic tests are performed by sequencing a panel, or selection, of genes that are often associated with newborn screening diagnoses; often, genetic testing is performed after a positive hit on a metabolic test to both confirm and further clarify the diagnosis.
The role of the Advisory Committee on Heritable Disorders in Newborns and Children (ACHDNC) is to communicate with the Secretary of the Department of Health and Human Services regarding newborn screening policies. This not only includes managing the Recommended Uniform Screening Panel, but also providing advice on grants and research projects related to newborn screening research, assistance with developing policies for state and local health departments for newborn screening implementation, and recommendations towards reducing child mortality from the diseases screened.
The Recommended Uniform Screening Panel (RUSP) is the list of disorders recommended for newborn testing. As of July 2024, the RUSP contains 38 “core conditions,” which are conditions that states specifically test for, and 26 “secondary conditions,” which are conditions that physicians may identify incidentally while screening for core conditions. New conditions can be added, and conditions can be moved between categories if the advisory board chooses to do so. These conditions include metabolic disorders such as phenylketonuria, endocrine disorders such as thyroid disorders, hemoglobin disorders such as sickle cell anemia, and others such as cystic fibrosis.
The National Human Genome Research Institute maintains a searchable database that details the different state genetic privacy laws, including their legislative status and a summary of their intended purpose. These laws have many goals, including expanding protections against genetic discrimination, protecting research subjects, regulating artificial intelligence, and more.
Reimagining the Enhancing Education Through Technology Program for the Modern Era
This memo proposes the modernization of the Enhancing Education Through Technology (E2T2) Program as part of the overdue Elementary and Secondary Education Act’s (ESEA) reauthorization. With the expiration of several programs that support technology-enabled teaching and learning—such as the Elementary and Secondary School Emergency Relief (ESSER) fund, Emergency Connectivity Fund (ECF), and the Affordable Connectivity Program (ACP)—and the increasing prevalence of digital tools in educational settings, there is a pressing need for dedicated aid to states and districts. A reimagined E2T2 can address the digital use, design, and access divides identified in the 2024 National Educational Technology Plan (NETP).
Challenge and Opportunity
The 2024 NETP, the U.S. Department of Education’s (ED) flagship educational technology (edtech) policy document, envisions a future where all students use digital tools actively to learn, all educators have support to design those classroom experiences, and all communities can readily access foundational connectivity, devices, and digital content. The original $1 billion E2T2, established under the No Child Left Behind Act, played a critical role in developing and implementing state and local plans that reflected this vision. For example, SETDA’s 2010 report examining all states’ investments found that the top E2T2 priorities were:
- Professional development (top priority in 34 states)
- Increasing achievement and digital literacy (top priority in 6 states)
- Increasing access to technology (top priority in 4 states)
However, the program lost funding in 2011 and was excluded from the 2015 Every Student Succeeds Act (ESSA). Since then, edtech has been subsumed under broader block grants, such as the Student Support and Academic Enrichment Program (Title IV-A) and Supporting Effective Instruction Program (Title II-A), resulting in a dilution of focus and resources. Furthermore, the end of the current Administration coincides with several challenges:
- Federal Program Expirations: ED’s ESSER fund, which supported technology-enabled teaching and learning, was fully obligated by September 2024. The Federal Communications Commission’s (FCC) ECF, which provided $7.1 billion to purchase equipment, sunsetted in June 2024. Finally, the FCC’s ACP connected 23 million households to broadband at a discounted rate. Although many ACP recipients used the program to access schoolwork, the FCC exhausted its $14 billion in May 2024. A new program is necessary to sustain the significant gains made through these programs.
- Unprepared for Emerging Technologies: As innovative tools like generative artificial intelligence (AI) make their way into educational environments, there is an increasing need to support states and districts by offering guidance and professional learning. While half of students aged 14-22 report using generative AI, including for schoolwork, 70% of educators have not yet received training on how to use AI effectively and responsibly.
- Urgency for Digital Citizenship: Recent actions by the Surgeon General to recommend warning labels on social media, as well as bans on cell phones in schools approved by several states and local school boards, call for additional capacity at the state and local levels to support digital citizenship education.
- Educator Attrition: Due to increased pressures faced during the pandemic, educator attrition rates have increased. Although many policy solutions can help counter this issue, research suggests that educators who do not feel supported in their roles are more likely to leave the profession. With the average district now accessing nearly 3,000 different technology tools in a given school year, educators are more likely to feel lost in selecting and deploying the most appropriate options. States and districts require additional capacity to help educators navigate vast quantities of digital tools, thereby bolstering a feeling of professional support.
Plan of Action
ESSA will be nine years old in December 2024, and the legislation included authorization levels for many programs only up until fiscal year 2020. The 119th Congress has an opportunity to examine the legislation and authorize new programs that respond to current challenges.
A reimagined E2T2, authorized at a minimum of $1.8 billion, can be provided to states and districts through the ED’s Office of Elementary and Secondary Education (OESE), which has experience in administering large national programs. A 1.5% national activities set-aside, reserved by OESE and the Office of Planning, Evaluation, and Policy Development (OPEPD), can offer means for evaluating the impact of the program, as well as providing technical assistance through convenings and federal guidance on impactful investment strategies.
Similar to the original E2T2, state education agencies should receive their share of funds via Title I formula upon submission of a long-range statewide edtech plan informed through adequate community input (e.g., see the U.S. Department of Commerce’s guidance on soliciting public comments and engaging community organizations). States should be permitted to reserve a maximum of 5% of funds received to carry out various coordination activities, including the establishment of a dedicated edtech office that reports to the chief state school officer and is responsible for governing program implementation. The remainder of the funds should be subgranted through a mix of formula and competitive grants to local educational agencies and consortia of eligible entities (e.g., districts, nonprofits, higher education institutions, community anchor institutions).
Allowable uses should include activities to close the three digital divides articulated in the 2024 NETP. For example, the reimagined E2T2 can support the current national AI strategy by allowing funds to be invested toward closing the “digital use divide,” providing opportunities for students to build AI literacy skills and use AI tools to examine and solve community problems. Funds could also be used to close the “digital design divide” by providing educators with ongoing professional development and reinforcing their abilities to align instruction with the Universal Design for Learning principles. Finally, funds could be used to close the “digital access divide” by allowing schools to procure accessible technology solutions, support students’ universal broadband access, or establish a state or local cabinet-level edtech director position.
In 2025, federal policymakers have an opportunity to begin critical discussions around the E2T2 modernization by taking specific action steps:
- The Senate Health, Education, Labor, and Pensions Committee and the House Education and Workforce Committee can introduce legislation to reauthorize ESEA, as well as seek input from educators and education leaders on new program considerations.
- The White House Domestic Policy Council and Office of Management and Budget can advocate for the reimagined E2T2 in the president’s annual budget request.
- The Secretary of Education, alongside ED’s Assistant Secretary for Planning, Evaluation, and Policy Development and Assistant Secretary for Legislation and Congressional Affairs, can engage the public and legislators to build support for the reimagined E2T2.
Conclusion
The reimagined E2T2 represents a critical opportunity to address many pressing challenges in K-12 education while preparing students for the future. As we approach the reauthorization of ESEA, as well as consider policy solutions to fully harness the promises of emerging technologies like AI, providing systems with dedicated support for closing the three digital divides can significantly enhance the quality and equity of education across the United States.
Democratizing Hiring: A Public Jobs Board for A Fairer, More Transparent Political Appointee Hiring Process
Current hiring processes for political appointees are opaque and problematic; job openings are essentially closed off except to those in the right networks. To democratize hiring, the next administration should develop a public jobs board for non-Senate-confirmed political appointments, which includes a list of open roles and job descriptions. By serving as a one-stop shop for those interested in serving in an administration, an open jobs board would bring more skilled candidates into the administration, diversify the appointee workforce, expedite the hiring process, and improve government transparency.
Challenge and Opportunity
Hiring for federal political appointee positions is a broken process. Even though political appointees steer some of the federal government’s most essential functions, the way these individuals are hired lacks the rigor and transparency expected in most other fields.
Political appointment hiring processes are opaque, favoring privileged candidates already in policy networks. There is currently no standardized hiring mechanism for filling political appointee roles, even though new administrations must fill thousands of lower-level appointee positions. Openings are often shared only through word-of-mouth or internal networks, meaning that many strong candidates with relevant domain expertise may never be aware of available opportunities to work in an administration. Though the Plum Book (an annually updated list of political appointees) exists, it does not list vacancies, meaning outside candidates must still have insider information on who is hiring.
These closed hiring processes are deeply problematic because they lead to a non-diverse pool of applicants. For example, current networking-based processes benefit graduates of elite universities, and similar networking-based employment processes such as employee referral programs tend to benefit White men more than any other demographic group. We have experienced this opaque process firsthand at the Aspen Tech Policy Hub; though we have trained hundreds of science and technology fellows who are interested in serving as appointees, we are unaware of any that obtained political appointment roles by means other than networking.
Appointee positions often do not include formal job descriptions, making it difficult for outside candidates to identify roles that are a good fit. Most political appointee jobs do not include a written, formalized job description—a standard best practice across every other sector. A lack of job descriptions makes it almost impossible for outside candidates utilizing the Plum Book to understand what a position entails or whether it would be a good fit. Candidates that are being recruited typically learn more about position responsibilities through direct conversations with hiring managers, which again favors candidates who have direct connections to the hiring team.
Hiring processes are inefficient for hiring staff. The current approach is not only problematic for candidates; it is also inefficient for hiring staff. Through the current process, the Presidential Personnel Office (PPO) or other hiring staff must sift through tens of thousands of resumes submitted through online resume banks (e.g., the Biden administration’s “Join Us” form) that are not tailored to specific jobs. They may also end up directly reaching out to candidates who are not actually interested in specific positions, or who lack required specialized skills.
Given these challenges, there is significant opportunity to reform the political appointment hiring process to benefit both applicants and hiring officials.
Plan of Action
The next administration’s Presidential Personnel Office (PPO) should pilot a public jobs board for Schedule C and non-career Senior Executive Service political appointment positions and expand the job board to all non-Senate-confirmed appointments if the pilot is successful. This public jobs board should eventually provide a list of currently open vacancies, a brief description for each currently open vacancy that includes a job description and job requirements, and a process for applying to that position.
Having a more transparent and open jobs board with job descriptions would have multiple benefits. It would:
- Bring in more diverse applicants and strengthen the appointee workforce by broadening hiring pools;
- Require hiring managers to write out job descriptions in advance, allowing outside candidates to better understand job opportunities and hiring managers to pinpoint qualifications they are looking for;
- Expedite the hiring process since hiring managers will now have a list of qualified applicants for each position; and
- Improve government transparency and make critical public sector positions more accessible.
Additionally, an open jobs board will allow administration officials to collect key data on applicant background and use these data to improve recruitment going forward. For example, an open application process would allow administration officials to collect aggregate data on education credentials, demographics, and work experience, and modify processes to improve diversity as needed. Having an updated, open list of positions will also allow PPO to refer strong candidates to other open roles that may be a fit, as current processes make it difficult for administration officials or hiring managers to know what other open positions exist.
Implementing this jobs board will require two phases: (1) an initial phase where the transition team and PPO modify their current "Join Us" form to list 50-100 key initial hires the administration will need to make; and (2) a secondary phase where PPO builds a more comprehensive jobs board, launched in late 2025, that includes all open roles going forward.
Phase 1. By early 2025, the transition team (or General Services Administration, in its transition support capacity) should identify 50-100 key Schedule C or non-career Senior Executive Service hires they think the PPO will need to fill early in the administration, and launch a revised resume bank to collect applicants for these positions. The transition team should prioritize roles that a) are urgent needs for the new administration, b) require specialized skills not commonly found among campaign and transition staff (for instance, technical or scientific knowledge), and c) have no clear candidate already identified. The transition team should then revise the current administration's "Join Us" form to include this list of 50-100 soon-to-be-vacant job roles, provide a 2-3 sentence description of each job's responsibilities, and allow outside candidates to explicitly note interest in these positions. This should be a relatively light lift, given that the current "Join Us" form is fairly simple to modify.
Phase 2. Early in the administration, PPO should build a larger, more comprehensive jobs board, aiming to go live in late 2025, that includes all open Schedule C and non-career Senior Executive Service (SES) positions. Upon launch, this jobs board should include open jobs for which no candidate has been identified, plus any new Schedule C and non-career SES appointments that open up going forward. As described in further detail in the FAQ section, every job listed should include a brief description of the position's responsibilities and qualifications, along with additional questions on political affiliation and demographics.
During this second phase, the PPO and the Office of Personnel Management (OPM) should identify and track key metrics to determine whether the jobs board should be expanded to cover all non-Senate-confirmed appointments. For example, PPO and OPM could compare the diversity of applicants, diversity of hires, number of qualified candidates who applied for a position, time-to-hire, and number of vacant positions pre- and post-implementation of the jobs board.
If the jobs board improves key metrics, PPO and OPM should expand the jobs board to all non-Senate confirmed appointments. This would include non-Senate confirmed Senior Executive Service appointee positions.
Conclusion
An open jobs board for political appointee positions is essential to building a stronger and more diverse appointee workforce and to improving government transparency. An open jobs board will strengthen and diversify the appointee workforce, require hiring managers to write down job responsibilities and qualifications explicitly, reduce hiring time, and ultimately result in more successful hires.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
An open jobs board will attract many applicants, perhaps more than the PPO's currently small team can handle. If the PPO is overwhelmed by the number of job applicants, it can either directly forward resumes to hiring managers, thereby reducing the burden on PPO itself, or consider hiring a vetted third party to sort through submitted resumes and provide a smaller, more focused list of applicants for PPO to consider.
PPO can also include questions to enable candidates to be sorted by political experience and political alignment, so as (for instance) to favor those who worked on the president’s campaign.
Both phases of our recommendation would be a relatively light lift, and most costs would come from staff time. Phase 1 costs will solely include staff time; we suspect it will take ⅓ to ½ of an FTE’s time over 3 months to source the 50-100 high-priority jobs, write the job descriptions, and incorporate them into the existing “Join Us” form.
Phase 2 costs will include staff time and cost of deploying and maintaining the platform. We suspect it will take 4-5 months to build and test the platform, and to source the job descriptions. The cost of maintaining the Phase 2 platform will ultimately depend on the platform chosen. Ideally, this jobs board would be hosted on an easy-to-use platform like Google, Lever, or Greenhouse that can securely hold applicant data. If that proves too difficult, it could also be built on top of the existing USAJobs site.
PPO may be able to use existing government resources to help fund this effort. The PPO may be able to draw on personnel from the General Services Administration, in their transition support capacity, to assist with sourcing and writing job descriptions. PPO can also work with in-house technology teams at the U.S. Digital Service to build the platform, especially given their considerable expertise in reforming hiring for federal technology positions.
Building a Comprehensive NEPA Database to Facilitate Innovation
The Inflation Reduction Act and the Infrastructure Investment and Jobs Act are set to drive $300 billion in energy infrastructure investment by 2030. Without permitting reform, lengthy review processes threaten to make these federal investments one-third less effective at reducing greenhouse gas emissions. That's why Congress has been grappling with reforming the National Environmental Policy Act (NEPA) for almost two years. Yet, despite the urgency to reform the law, there is a striking lack of available data on how NEPA actually works. Under these conditions, evidence-based policymaking is simply impossible. With access to the right data and with thoughtful teaming, the next administration has a golden opportunity to create a roadmap for permitting software that maximizes the impact of federal investments.
Challenge and Opportunity
NEPA is a cornerstone of U.S. environmental law, requiring nearly all federally funded projects—like bridges, wildfire risk-reduction treatments, and wind farms—to undergo an environmental review. Despite its widespread impact, NEPA’s costs and benefits remain poorly understood. Although academics and the Council on Environmental Quality (CEQ) have conducted piecemeal studies using limited data, even the most basic data points, like the average duration of a NEPA analysis, remain elusive. Even the Government Accountability Office (GAO), when tasked with evaluating NEPA’s effectiveness in 2014, was unable to determine how many NEPA reviews are conducted annually, resulting in a report aptly titled “National Environmental Policy Act: Little Information Exists on NEPA Analyses.”
The lack of comprehensive data is not due to a lack of effort or awareness. In 2021, researchers at the University of Arizona launched NEPAccess, an AI-driven program aimed at aggregating publicly available NEPA data. While successful at scraping what data was accessible, the program could not create a comprehensive database because many NEPA documents, namely Environmental Assessments (EAs) and Categorical Exclusions (CEs), are either not publicly available or too hard to access. The Pacific Northwest National Laboratory (PNNL) also built a language model to analyze NEPA documents, but confined its analysis to the least common but most complex category of environmental reviews: Environmental Impact Statements (EISs).
Fortunately, much of the data needed to populate a more comprehensive NEPA database does exist. Unfortunately, it's stored in a complex network of incompatible software systems, limiting both public access and interagency collaboration. Each agency responsible for conducting NEPA reviews operates its own unique NEPA software. Even the most advanced systems, SOPA (used by the Forest Service) and ePlanning (used by the Bureau of Land Management), do not automatically publish performance data.
Analyzing NEPA outcomes isn’t just an academic exercise; it’s an essential foundation for reform. Efforts to improve NEPA software have garnered bipartisan support from Congress. CEQ recently published a roadmap outlining important next steps to this end. In the report, CEQ explains that organized data would not only help guide development of better software but also foster broad efficiency in the NEPA process. In fact, CEQ even outlines the project components that would be most helpful to track (including unique ID numbers, level of review, document type, and project type).
Put simply, meshing this complex web of existing software systems into a tracking database would be nearly impossible (not to mention expensive). Luckily, advances in large language models, like the ones used by NEPAccess and PNNL, offer a simpler and more effective solution. With properly formatted files of all NEPA documents in one place, a small team of software engineers could harness PolicyAI's existing program to build a comprehensive analysis dashboard.
Plan of Action
The greatest obstacles to building an AI-powered tracking dashboard are accessing the NEPA documents themselves and organizing their contents to enable meaningful analysis. Although the administration could address the availability of these documents by compelling agencies to release them, inconsistencies in how they're written and stored would still pose a challenge. That means building a tracking dashboard will require open, ongoing collaboration between technologists and agencies.
- Assemble a strike team: The administration should form a cross-disciplinary team to facilitate collaboration. This team should include CEQ; the Permitting Council; the agencies responsible for conducting the greatest number of NEPA reviews, including the Forest Service, Bureau of Land Management, and the U.S. Army Corps of Engineers; technologists from 18F; and those behind the PolicyAI tool developed by PNNL. It should also decide where the software development team will be housed, likely either at CEQ or the Permitting Council.
- Establish submission guidelines: When handling enormous amounts of data, uniform formatting enables quick analysis. The strike team should assess how each agency receives and processes NEPA documents and create standardized submission guidelines, including file formats and where files should be sent.
- Mandate data submission: The administration should require all agencies to submit relevant NEPA data annually, adhering to the submission guidelines set by the strike team. This process should be streamlined to minimize the burden on agencies while maximizing the quality and completeness of the data; if possible, the software development team should pull data directly from the agency. Future modernization efforts should include building APIs to automate this process.
- Build the system: Using PolicyAI’s existing framework, the development team should create a language model to feed a publicly available, searchable database and dashboard that tracks vital metadata, including:
- The project components suggested in CEQ’s E-NEPA report, including unique ID numbers, level of review, document type, and project type
- Days spent producing an environmental review (if available; this information may need to be pulled from agency case management materials instead)
- Page count of each environmental review
- Lead and supporting agencies
- Project location (latitude/longitude and acres impacted, or GIS information if possible)
- Other laws enmeshed in the review, including the Endangered Species Act, the National Historic Preservation Act, and the National Forest Management Act
- When clearly stated in a NEPA document, cost of producing the review
- When clearly stated in a NEPA document, staff hours used to produce the review
- When clearly stated in a NEPA document, jobs and revenue created by the project
- When clearly stated in a NEPA document, carbon emissions mitigated by the project
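As a sketch of what the underlying data model might look like, the fields above could map to a record schema along these lines (a hypothetical illustration; the field names are ours, not CEQ's, and optional fields reflect data that appears only in some documents):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NepaReview:
    """One row in a hypothetical NEPA tracking database."""
    review_id: str                  # unique ID number
    level_of_review: str            # "CE", "EA", or "EIS"
    document_type: str
    project_type: str
    lead_agency: str
    supporting_agencies: list[str] = field(default_factory=list)
    days_to_complete: Optional[int] = None   # may come from agency case management data
    page_count: Optional[int] = None
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    acres_impacted: Optional[float] = None
    related_laws: list[str] = field(default_factory=list)  # e.g., ESA, NHPA, NFMA
    cost_usd: Optional[float] = None         # only when stated in the document
    staff_hours: Optional[int] = None        # only when stated in the document
    jobs_created: Optional[int] = None       # only when stated in the document
    co2_mitigated_tons: Optional[float] = None  # only when stated in the document
```

A schema of this kind would let the dashboard aggregate reviews by level, agency, or duration even when some fields cannot be extracted from a given document.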
Conclusion
The stakes are high. With billions of dollars in federal climate and infrastructure investments on the line, a sluggish and opaque permitting process threatens to undermine national efforts to cut emissions. By embracing cutting-edge technology and prioritizing transparency, the next administration can not only reshape our understanding of the NEPA process but bolster its efficiency too.
It's estimated that only 1% of NEPA analyses are Environmental Impact Statements (EISs), 5% are Environmental Assessments (EAs), and 94% are Categorical Exclusions (CEs). While EISs cover the most complex and contentious projects, analyzing EISs alone paints an extremely narrow, and potentially misleading, picture of the true scope and effectiveness of NEPA reviews.
The vast majority of projects undergo either an EA or are afforded a CE, making these categories far more representative of the typical environmental review process under NEPA. EAs and CEs often address smaller projects, like routine infrastructure improvements, which are critical to the nation’s broader environmental and economic goals. Ignoring these reviews means disregarding a significant portion of federal environmental decision-making; as a result, policymakers, agency staff, and the public are left with an incomplete view of NEPA’s efficiency and impact.
Using Home Energy Rebates to Support Market Transformation
Without market-shaping interventions, federal and state subsidies for energy-efficient products like heat pumps often lead to higher prices, leaving the overall market worse off when rebates end. This is a key challenge that must be addressed as the Department of Energy (DOE) and states implement the Inflation Reduction Act’s Home Electrification and Appliance Rebates (HEAR) program.
DOE should prioritize the development of evidence-based market-transformation strategies that states can implement with their HEAR funding. The DOE should use its existing allocation of administrative funds to create a central capability to (1) develop market-shaping toolkits and an evidence base on how state programs can improve value for money and achieve market transformation and (2) provide market-shaping program implementation assistance to states.
There are proven market-transformation strategies that can reduce costs and save consumers billions of dollars. DOE can look to the global public health sector for an example of what market-shaping interventions could do for heat pumps and other energy-efficient technologies. In that arena, the Clinton Health Access Initiative (CHAI) has shown how public funding can support market-based transformation, leading to sustainably lower drug and vaccine prices, new types of “all-inclusive” contracts, and improved product quality. Agreements negotiated by CHAI and the Bill and Melinda Gates Foundation have generated over $4 billion in savings for publicly financed health systems and improved healthcare for hundreds of millions of people.
Similar impact can be achieved in the market for heat pumps if DOE and states can supply information to empower consumers to purchase the most cost-effective products, offer higher rebates for those cost-effective products, and seek supplier discounts for heat pumps eligible for rebates.
Challenge and Opportunity
HEAR received $4.5 billion in appropriations from the Inflation Reduction Act and provides consumers with rebates to purchase and install high-efficiency electric appliances. Heat pumps, the primary eligible appliance, present a huge opportunity for lowering overall greenhouse gas emissions from heating and cooling, which makes up over 10% of global emissions. In the continental United States, studies have shown that heat pumps can reduce carbon emissions up to 93% compared to gas furnaces across their lifetime.
However, direct-to-consumer rebate programs have been shown to enable suppliers to increase prices unless these subsidies are used to reward innovation and reduce cost. If subsidies are disbursed and the program design is not aligned with a market-transformation strategy, the result will be a short-term boost in demand followed by a fall-off in consumer interest as prices increase and the rebates are no longer available. This is a problem because program funding for heat pump rebates will support only ~500,000 projects over the life of the program—but more than 50 million households will need to convert to heat pumps in order to decarbonize the sector.
HEAR aims to address this through Market Transformation Plans, which states are required to submit to DOE within a year after receiving the award. States will then need to obtain DOE approval before implementing them. We see several challenges with the current implementation of HEAR:
- Need for evidence: There is a lack of evidence and policy agreement on the best approaches for market transformation. The DOE provides a potpourri of areas for action, but no evidence of cost-effectiveness. Thus, there is no rational basis for states to allocate funding across the 10 recommended areas for action. There are no measurable goals for market transformation.
- Redundancy: It is wasteful and redundant to have every state program allocate administrative expenses to design market-transformation strategies incorporating some or all of the 10 recommended areas for action. There is nothing unique to Georgia, Iowa, or Vermont in creating a tool to better estimate energy savings. A best-in-class software tool developed by DOE or one of the states could be adapted for use in each state. Similarly, if a state develops insights into lower-cost ways to install heat pumps, these insights will be valuable in many other state programs. The best tools should be a public good made known to every state program.
Despite these challenges, DOE has a clear opportunity to increase the impact of HEAR rebates by providing program design support to states for market-transformation goals. To ensure a competitive market and better value for money, state programs need guidance on how to overcome barriers created by information asymmetry: HVAC contractors understand the technical and cost/benefit aspects of heat pumps far better than consumers do. Consumers cannot work with contractors to select a heat pump solution that represents the best value for money if they do not understand the technical performance of products and how operating costs are affected by the Seasonal Energy Efficiency Ratio (SEER), coefficient of performance, and utility rates. If consumers are not well-informed, market outcomes will not be efficient. Currently, consumers do not have easy access to critical information such as the tradeoff between the higher upfront cost of a high-SEER unit and the savings on monthly utility bills.
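To illustrate the kind of tradeoff consumers currently cannot easily evaluate, a back-of-the-envelope calculation might look like the following (all inputs are hypothetical; real estimates would use local climate data and utility rates):

```python
def annual_cooling_cost(btu_load_per_year: float, seer: float, rate_per_kwh: float) -> float:
    """Annual electricity cost to meet a seasonal cooling load.

    SEER is BTU of cooling delivered per watt-hour of electricity,
    so kWh consumed = BTU / (SEER * 1000).
    """
    kwh_used = btu_load_per_year / (seer * 1000)
    return kwh_used * rate_per_kwh

# Hypothetical inputs: 24 million BTU/year cooling load, $0.15/kWh electricity.
load = 24_000_000
baseline = annual_cooling_cost(load, seer=14, rate_per_kwh=0.15)  # ~ $257/yr
upgrade = annual_cooling_cost(load, seer=18, rate_per_kwh=0.15)   # ~ $200/yr
savings = baseline - upgrade                                      # ~ $57/yr
```

Publishing benchmark versions of exactly this calculation, with local rates filled in, is the sort of consumer-facing information a central team could provide.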
Overcoming information asymmetry will also help lower soft costs, which is critical to lowering the overall cost of heat pumps. Based on studies conducted by New York State, the Solar Energy Industries Association, and DOE, soft costs run over 60% of project costs in some cases and have increased over the past 10 years.
There is still time to act, as thus far only a few states have received approval to begin issuing rebates and state market-transformation plans are still in the early stages of development.
Plan of Action
Recommendation 1. Establish a central market transformation team to provide resources and technical assistance to states.
To limit cost and complexity at the state level for designing and staffing market-transformation initiatives, the DOE should set up central resources and capabilities. This could either be done by a dedicated team within the Office of State and Community Energy Programs or through a national lab. Funding would come from the 3% of program funds that DOE is allowed to use for administration and technical assistance.
This team would:
- Collect, centralize, and publish heat pump equipment and installation cost data to increase transparency and consumer awareness of available options.
- Develop practical toolkits and an evidence base on how to achieve market transformation most cost-effectively.
- Provide market-shaping program design assistance to states to create and implement market transformation programs.
Data collection, analysis, and consistent reporting are at the heart of what this central team could provide states. The DOE data and tools requirements guide already asks states to provide information on the invoice, equipment and materials, and installation costs for each rebate transaction. It is critical that the DOE and state programs coordinate on how to collect and structure this data in order to benefit consumers across all state programs.
A central team could provide resources and technical assistance to State Energy Offices (SEOs) on how to implement market-shaping strategies in a phased approach.
Phase 1. Create greater price transparency and set benchmarks for pricing on the most common products supported by rebates.
The central market-transformation team should provide technical support to states on how to develop benchmarking data on prices available to consumers for the most common product offerings. Consumers should be able to evaluate pricing for heat pumps like they do for major purchases such as cars, travel, or higher education. State programs could facilitate these comparisons by having rebate-eligible contractors and suppliers provide illustrative bids for a set of 5–10 common heat pump installation scenarios, for example, installing a ductless mini-split in a three-bedroom home.
States should also require contractors to provide hourly rates for different types of labor, since installation costs are often ~70% of total project costs. Contractors should only be designated as recommended or preferred service providers (with access to HEAR rebates) if they are willing to share cost data.
In addition, the central market-transformation team could facilitate information-sharing and data aggregation across states to limit confusion and duplication of data. This will increase price transparency and limit the work required at the state level to find price information and integrate with product technical performance data.
Phase 2. Encourage price and service-level competition among suppliers by providing consumers with information on how to judge value for money.
A second way to improve market outcomes is by promoting competition. Price transparency supports this goal, but to achieve market transformation, programs need to go further to help consumers understand which products, specific to their circumstances, offer the best value for money.
In the case of a heat pump installation, this means taking account of fuel source, energy prices, house condition, and other factors that drive the overall value-for-money equation when pursuing improved energy efficiency. Again, information asymmetry is at play. Many energy-efficiency consultants and HVAC contractors offer to advise on these topics but have an inherent bias toward promoting their own products and services. There are no easily available public sources of reliable benchmark price/performance data for ducted and ductless heat pumps for homes ranging from 1,500 to 2,700 square feet, which would cover 75% of the single-family homes in the United States.
In contrast, the commercial building sector benefits from very detailed cost information published on virtually every type of building material and specialty trade procedure. Data from sources such as RSMeans provides pricing and unit cost information for ductwork, electrical wiring, and mean hourly wage rates for HVAC technicians by region. Builders of newly constructed single-family homes use similar systems to estimate and manage the costs of every aspect of the new construction process. But a homeowner seeking to retrofit a heat pump into an existing structure has none of this information. Since virtually all rebates are likely to be retrofit installations, states and the DOE have a unique interest in making this market more competitive by developing and publishing cost/performance benchmarking data.
State programs have considerable leverage that can be used to obtain the information needed from suppliers and installers. The central market-transformation team should use that information to create a tool that provides states and consumers with estimates of potential bill savings from installation of heat pumps in different regions and under different utility rates. This information would be very valuable to low- and middle-income (LMI) households, who are to receive most of the funding under HEAR.
Phase 3. Use the rebate program to lower costs and promote best-value products by negotiating product and service-level agreements with suppliers and contractors and awarding a higher level of rebate to installations that represent best value for money.
By subsidizing and consolidating demand, SEOs will have significant bargaining power to achieve fair prices for consumers.
First, by leveraging relationships with public and private sector stakeholders, SEOs can negotiate agreements with best-value contractors, offering guaranteed minimum volumes in return for discounted pricing and/or longer warranty periods for participating consumers. This is especially important for LMI households, who have limited home improvement budgets and experience disproportionately higher energy burdens, factors that help explain the limited uptake of heat pumps among these households. In return, contractors gain access to a guaranteed number of additional projects that can offset the seasonal nature of the business.
Second, as states design the formulas used to distribute rebates, they should be encouraged to create systems that allocate a higher proportion of rebates to projects quoted at or below benchmark costs, and a smaller proportion, or none at all, to projects quoted above the benchmark. This will incentivize contractors to offer better value for money, as most projects will not proceed unless they receive a substantial rebate. States should also adopt a process similar to that of New York and Wisconsin in creating a list of approved contractors that adhere to "reasonable price" thresholds.
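One hypothetical way to encode such a tiered formula (the thresholds and percentages here are illustrative, not DOE or state policy):

```python
def rebate_amount(quoted_price: float, benchmark: float, base_rebate: float) -> float:
    """Illustrative tiered rebate schedule.

    Full rebate for quotes at or below the benchmark, a reduced rebate
    for quotes up to 20% above it, and no rebate beyond that.
    """
    if quoted_price <= benchmark:
        return base_rebate
    if quoted_price <= benchmark * 1.20:
        return base_rebate * 0.5
    return 0.0
```

Under a schedule like this, a contractor quoting $9,000 against a $10,000 benchmark would secure the full rebate, while a $13,000 quote would receive none, steering demand toward better-value bids.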
Recommendation 2. For future energy rebate programs, Congress and DOE can make market transformation more central to program design.
In future clean energy legislation, Congress should direct DOE to include the principles recommended above into the design of energy rebate programs, whether implemented by DOE or states. Ideally, that would come with either greater funding for administration and technical assistance or dedicated funding for market-transformation activities in addition to the rebate program funding.
For future rebate programs, DOE could take market transformation a step further by establishing benchmarking data for “fair and reasonable” prices from the beginning and requiring that, as part of their applications, states must have service-level agreements in place to ensure that only contractors that are at or below ceiling prices are awarded rebates. Establishing this at the federal level will ensure consistency and adoption at the state level.
Conclusion
The DOE should prioritize funding evidence-based market transformation strategies to increase the return on investment for rebate programs. Learning from U.S.-funded programs for global public health, a similar approach can be applied to the markets for energy-efficient appliances that are supported under the HEAR program. Market shaping can tip the balance towards more cost-effective and better-value products and prevent rebates from driving up prices. Successful market shaping will lead to sustained uptake of energy-efficient appliances by households across the country.
There is compelling evidence that federal and state subsidies for energy-efficient products can lead to price inflation, particularly in the clean energy space. The federal government has offered tax credits in the residential solar space for many years. While there has been a 64% reduction in the ex-factory photovoltaic module price for residential panels, the total installed cost per watt of residential systems has increased. The soft costs, including installation, have increased over the same period and are now ~65% or more of total project costs.
In 2021, a National Bureau of Economic Research working paper linked consumer subsidies with firms charging higher prices in the case of Chinese cell phones. The researchers found that introducing competition for eligibility, through techniques such as commitment to price ceilings, mitigated price increases and in some cases even reduced prices, creating more consumer surplus. This research, along with the observed price increases following solar tax credits, shows the risks of government subsidies without market-shaping interventions and their likely detrimental long-term impacts.
CHAI has negotiated over 140 agreements for health commodities supplied to low- and middle-income countries (LMICs) with over 50 different companies. These market-shaping agreements have generated $4 billion in savings for health systems and touched millions of lives.
For example, CHAI collaborated with Duke University and Bristol Myers Squibb to combat hepatitis C, which affects 71 million people, 80% of whom live in LMICs, mostly in Southeast Asia and Africa [see footnote]. The approval in 2013 of two new antiviral drugs transformed treatment in high-income countries, but the drugs were not marketed or affordable in LMICs. Through its partnerships and programming, CHAI was able to achieve initial pricing of $500 per treatment course for LMICs. Prices fell over the next six years to under $60 per treatment course, while the cost in the West remained at over $50,000 per treatment course. This was accomplished through ceiling price agreements and access programs with guaranteed volume considerations.
CHAI has also worked closely with the Bill and Melinda Gates Foundation to develop the novel market-shaping intervention called a volume guarantee (VG), where a drug or diagnostic test supplier agrees to a price discount in exchange for guaranteed volume (which will be backstopped by the guarantor if not achieved). Together, they negotiated a six-year fixed price VG with Bayer and Merck for contraceptive implants that reduced the price by 53% for 40 million units, making family planning more accessible for millions and generating $500 million in procurement savings.
Footnote: Hanafiah et al. Global epidemiology of hepatitis C virus infection: new estimates of age-specific antibody to HCV seroprevalence. J Hepatol. 2013;57(4):1333–1342; Gower E, Estes C, Blach S, et al. Global epidemiology and genotype distribution of the hepatitis C virus infection. J Hepatol. 2014;61(1 Suppl):S45–S57; World Health Organization. Global Hepatitis Report 2017 (work conducted by the London School of Hygiene and Tropical Medicine).
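The economics of the volume guarantee described above can be sketched with simple arithmetic. This is an illustrative model only, not CHAI's or the Gates Foundation's actual contract terms, and all numbers are hypothetical:

```python
def volume_guarantee_settlement(guaranteed_units, actual_units, discounted_price):
    """Illustrative settlement for a volume guarantee (VG).

    The supplier sells at a discounted price in exchange for a guaranteed
    volume; if buyers procure fewer units than guaranteed, the guarantor
    backstops the shortfall so the supplier's revenue is protected.
    """
    shortfall = max(0, guaranteed_units - actual_units)
    backstop_payment = shortfall * discounted_price
    supplier_revenue = actual_units * discounted_price + backstop_payment
    return supplier_revenue, backstop_payment

# Hypothetical numbers: 1,000,000 units guaranteed at a discounted $8/unit,
# but only 900,000 units are actually procured.
revenue, backstop = volume_guarantee_settlement(1_000_000, 900_000, 8.00)
```

The guarantor's exposure is capped at the shortfall times the discounted price, which is why a VG can unlock deep discounts without requiring the guarantor to purchase the full volume up front.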
Many states are in the early stages of setting up the program and have not yet released their implementation plans. However, New York and Wisconsin list on their websites which contractors in their approved networks are eligible to receive rebates. Once a household applies for the program, it is put in touch with a contractor from the approved state network, which it must use to access the rebate. Contractors are approved based on completion of training and other basic requirements, such as affirming that pricing will be “fair and reasonable.” Currently, there is no detail on specific price thresholds that suppliers must meet (as an indication of value for money) to qualify.
DOE’s Data and Tools Requirements document lays out the guidelines states must follow to receive federal funding for rebates. These include transaction-level data that must be reported to DOE monthly, covering the home’s specifications, installation costs, and equipment costs. Given that states already must collect this data from contractors for reporting, this proposal recommends that SEOs streamline data collection, standardize it across all participating states, and then publish summary data so consumers can get an accurate sense of the range of prices.
There will be natural variation between homes, but by collecting a sufficient sample size and overlaying efficiency metrics such as the Seasonal Energy Efficiency Ratio (SEER), the Heating Seasonal Performance Factor (HSPF), and the coefficient of performance (COP), states will be able to gauge value for money. Rewiring America and other nonprofits have software that can quickly make these calculations to help consumers understand the return on investment for higher-efficiency (and higher-cost) heat pumps given their location and current heating/cooling costs.
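The kind of return-on-investment calculation described above can be sketched in a few lines. This is a simplified model, not Rewiring America's actual software: it assumes annual heating cost scales inversely with the coefficient of performance, and all dollar figures and COP values are hypothetical:

```python
def simple_payback_years(incremental_cost, annual_heating_cost, old_cop, new_cop):
    """Rough payback estimate for a higher-efficiency heat pump.

    Assumes annual heating cost scales inversely with coefficient of
    performance (COP); real savings depend on climate, electricity
    rates, and household usage patterns.
    """
    new_annual_cost = annual_heating_cost * (old_cop / new_cop)
    annual_savings = annual_heating_cost - new_annual_cost
    return incremental_cost / annual_savings

# Hypothetical: $4,000 incremental cost for the higher-efficiency unit,
# a $2,000/yr heating bill, upgrading from COP 2.5 to COP 3.5 equipment.
years = simple_payback_years(4_000, 2_000, 2.5, 3.5)
```

Publishing standardized transaction data would let states plug real installed costs into exactly this kind of calculation, so consumers could compare quoted prices against the payback they can actually expect.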
In global public health markets, CHAI has promoted price transparency for drugs and diagnostic tests by publishing market surveys that include product technical specifications and links to product performance studies. We show the actual prices paid for similar products in different countries and by different procurement agencies. All this information has helped public health programs migrate to best-in-class products and improve value for money. States could do the same to empower consumers to choose best-in-class and best-value products and contractors.
Driving Product Model Development with the Technology Modernization Fund
The Technology Modernization Fund (TMF) currently funds multiyear technology projects to help agencies improve their service delivery. However, many agencies abdicate responsibility for project outcomes to vendors, lacking the internal leadership and project development teams necessary to apply a product model approach focused on user needs, starting small, learning what works, and making adjustments as needed.
To promote better outcomes, TMF could make three key changes to help agencies shift from simply purchasing static software to acquiring ongoing capabilities that can meet their long-term mission needs: (1) provide education and training to help agencies adopt the product model; (2) evaluate investments based on their use of effective product management and development practices; and (3) fund the staff necessary to deliver true modernization capacity.
Challenge and Opportunity
Technology modernization is a continual process of addressing unmet needs, not a one-time effort with a defined start and end. Too often, when agencies attempt to modernize, they purchase “static” software, treating it like any other commodity, such as computers or cars. But software is fundamentally different. It must continuously evolve to keep up with changing policies, security demands, and customer needs.
Presently, agencies tend to rely on available procurement, contracting, and project management staff to lead technology projects. However, it is not enough to focus on the art of getting things done (project management); it is also critically important to understand the art of deciding what to do (product management). A product manager is empowered to make real-time decisions on priorities and features, including deciding what not to do, to ensure the final product effectively meets user needs. Without this role, development teams typically march through a vast, undifferentiated, unprioritized list of requirements, which is how information technology (IT) projects turn into unwieldy failures.
By contrast, the product model fosters a continuous cycle of improvement, essential for effective technology modernization. It empowers a small initial team with the right skills to conduct discovery sprints, engage users from the outset and throughout the process, and continuously develop, improve, and deliver value. This approach is ultimately more cost effective, results in continuously updated and effective software, and better meets user needs.
However, transitioning to the product model is challenging. Agencies need more than just infrastructure and tools to support seamless deployment and continuous software updates – they also need the right people and training. A lean team of product managers, user researchers, and service designers who will shape the effort from the outset can have an enormous impact on reducing costs and improving the effectiveness of eventual vendor contracts. Program and agency leaders, who truly understand the policy and operational context, may also require training to serve effectively as “product owners.” In this role, they work closely with experienced product managers to craft and bring to life a compelling product vision.
These internal capacity investments are not expensive relative to the cost of traditional IT projects in government, but they are currently hard to make. Placing greater emphasis on building internal product management capacity will enable the government to more effectively tackle the root causes that lead to legacy systems becoming problematic in the first place. By developing this capacity, agencies can avoid future costly and ineffective “modernization” efforts.
Plan of Action
The General Services Administration’s Technology Modernization Fund plays a crucial role in helping government agencies transition from outdated legacy systems to modern, secure, and efficient technologies, strengthening the government’s ability to serve the public. However, changes to TMF’s strategy, policy, and practice could incentivize the broader adoption of product model approaches and make its investments more impactful.
The TMF should shift from investments in high-cost, static technologies that will not evolve to meet future needs towards supporting the development of product model capabilities within agencies. This requires a combination of skilled personnel, technology, and user-centered approaches. Success should be measured not just by direct savings in technology but by broader efficiencies, such as improvements in operational effectiveness, reductions in administrative burdens, and enhanced service delivery to users.
While successful investments may result in lower costs, the primary goal should be to deliver greater value by helping agencies better fulfill their missions. Ultimately, these changes will strengthen agency resilience, enabling them to adapt, scale, and respond more effectively to new challenges and conditions.
Recommendation 1. The Technology Modernization Board, responsible for evaluating proposals, should:
- Assess future investments based on the applicant’s demonstrated competencies and capacities in product ownership and management, as well as their commitment to developing these capabilities. This includes assessing proposed staffing models to ensure the right teams are in place.
- Expand assessment criteria for active and completed projects beyond cost savings, to include measurements of improved mission delivery, operational efficiencies, resilience, and adaptability.
Recommendation 2. The TMF Program Management Office, responsible for stewarding investments from start to finish, should:
- Educate and train agencies applying for funds on how to adopt and sustain the product model.
- Work with the General Services Administration’s 18F to incorporate TMF project successes and lessons learned into a continuously updated product model playbook for government agencies that includes guidance on the key roles and responsibilities needed to successfully own and manage products in government.
- Collaborate with the Office of Personnel Management (OPM) to ensure that agencies have efficient and expedited pathways for acquiring the necessary talent, utilizing appropriate assessments to identify and onboard skilled individuals.
Recommendation 3. Congress should:
- Encourage agencies to set up their own working capital funds under the authorities outlined in the TMF legislation.
- Explore the barriers to product model funding in the current budgeting and appropriations processes for the federal government as a whole, and develop proposals to make those processes fit for purpose.
- Direct OPM to reduce procedural barriers that hinder swift and effective hiring.
Conclusion
The TMF should leverage its mandate to shift agencies towards a capabilities-first mindset. Changing how the program educates, funds, and assesses agencies will build internal capacity and deliver continuous improvement. This approach will lead to better outcomes, both in the near and long terms, by empowering agencies to adapt and evolve their capabilities to meet future challenges effectively.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Congress established TMF in 2018 “to improve information technology, and to enhance cybersecurity across the federal government” through multiyear technology projects. Since then, more than $1 billion has been invested through the fund across dozens of federal agencies in four priority areas.
Introducing Certification of Technical Necessity for Resumption of Nuclear Explosive Testing
The United States currently observes a voluntary moratorium on explosive nuclear weapons testing. At the same time, the National Nuclear Security Administration (NNSA) is required by law to maintain the capability to conduct an underground nuclear explosive test at the Nevada National Security Site, if directed to do so by the U.S. president.
Restarting U.S. nuclear weapons testing could have deeply negative security implications for the United States unless it were an absolute technical or security necessity. A restart of U.S. nuclear testing for any reason could open the door for China, Russia, Pakistan, and India to do the same, and would make it even harder to condemn North Korea for its testing program. This would have significant security consequences for the United States as well as global environmental impacts.
The United States conducted over 1,000 nuclear weapons tests before its testing moratorium took effect in 1992. It did so with the world’s most advanced diagnostic and data-collection equipment, which enabled the United States to conduct advanced computer simulations after testing ended. Neither Russia nor China conducted as many tests, and far fewer of their tests collected advanced metrics, hampering those countries’ ability to match American simulation capabilities. Enabling Russia and China to resume testing could narrow the technical advantage the United States has held in testing data since the moratorium went into effect in 1992.
Aside from the security loss, nuclear testing would also have long-lasting radiological effects at the test site itself, including radiation contamination in the soil and groundwater, and the chance of venting into the atmosphere. Despite these downsides, a future president has the legal authority—for political or other reasons—to order a resumption of nuclear testing. Ensuring any such decision is more democratic and subject to a broader system of political accountability could be achieved by creating a more integrated approval process, based on scientific or security needs. To this end, Congress should pass legislation requiring the NNSA administrator to certify that an explosive nuclear test is technically necessary to rectify an existing problem or doubt in U.S. nuclear surety before a test can be conducted.
Challenges and Opportunities
The United States is party to the 1963 Limited Test Ban Treaty, which prohibits atmospheric tests, and the 1974 Threshold Test Ban Treaty, which limits underground tests to explosive yields of no more than 150 kilotons. In 1992, the United States also established a legal moratorium on nuclear testing through the Hatfield-Exon-Mitchell Amendment, passed during the George H.W. Bush Administration. After extending this moratorium in 1993, the United States, Russia, and China signed the Comprehensive Nuclear Test Ban Treaty (CTBT) in 1996, which prohibits nuclear explosions. However, not all of the 44 Annex 2 states have ratified the CTBT, which prevents it from entering into force.
Since halting nuclear explosive tests in 1992, the United States has benefited from a comparative advantage over other nuclear-armed states, given its advanced simulation and computing technologies, coupled with extensive data collected from conducting over 1,000 explosive nuclear tests over nearly five decades. The NNSA’s Stockpile Stewardship Program uses computer simulations to combine new scientific research with data from past nuclear explosive tests to assess the reliability, safety, and security of the U.S. stockpile without returning to nuclear explosive testing. Congress has mandated that the NNSA provide a yearly report to the Nuclear Weapons Council, which reports to the president on the reliability of the nuclear weapons stockpile. The NNSA also maintains the capability to test nuclear weapons at the Nevada Test Site, as directed by President Clinton in Presidential Decision Directive 15 (PDD-15). National Security Memorandum 7 requires the NNSA to have the capability to conduct an underground explosive test with limited diagnostics within 36 months, but the NNSA has asserted in its Stockpile Stewardship and Management Plan that domestic and international laws and regulations could slow this timeline. A 2011 report to Congress from the Department of Energy stated that a small test conducted for political reasons could take only 6–10 months.
For the past 27 years, the NNSA administrator and the three directors of the national laboratories have annually certified—following a lengthy assessment process—that “there is no technical reason to conduct nuclear explosive testing.” Now, some figures, including former President Trump’s National Security Advisor, have called for a resumption of U.S. nuclear testing for political reasons. Specifically, testing advocates suggest—despite a lack of technical justification—that a return to testing is necessary in order to maintain the reliability of the U.S. nuclear stockpile and to intimidate China and other adversaries at the bargaining table.
A 2003 study by Sandia National Laboratories found that conducting an underground nuclear test would cost between $76 million and $84 million in then-year dollars, approximately $132 million to $146 million today. In addition to financial cost, explosive nuclear testing could also be costly to both humans and the environment even if conducted underground. For example, at least 32 underground tests performed at the Nevada Test Site were found to have released considerable quantities of radionuclides into the atmosphere through venting. Underground testing can also lead to contamination of land and groundwater. One of the most significant impacts of nuclear testing in the United States is the disproportionately high rate of thyroid cancer in Nevada and surrounding states due to radioactive contamination of the environment.
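The memo's inflation adjustment of the Sandia estimate can be reproduced with a one-line calculation. The cumulative inflation factor below is an assumed approximation for 2003 to the present, chosen to match the $132–$146 million range quoted above, not an official figure:

```python
# Illustrative then-year to current-dollar conversion for the 2003 Sandia
# cost estimate. INFLATION_FACTOR (~1.73) is an assumed approximation of
# cumulative CPI growth from 2003 to today, not an official statistic.
INFLATION_FACTOR = 1.73

low_2003, high_2003 = 76e6, 84e6          # Sandia's then-year range
low_today = low_2003 * INFLATION_FACTOR   # ~ $131M
high_today = high_2003 * INFLATION_FACTOR # ~ $145M
```

Any serious cost analysis would substitute the official CPI (or a defense-specific deflator) for the assumed factor, but the arithmetic structure is the same.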
In addition to health and environmental concerns, the resumption of nuclear tests in the United States would likely trigger nuclear testing by other states—all of which would have comparatively more to gain and learn from testing. When the CTBT was signed, the United States had already conducted far more nuclear tests than China or Russia with better technology to collect data, including fiber optic cables and supercomputers. A return to nuclear testing would also weaken international norms on nonproliferation and, rather than coerce adversaries into a preferred course of action, likely instigate more aggressive behavior and heightened tensions in response.
Plan of Action
In order to ensure that explosive nuclear testing, if resumed, is done for technical rather than political reasons, Congress should amend existing legislation to implement checks and balances on the president’s ability to order such a resumption. Per section 2530 of title 50 of the United States Code, “No underground test of nuclear weapons may be conducted by the United States after September 30, 1996, unless a foreign state conducts a nuclear test after this date, at which time the prohibition on United States nuclear testing is lifted.” Congress should amend this legislation to stipulate that, before any nuclear test is conducted, the NNSA administrator must first certify that the objectives of the test cannot be achieved through simulation and are important enough to warrant an end to the moratorium. A new certification should be required for every individual test, and the amendment should require that the certification be provided as a publicly available, unclassified report to Congress, in addition to a classified report. In the absence of such an amendment, the president should issue a Presidential Decision Directive requiring the NNSA administrator to certify, in a public hearing under oath, that the same results cannot be achieved through scientific simulation before any nuclear test is conducted.
Conclusion
The United States should continue its voluntary moratorium on all types of explosive nuclear weapons tests and implement further checks on the president’s ability to call for a resumption of nuclear testing.