Transforming the Carceral Experience: Leveraging Technology for Rehabilitation

Despite a $182 billion annual cost, the U.S. correctional system perpetuates itself: At least 95% of all state prisoners will be released from prison at some point, yet more than 50% of them reoffend within three years. 

A key driver of high recidivism is systemic neglect of the carceral experience. While much attention is given to post-release interventions, rehabilitation inside correctional facilities is largely invisible to the public. As a result, approximately 2 million incarcerated people are locked in a “time capsule”—the world passes them by as they serve their sentences. This is a missed opportunity, as simple interventions like access to educational resources and sustained family contact during incarceration can cut recidivism by up to 56%. Reduced recidivism translates into a more robust workforce, safer communities, and higher political participation. The new administration should harness the current bipartisan interest in criminal justice reform, audit the condition and availability of rehabilitative resources in prisons and jails, invest in digital and technology infrastructure, and sustainably end mass incarceration by building meaningful digital citizenship behind bars. 

Challenge and Opportunity

In the post-COVID-19 world, robust and reliable technology and digital infrastructure are prerequisites for any program and resource delivery. However, the vast majority of U.S. correctional facilities still lack adequate technology infrastructure, with cascading effects on the availability of in-prison programs, utilization of digital resources, and incarcerated people’s transition to the free world. 

As many other institutions quickly embrace new technology, prisons lag behind. In Massachusetts, prisons struggle to provide even basic rehabilitative, educational, and vocational training programs due to a shortage of hardware devices, such as tablets and Chromebooks, and insufficient staffing. Similarly, in Florida, internet access is constrained by legislation, a problem exacerbated by a lack of funding. Many prisons are forced to limit or entirely cancel programs when in-person visits are inaccessible, due to either COVID-19 restrictions or simply insufficient transportation options for resource providers. Consequently, only 0.5% of incarcerated individuals are enrolled in educational courses. The situation is equally dire in juvenile detention centers from California to Louisiana, where poor access to educational opportunities contributes to low graduation rates, severely limiting future employment prospects for at-risk youths.

Despite these systemic challenges, there is a strong, bipartisan recognition of the need to improve conditions within the carceral system—and therefore a unique opportunity for reform. 

The Federal Communications Commission (FCC) has passed its most comprehensive regulations to date on incarcerated people’s communication services, setting rate caps for various means of virtual communication. Electronic devices, such as tablets and Chromebooks, are gradually being accepted in correctional facilities, carrying educational resources and entertainment. Foundationally, federal investments in broadband and digital equity present a generational opportunity for correctional facilities and incarcerated people. These investments will provide a baseline assessment of network conditions and the digital landscape in prisons, and the lessons learned can lay the foundation for incarcerated people to enter the digital age prepared, ready to contribute to their communities from the day they return home.

This is just the beginning. 

Plan of Action

Recommendation 1. Invest in technology infrastructure inside correctional facilities.

A significant investment in technology infrastructure within correctional facilities is the prerequisite to transforming corrections. 

The Infrastructure Investment and Jobs Act (IIJA), through the Broadband Equity, Access, and Deployment (BEAD) and Digital Equity (DE) programs, sets a good precedent. BEAD and DE funding enable digital infrastructure assessments and improvements inside correctional facilities. These are critical for delivering educational programs, maintaining family connections, and facilitating legal and medical communications. However, only a few corrections systems are able to utilize the funding, as BEAD and DE do not have a specific focus on improving the carceral system, and states tend to prioritize other vulnerable populations (e.g., rural, aging, and veteran populations) over the incarcerated. Because currently incarcerated individuals are difficult to reach, they are routinely overlooked in the planning processes that govern funding distribution across the country. 

The new administration should recognize the urgent need to modernize digital infrastructure behind bars and allocate new and dedicated federal funding sources specifically for correctional institutions. The administration can ensure the implementation of best practices through grant guidelines. For example, it could stipulate that prior to accessing funding, states have to conduct a comprehensive network assessment, including speed and capacity tests, a security risk analysis, and a thorough audit of existing equipment and wiring. Further, it could mandate that all new networks built or consolidated using federal funding be vendor-neutral, ensuring robust competition among service providers down the road. 

Recommendation 2. Incentivize mission-driven technology solutions.

Expanding mandatory access to social benefits for incarcerated individuals will incentivize mission-driven technology innovation and adoption in this space.

Best practices for doing so already exist at both the federal and state levels. For example, the Second Chance Pell program restored educational opportunities for incarcerated individuals and inspired the emergence of mission-driven educational technologies. Another critical federal action was the Consolidated Appropriations Act of 2023 (specifically, Section 5121), which mandated Medicaid enrollment and assessment for juveniles, thereby expanding demand for health and telehealth solutions in correctional facilities. 

The new administration should work with Congress to propose new legislation that mandates access to social benefits for those behind bars. Specifically, access to mental health assessment, screening, and treatment, as well as affordable communication with families and loved ones on the outside, will be critical to successful rehabilitation and reentry. Additionally, it should invest in robust research focusing on in-prison interventions. Such research is rarer and more costly, given the complexity of conducting studies in a correctional environment and the dearth of in-prison interventions, but it will play a major role in establishing the basis for data-driven policy.

Recommendation 3. Remove procurement barriers for new solutions and encourage pilots.

Archaic procurement procedures pose significant barriers to competition in the correctional technology industry and block innovative solutions from being piloted. 

The prison telecommunications industry, for example, has been dominated by two private companies for decades. This effective duopoly has consolidated the market by entering into exclusive contracts featuring high kickback percentages and so-called “premium services.” These procurement and contracting tactics stifle healthy competition from new entrants to the industry. 

Some states and federal agencies are trying to change this. In July 2024, the FCC prohibited revenue-sharing between correctional agencies and for-profit providers, ending the race toward ever-higher commissions for good. At the state level, California’s RFI initiative exemplifies how strategic procurement processes can encourage public-private partnerships that deliver cutting-edge technology solutions to government agencies.

The administration should take a strong stance by issuing an executive order directing all Federal Bureau of Prisons facilities, as well as ICE detention centers, to competitively procure innovative technology solutions and establish pilots across their institutions, setting an example and a playbook for state corrections systems to follow. 

Recommendation 4. Invest in needs assessments, topic-specific research, and the development of best practices through the National Science Foundation and the Bureau of Justice Assistance. 

Accurate needs assessments, topic-specific research, development of best practices, and technical assistance are all critical to smooth delivery and implementation. 

The Department of Justice, through the Bureau of Justice Assistance (BJA), offers a range of technical assistance (TA) programs that can support state and local correctional facilities in implementing these technology and educational initiatives. Programs such as the Community-based Reentry Program and the Encouraging Innovation: Field-Initiated Program have demonstrated success in providing the necessary resources and expertise to ensure these reforms are effectively integrated. 

However, these TA programs tend to disproportionately benefit correctional facilities where significant programs are already in place but are less useful for “first timers,” where taking that first step is hard enough.

The new administration should work with the National Science Foundation (NSF) and the BJA to systematically assess and understand the challenges faced by correctional systems trying to take the first step toward reform. Many first-timer agencies have a deep understanding of the issues they experience (“program providers complain that tablets are not online”) but limited knowledge of how to assess their root causes (e.g., multiple proprietary wireless networks in place). 

The NSF can convene subject matter experts to offer workshops for correctional workers on network assessments, program cataloging, and human-centered design in service delivery. These workshops can help grow capacity at correctional facilities. The NSF should also establish guidelines and standards for these assessments. In addition to the TA efforts, the BJA could offer informational sessions, seminars, and gatherings for practitioners, as many of them learn best from one another. In parallel with this learning on the ground, the BJA should also organize centralized task forces to oversee and advise on implementation across jurisdictions, document best practices, and make recommendations. 

Conclusion

Investing in interventions behind the walls is not just a matter of improving conditions for incarcerated individuals—it is a public safety and economic imperative. By reducing recidivism through education and family contact, we can improve reentry outcomes and save billions in taxpayer dollars. A robust technology infrastructure and an innovative provider ecosystem are prerequisites to delivering outcomes. As 95% of incarcerated individuals will reenter society one day, it is vital to ensure that they can become contributing members of their communities. These investments will create a stronger workforce, more stable families, and safer communities. Now is the time for the new administration to act and ensure that the carceral system enables rehabilitation, not recidivism.

Creating a National Exposome Project

The U.S. government should establish a public-private National Exposome Project (NEP) to generate benchmark human exposure levels for the ~80,000 chemicals to which Americans are regularly exposed. Such a project will revolutionize our ability to treat and prevent human disease. An investment of $10 billion over 20 years would fuel a new wave of scientific discovery and advancements in human health. To date, there has not been a systematic assessment of how exposures to these environmental chemicals (such as pesticides, solvents, plasticizers, medications, preservatives, flame retardants, fossil fuel exhaust, and food additives) impact human health across the lifespan and in combination with one another. 

While there is emerging scientific consensus that environmental exposures play a role in most diseases, including autoimmune conditions and many of the most challenging neurodegenerative diseases and cancers, the lack of exposomic reference data constrains the ability of scientists and physicians to understand their root causes and manage them. The biomedical impact of creating a reference exposome would be no less than that of the Human Genome Project; it would serve as the basis for technological advancement, the development of new medicines and advanced chemicals, and improved preventive healthcare and clinical management of disease. 

Challenge and Opportunity

The Human Genome Project greatly advanced our understanding of the genetics of disease and helped accelerate a biotech revolution, creating an estimated $265 billion economic impact in 2019 alone. However, genetics has been unable to independently explain the root causes of the majority of diseases from which we suffer, including neurodegenerative diseases like Alzheimer’s and Parkinson’s and many types of cancer. We know exposures to chemicals and pollution are responsible for or mediate the 70–90% of disease causation not explained by genetics. However, because we lack an understanding of their underlying causal factors, many new medicines in development are more palliative than curative. If we want to truly prevent and cure the most intractable illnesses, we must uncover the complex environmental factors that contribute to their pathogenesis.

In addition to the social and economic benefits that would come from reducing our society’s disease burden, American leadership in exposomics would also strengthen the foundation of our biomedical innovation ecosystem, making the U.S. the premier partner for what is likely to be the most advanced health-related research field in this century. 

Three key trends are converging to make now the best time to act: First, the costs of chemical sensors and the data and analytics infrastructure to manage them have fallen precipitously over the last two decades. Second, a few existing small-scale exposomic projects offer a blueprint for how to build the NEP. Third, advancements in artificial intelligence (AI) and machine learning are making possible entirely new tools for drawing causal inferences from complex environmental data, which can inform research into treatments and policies to prevent disease. 

Plan of Action 

To bring the National Exposome Project to life, Congress should appropriate $10 billion over 20 years to the Department of Health and Human Services (HHS) to establish a National Exposome Project Office within the Office of the HHS Secretary, with a director who reports directly to the HHS Secretary. The NEP director should be given authority to establish partnerships with HHS agencies (the National Institutes of Health, Centers for Disease Control and Prevention, Advanced Research Projects Agency for Health, and Food and Drug Administration) and other federal agencies (the Environmental Protection Agency, Commerce, Defense, Homeland Security, and the National Science Foundation), and to fund and enter into agreements with state and local governments and academic and private sector partners. The NEP will operate through a series of public-private cores, each responsible for one of three pillars.

Recommendation 1. Create a reference human exposome

Through partnerships with industry, government, and academic partners, the NEP would generate comprehensive data on the body burden of chemicals and the corresponding biological responses in a representative sampling of the U.S. population (>500,000 individuals). This would likely require collecting biosamples (such as blood and saliva) from participating individuals, performing advanced chemical analysis on the samples using technologies such as high-resolution mass spectrometry, and following up with participants over the course of the study to observe which health conditions emerge. Critically, biosamples will need to be collected repeatedly over time and biobanked to ensure that the temporal aspect of exposures (such as whether someone was exposed to a particular chemical as a child or as an adult) is captured in the final complete data set.

High-throughput toxicological data using microphysiological systems with human cells and tissues could also be generated on a priority list (~1000) of chemicals of concern to understand their potential harm or benefit.

These data would inform a reference standard for particular chemical exposures, which would contain the distribution of exposure levels across the population, the potential health hazards associated with a particular exposure level, and the potential combinations of exposures that would be of concern. This information could ultimately be integrated into patient electronic health records for use in clinical practice. 

Recommendation 2. Develop cutting-edge data and analytical standards for exposomic analysis

The NEP would develop both a data standard for collecting exposomic data and making it available to researchers, companies, and the public, and advanced analytics to extract high-value causal insights from these data to enable policymaking and scientific discovery. Importantly, the data would include both biochemical data collected directly as part of this project and in-field sensor data, such as air and water quality measurements, that is already being collected at individual, local, regional, national, and global levels by trusted third-party organizations. A key challenge in understanding the connections between a set of exposures and a disease state today is the lack of data standardization. The NEP’s investments in standardization and analytics could result in a global standard for how environmental exposure data is collected, cementing the U.S. as the global leader.

Recommendation 3. Catalyze biomedical innovation and entrepreneurship 

The NEP could bolster new entrepreneurial ecosystems in advanced diagnostics, medicines, and clinical services. With access to a core reference exposome as a foundation, the ingenuity of American entrepreneurs and scientists could produce a wellspring of innovation, creating the potential to reverse the rising incidence rates of many intractable illnesses of our modern era. One can imagine a future where exposomic tests are a part of routine physicals, informing physicians and patients exactly how to prevent certain diseases from emerging or progressing, or one where exposomic data is used to identify novel biological targets for curative therapeutics that could reverse the course of an environmentally caused disease. 

The Size of the Prize

The National Exposome Project offers great potential to catalyze biomedical entrepreneurship and innovation. 

First, the high-quality reference levels of exposures generated by the NEP could unlock significant opportunities in medical diagnostics. Already, great work is being done in diagnostics to understand how environmental exposures are driving diseases from autism to congenital heart defects in newborns. NEP would accelerate such work, enabling the early detection and monitoring of conditions that today have limited diagnostic approaches. 

Second, a deeper understanding of exposures could lead to the faster development of new medicines. One way the NEP data set could do this would be by enabling biologists to identify novel molecular targets for medicines that might otherwise be overlooked—for example, the NEP data might reveal that certain exposures are protective and beneficial for patients with a given disease, a finding that could be more deeply examined at the molecular level to identify a novel therapeutic strategy. In addition, we know that genetics alone cannot explain the hundreds of failed drug trials. Exposomics could rescue many drugs that failed testing due to environmentally related nonresponse by identifying the causative agents.

Finally, we expect that the NEP would likely result in significant advances in the physical hardware and instrumentation that is used for large-scale chemical analysis and research, and in the AI-driven computational approaches that would be necessary for the data analysis. These advancements would set the U.S. up to be the leader in exposomic sequencing and analysis, much in the same way that the Human Genome Project established the U.S. as the leader in genetic sequencing. Furthermore, these technical advances would likely be useful in many domains outside of human health where chemical analysis is useful in developing new products—such as in the agriculture, industrial chemical, and energy industries. 

Conclusion 

To catalyze the next generation of biomedical innovation, we need to establish a national network of exposome facilities to track human exposure levels over time, accelerate efforts to create toxicological profiles of these chemicals, develop advanced analytical models to establish causal links to human disease, and use this foundational knowledge to further the development of new medicines and policies to reduce harmful exposure. This knowledge will transform our biomedical and healthcare industries, as well as provide a path for an improved chemical industry that creates products that are safer by design. The result will be longer health spans, reductions in mortality and morbidity, and economic development associated with spurring new startups that can create new therapies, technologies, and interventions.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

Frequently Asked Questions
Who is likely to push back on this proposal, and how can that hurdle be overcome?

Anyone concerned with large government initiatives may object to the proposed budget for this project. While we acknowledge that the investment needed is substantial, the upside to the public is enormous, borne out both in the direct economic benefits of the new exposomic industries created and in the potential demystification of a large portion of the currently unexplained diseases that afflict us.


Industries responsible for manufacturing products that potentially expose populations to suspected harmful chemicals may also push back on this effort. In response, we note that there is abundant misinformation fueled by underpowered or poorly designed studies of chemicals, including chemicals whose reputations are more harmful than the data support. A more systematic data set, and a newly created industry that gives people more complete, personalized, and real-time exposure data, can not only help debunk myths but also expand the set of possible actions to mitigate exposure, taking us out of a continued cycle of finger-pointing. Indeed, such a systematic approach should reveal many positive associations between modern chemicals and health outcomes, such as preservatives reducing foodborne illness or antibiotics reducing microbial disease.

What accountability and evaluation measures will be included?

Governance and accountability will be critical to ensure proper stewardship of taxpayer dollars and responsible engagement with the complex set of stakeholders across the country. We therefore propose creating an external advisory committee made up of community members, industry representatives, and key opinion leaders to provide oversight over the project’s design and execution and advice and recommendations throughout all stages to the NEP director.

What is the first step to get this proposal off the ground? Is there a pilot version that could be advanced to demonstrate proof of concept?

The first steps to realizing this vision have actually already begun. ARPA-H, the agency responsible for high-risk, high-reward research and development for health, has begun to fund some foundational exposomic work. The National Institutes of Health’s All of Us program has also laid a foundation for what might be possible with regard to large-scale biobanking studies. However, to have the needed impact, the NEP must be launched at a much larger scale, outside of existing programs, with a focus on spurring economic development and the creation of new industries.

What has doomed similar efforts in the past and how will your proposal avoid those pitfalls?

Some of the most important factors determining the success of ambitious efforts like this are the specifics of the legislative authority, the leadership and governance structure, and how much of the appropriation can be made available upfront. Further, while the need for collaboration across agencies is clear, establishing a clear decision-making structure with proper oversight is critical. This is why we believe creating a dedicated program office, with a clear leader who reports directly to a member of the Cabinet and is endowed with the necessary authorities, including Other Transactions Authority, is key to success.

Fixing Impact: How Fixed Prices Can Scale Results-Based Procurement at USAID

The United States Agency for International Development (USAID) currently uses Cost-Plus-Fixed-Fee (CPFF) as its de facto default funding and contracting model. Unfortunately, this model prioritizes administrative compliance over performance, hindering USAID’s development goals and U.S. efforts to counter renewed Great Power competition with Russia, the People’s Republic of China (PRC), and other competitors. The U.S. foreign aid system is losing strategic influence as developing nations turn to faster and more flexible (albeit riskier) options offered by geopolitical competitors like the PRC. 

To respond and maintain U.S. global leadership, USAID should transition to heavily favor a Fixed-Price model – tying payments to specific, measurable objectives rather than incurred costs – to enhance the United States’ ability to compete globally and deliver impact at scale. Moreover, USAID should require written justifications for not choosing a Fixed-Price model, shifting the burden of proof. (We will use “Fixed-Price” to refer to both Firm Fixed Price Contracts and Fixed Amount Award Grants, wherein payments are linked to results or deliverables.) 

This shock to the system would encourage broader adoption of Fixed-Price models, reducing administrative burdens, incentivizing implementers (of contracts, cooperative agreements, and grants) to focus on outcomes, and streamlining outdated and inefficient procurement processes. The USAID Bureau for Management’s Office of Acquisition and Assistance (OAA) should lead this transition by developing a framework for greater use of Firm Fixed Price (FFP) contracts and Fixed Amount Award (FAA) grants, establishing criteria for defining milestones and outcomes, retraining staff, and providing continuous support. With strong support from USAID leadership, this shift will reduce administrative burdens within USAID and improve competitiveness by expanding USAID’s partner base and making it easier for smaller organizations to collaborate. 

Challenge and Opportunity

Challenge

The U.S. remains the largest donor of foreign assistance around the world, accounting for 29% of total official development assistance from major donor governments in 2023. Its foreign aid programs have paid dividends over the years in American jobs and economic growth, as well as an unprecedented and unrivaled network of alliances and trading partners. Today, however, USAID has become mired once again in procurement inefficiencies, reversing previous trends and efforts at reform and blocking – for years – sensible initiatives such as third country national (TCN) warrants, thereby reducing the impact of foreign aid for those it intends to help and impeding the U.S. Government’s (USG) ability to respond to growing Great Power Competition.

Foreign aid serves as a critical instrument of foreign policy influence, shaping geopolitical landscapes and advancing national interests on the global stage. No actor has demonstrated this more clearly than the PRC, whose rise as a major player in global development adds pressure on the U.S. to maintain its leadership. Notably, China has increased its spending on foreign assistance for economic development by 525% over the last 15 years. Through the Belt & Road Initiative, its Digital Silk Road, alternative development banks, and increasingly sophisticated methods of wielding soft power, the PRC has built a compelling and attractive foreign assistance model that offers quick, low-cost solutions without the governance “strings” attached to U.S. aid. While this model seems to fulfill countries’ needs efficiently, its hidden costs include long-term debt, high lifecycle expenses, and potential Chinese ownership upon default. 

By contrast, USAID’s Cost-Plus-Fixed-Fee (CPFF) foreign assistance model – in which implementers are guaranteed to recover their costs and earn a profit – mainly prioritizes tracking receipts over achieving results and therefore often fails to achieve intended outcomes, with billions spent on programs that lack measurable impact or fail to meet goals. Implementers are paid for budget compliance, regardless of results, placing all performance risk on the government. 

The USG invented CPFF to establish fair prices where no markets existed. However, its use has now extended far beyond this purpose – including for products and services with well-established commercial markets. The compliance infrastructure necessary to administer USAID awards and adhere to the documentation/reporting requirements favors entrenched contractors – as noted by USAID Administrator Samantha Power – stifles innovation, and keeps prices high, thereby encumbering America’s ability to agilely work with local partners and respond to changing conditions. (Note: USAID typically uses “award” to refer to contracts, cooperative agreements, and grants. We use “award” in this same manner to refer to all three procurement mechanisms. We use “Fixed-Price Awards” to refer to fixed-price grants and contracts. “Fixed Amount Awards,” however, specifically refers to a fixed-price grant.)

In light of the growing Great Power Competition with China and Russia – and threats by those who wish to undermine the US-led liberal international order – as well as the possibility of further global shocks like COVID-19 or the war in Ukraine, USAID must consider whether its current toolset can maintain a position of strategic strength in global development. Furthermore, amid declining Official Development Assistance (ODA) – 2% year-over-year – and a global failure to meet the UN Sustainable Development Goals (SDGs), it is critical for USAID to reconcile the gap between its funding and lack of results. Without change, USAID funding will largely continue to fall short of objectives. The time is now for USAID to act.

Opportunity

While USAID cannot have a de jure default procurement mechanism, CPFF has become the de facto default – but it does not have to be. USAID has other funding mechanisms at its disposal. In fact, at least two alternative award and contract pricing models exist:

  1. Time and materials (T&M): The implementer proposes a set of fully loaded (i.e., inclusive of salary, benefits, overhead, plus profit) hourly rates for different labor categories and the USG pays for time incurred – not results delivered.
  2. Fixed-Price (Firm Fixed Price, FFP, for contracts, or Fixed Amount Award, FAA/Fixed Obligation Grants, FOG, for grants): The implementer proposes a set fee and is paid for milestones or results (not receipts).

While CPFF simply reimburses providers for costs plus profit, the Fixed-Price alternatives tie funding to achieving milestones, promoting efficiency and accountability. The Code of Federal Regulations (§ 200.1) permits using Fixed-Price mechanisms whenever pricing data can establish a reasonable estimate of implementation costs. 

USAID has acknowledged the need to adapt its funding mechanisms to better support local and impact-driven organizations and to enhance cost-effectiveness, and it has begun doing so by incorporating evidence-based approaches and models that emphasize cost-effectiveness and impact. For example, during the first Trump administration, USAID’s Office of the Chief Economist (OCE) issued the Promoting Impact and Learning with Cost-Effectiveness Evidence (PILCEE) award, which aims to enhance USAID’s programmatic effectiveness by promoting the use of cost-effectiveness evidence in strategic planning, policy-making, activity design, and implementation. Progress, though, remains limited: the share of funding disbursed based on performance milestones has remained unchanged since Fiscal Year (FY) 2016, and in FY 2022, Fixed Amount Awards represented only 12.4% of new awards – just 1.4% by value.

An October 2020 Stanford Social Innovation Review article by two USAID officials argued that the Agency could enhance its use of Fixed Amount Awards by promoting “performance over compliance”. Other organizations have already begun to make this shift: the Millennium Challenge Corporation (MCC) and The Global Fund to Fight AIDS, Tuberculosis and Malaria – among others – have invested in increasing results-based approaches and embedding different results-based instruments into their procurement processes for increased aid effectiveness.

Results Over Receipts: Similar Examples

Millennium Challenge Corporation (MCC)
MCC has increasingly adopted results-based approaches and embedded results-based instruments, such as performance-based contracts and awards, into its Compact procurement to enhance the cost-effectiveness of its investments and of service provision. This progress includes the expansion of MCC’s portfolio to approximately USD 40 million in implemented or anticipated results-based financing (RBF) programs spanning sectors such as health, energy, agriculture, utility management, gender, education, and public infrastructure in five countries.

The Global Fund to Fight AIDS, Tuberculosis and Malaria
Since 2021, The Global Fund has supported its Principal Recipients across more than ten countries in using results-based contracts to improve results. It has created a “How to Guide and Toolkit” that offers a systematic path to designing results-based contracts that avoid common traps and comply with The Global Fund’s requirements, along with templates and intervention-specific guidelines for mass campaigns distributing malaria bed nets and for HIV prevention, diagnosis, and treatment services for key populations.

Switzerland
Switzerland has increasingly shifted from traditional input-based methods to results-based approaches. A recent review found a diverse body of 51 results-based finance applications with both private and public actors by the State Secretariat for Economic Affairs (SECO) and the Swiss Agency for Development and Cooperation (SDC).

To shift USAID into an Agency that invests in impact at scale, we propose going one step further, and making Fixed-Price awards the de facto default procurement mechanism across USAID by requiring procurement officials to provide written justification for choosing CPFF. 

This would build on the work completed during the first Trump administration under Administrator Mark Green, including the creation of the first Acquisition and Assistance Strategy, designed to “empower and equip [USAID] partners and staff to produce results-driven solutions” by, inter alia, “increasing usage of awards that pay for results, as opposed to presumptively reimbursing for costs”, and the promotion of the Pay-for-Results approach to development.

Such a change would unlock benefits for both the USG and global development, including:

  1. Better alignment of risk and reward by ensuring implementers are paid only when they deliver on pre-agreed milestones. The risk of not achieving impact would no longer be solely borne by the USG, and implementers would be highly incentivized to achieve results.
  2. Promotion of a results-driven culture by shifting focus from administrative oversight to actual outcomes. By agreeing to milestones at the start of an award, USAID would give implementers flexibility to achieve results and adapt more nimbly to changing circumstances and place the focus on performing and reporting results, rather than administrative reporting.
  3. Diversification of USAID’s partner base by reducing the administrative burden associated with being a USAID implementer. This would allow the Agency to leverage the unique strengths, contextual knowledge, and innovative approaches of a diverse set of development actors. By allowing the Agency to work more nimbly with small businesses and local actors on shared priorities, it would further enhance its ability to counter current Great Power Competition with China and Russia.
  4. Incentivization of cost efficiency, motivating implementers to reduce expenses if they want to increase their profits, without extra cost to the USG.
  5. Facilitation of greater progress by USAID and the USG toward the UN’s 2030 Agenda for Sustainable Development, in ways likely to attract more meaningful and substantive private sector partnerships and leverage scarce USG resources.

Plan of Action 

Making Fixed-Price the de facto default option for both grants and contracts would deliver a necessary shock to the U.S. foreign aid procurement system. The success of such a large institutional shift will require effective change management; it should therefore be accompanied by the necessary training and support for implementing staff. This would entail, inter alia, establishing a dedicated team within the Office of Acquisition and Assistance (OAA) specialized in the design and implementation of FFPs and FAAs, and changing the culture of USAID procurement by supporting both contracting and programming staff with a robust change management program, including training, strong messaging from USAID leadership, and education for Congressional appropriators.

Recommendation 1. Making Fixed-Price the de facto “default” option for both grants and contracts, and tying payments to results. 

Fixed-Price as the default option for both grants and contracts would come at a low additional cost to USAID (assuming staff can be redistributed). The Agency’s Senior Procurement Executive, Chief Acquisition Officer (CAO), and Director for OAA should first convene a design working group – composed of representatives from program offices, technical offices, OAA, and the General Counsel’s office – tasked with reviewing government spending by category to identify sectors exempt from the “Fixed-Price default” mandate, namely work that lacks deep commercial markets (e.g., humanitarian assistance or disaster relief). This working group would then propose a phased approach for adopting Fixed-Price as the default option across the remaining sectors. After making its recommendations, the working group would be disbanded and a more permanent dedicated team would carry the effort forward (see Recommendation 2).

Once reset, Contract and Agreement Officers would justify any exceptions (i.e., the choice of T&M or CPFF) in an explanatory memo. The CAO could delegate authority to supervising Contracting Officers or other acquisition officials to approve these exceptions. To ensure that the benefits of Fixed-Price results-based contracting reach all levels of awardees, this requirement should become a flow-down clause in all prime awards. This will require additional training for the prime award recipient’s own overseers.

Recommendation 2. Establishing a dedicated team within USAID’s OAA, or the equivalent office in the next administration, specialized in the design and implementation of FFPs and FAAs.

To facilitate a smooth transition, USAID should create a dedicated team within OAA specialized in designing and implementing FFPs and FAAs using existing funds and personnel. This team would have expertise in the choices involved in designing Fixed-Price agreements: results metrics and targets, pricing for results, and optimizing payment structures to incentivize results.

They would have the mandate and resources necessary to expand the use of, and the amount of funding flowing through, high-quality FFPs and FAAs. They would jumpstart the process and support Acquisition and Program Officers by developing guidelines and procedures for Fixed-Price models (along with sector-specific recommendations), overseeing their design and implementation, and evaluating effectiveness. As USAID learns how best to implement the Fixed-Price model across sectors, this team will also need to capture lessons from those initial experiences to lower costs and increase the confidence of Acquisition and Assistance Officers using the model going forward.

Recommendation 3. Launching a robust change management program to support USAID acquisition, assistance, program, and legislative and public affairs staff in making the shift to Fixed-Price grant and contract management. 

Successfully embedding Fixed-Price as the default option will entail a culture shift within USAID, requiring a multi-faceted approach. This will include retraining Contracts and Agreements Officers and their Representatives – who have internalized a culture of administrative compliance and been evaluated primarily on their administrative oversight skills – and reorienting the culture of Monitoring, Evaluation and Learning (MEL) and Collaboration, Learning and Adaptation (CLA) to prioritize results over reporting. Setting contracting and agreements staff up for success requires capacity building in the form of training, toolkits, and guidelines on how to implement Fixed-Price models across USAID’s diverse sectors. Other USG agencies make greater use of Fixed-Price awards, and relevant training for both government and prime-contractor overseers already exists. OAA’s Professional Development and Training unit should adapt existing training from these other agencies, specifically ensuring it addresses how to align payments with results.

Furthermore, the broader change management program should seek to create the appropriate internal incentive structure at the Agency for Acquisition and Assistance staff, motivating and engaging them in this significant restructuring of foreign aid. To succeed at this, the mandate for change needs to come from the top, reassuring staff that the Fixed-Price model does not expose individuals, the Agency, or implementers to undue legal or financial liability.

While this change will not require a Congressional Notification, the Office of Legislative & Public Affairs (LPA) should join this effort early on, including as part of the design working group. LPA would also play a guiding role in both internal and external communications, especially in educating members of Congress and their staffs on the importance and value of this change to improve USAID effectiveness and return on taxpayer dollars. Entrenched players with significant investments in existing CPFF systems will resist this effort, including with political lobbying; LPA will play an important role informing Congress and the public.

Conclusion

USAID’s current reliance on CPFF has proven inadequate in driving impact and must evolve to meet the challenges of global development and Great Power Competition. To create more agile, efficient, and results-driven foreign assistance, the Agency should adopt Fixed-Price as the de facto default model for disbursing funds, prioritizing results over administrative reporting. By embracing a results-based model, USAID will enhance its ability to respond to global shocks and geopolitical shifts, better positioning the U.S. to maintain strategic influence and achieve its foreign policy and development objectives while fostering greater accountability and effectiveness in its foreign aid programs. Implementing these changes will require a robust change management program, which would include creating a dedicated team within OAA, retraining staff and creating incentives for them to take on the change, ongoing guidance throughout the award process, and education and communication with Congress, implementing partners, and the public. This transformation is essential to ensure that U.S. foreign aid continues to play a critical role in advancing national interests and addressing global development challenges.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

Frequently Asked Questions
How does the proposal align with the Code of Federal Regulations?

The revisions to the Code of Federal Regulations – specifically the Uniform Guidance (2 CFR 200) – represent an exciting opportunity for USAID and its partners. These changes, which took effect on October 1, 2024, align with the Office of Management & Budget’s vision for enhanced oversight, transparency, and management of USAID’s foreign assistance. The update opens the door to several significant improvements in key reform areas: simplified requirements for federal assistance, reduced burdens on staff and implementing partners, and new tools to support USAID’s localization efforts. The updated regulations will reduce the need for exception requests to OMB, speeding up timelines between planning and budget execution. This regulatory update presents a valuable opportunity for USAID to streamline its aid practices, pave the way for the adoption of the Fixed-Price model, and create a performance-driven culture at USAID. For these changes to take full effect, USAID will need to enforce them through accompanying policies, guidance, and training, and ensure they flow down into both prime and sub-awards.

How might adopting a Fixed-Price model support localization?

Wider adoption of Fixed-Price could expand USAID’s pool of qualified local partners, enhancing engagement with diverse implementers and facilitating more sustainable, locally driven development outcomes. Fixed-Price grants and contracts disburse payments based on achieving pre-agreed milestones rather than on incurred costs, reducing the administrative burden of compliance. This simplified approach enables local organizations – many of which lack the capacity to manage complex cost-tracking requirements – to be more competitive for USAID programs and better prepared to manage USAID awards. By linking payments to results rather than detailed expense documentation, the Fixed-Price model gives local organizations greater flexibility and autonomy in achieving their objectives, empowering them to leverage their contextual knowledge and innovative approaches more effectively. This results in a partnership where local actors can operate independently and adapt quickly to changing circumstances, without the bureaucratic burdens traditionally associated with USAID funding.

How might adopting Fixed-Price acquisition and assistance support USAID’s ability to achieve its small and disadvantaged business goals?

In the same way that Fixed-Price could help USAID diversify its partner base and increase localization, it could also help expand the Agency’s pool of qualified small businesses, enhancing engagement with diverse implementers and facilitating more sustainable development outcomes while achieving its Congressionally mandated small and disadvantaged business utilization goals. The current extensive use of CPFF favors entrenched implementers who have already paid for the expensive administrative compliance systems it requires. Fixed-Price grants and contracts carry fewer administrative burdens, enabling new small businesses – many of which lack the administrative infrastructure necessary to manage complex cost-tracking requirements – to be more competitive for USAID programs and better prepared to manage USAID awards.

Do any divisions or bureaus at USAID already predominantly use FAAs?

USAID’s research and development arm, Development Innovation Ventures (DIV), funds innovative implementers almost exclusively through Fixed Amount Awards. Yet proven interventions rarely transition from DIV into mainstream USAID programs: innovators and impact-first organizations find themselves well suited to USAID’s R&D but with no path forward at scale, given the dominance of CPFF.

Would switching from CPFF to Fixed-Price translate into more or fewer costs for the government?

USAID has historically relied on expensive procedures to ensure implementers use funding in ways that align with USG policies and procedures. These concerns are reduced, however, when the government pays for outcomes rather than tracking receipts. For example, the government would no longer need to verify whether the implementer has the proper accounting and reporting systems in place, negotiate indirect rates, or conduct incurred-cost audits. Because detailed regulations on the permissibility of specific costs under federal acquisition and assistance do not apply to Fixed-Price awards and contracts, neither the government nor the implementer needs to spend time examining the allowability of costs. Furthermore, we expect wider use of Fixed-Price models to lead to significantly improved results per dollar spent. This means that, although there would be initial costs associated with strategy implementation, we would expect Fixed-Price to be significantly more cost-effective.

Are there existing examples of how USAID has implemented change management efforts to improve aid effectiveness?

Yes, USAID has made recent efforts to provide more effective aid by incorporating evidence-based approaches and transitioning to models that emphasize cost-effectiveness and impact. In order to do this, during the last Trump administration, USAID elevated the Office of the Chief Economist (OCE) by enlarging its size and mandate. The OCE issued the activity Promoting Impact and Learning with Cost-Effectiveness Evidence (PILCEE), which aims to enhance USAID’s programmatic effectiveness by promoting the use of cost-effectiveness evidence in strategic planning, policy-making, activity design, and implementation. Our approach of establishing a team within OAA would draw on lessons learned from the OCE approach while reducing any associated costs by not establishing an entirely new operating unit.

Building Regional Cyber Coalitions: Reimagining CISA’s JCDC to Empower Mission-Focused Cyber Professionals Across the Nation

State, local, tribal, and territorial governments, along with critical infrastructure owners (SLTT/CIs), face escalating cyber threats but struggle with limited cybersecurity staff and complex technology management. Relying heavily on private-sector support, they are hindered by the private sector’s lack of deep understanding of SLTT/CI operational environments. This gap leaves SLTT/CIs vulnerable and underprepared for emerging threats, while practitioners on the private-sector side remain underleveraged.

To address this, CISA should expand the Joint Cyber Defense Collaborative (JCDC) to allow broader participation by practitioners in the private sector who serve public sector clients, regardless of the size or current affiliation of their company, provided they can pass a background check, verify their employment, and demonstrate their role in supporting SLTT governments or critical infrastructure sectors. 

Challenge and Opportunity

State, local, tribal, and territorial (SLTT) governments face a significant increase in cyber threats, with incidents like remote access trojans and complex malware attacks rising sharply in 2023. These trends indicate not only a rise in the number of attacks but also an increase in their sophistication, requiring SLTTs to contend with a diverse and evolving array of cyber threats. The 2022 Nationwide Cybersecurity Review (NCSR) found that most SLTTs have not yet achieved the cybersecurity maturity needed to defend effectively against these sophisticated attacks, largely due to limited resources and personnel shortages. Smaller municipalities, especially in rural areas, are particularly affected, with many unable to implement or maintain the range of tools required for comprehensive security. As a result, SLTTs remain vulnerable, and critical public infrastructure risks being compromised. This urgent situation presents an opportunity for CISA to strengthen regional cybersecurity efforts through enhanced public-private collaboration, empowering SLTTs to build resilience and raise baseline cybersecurity standards.

Average cyber maturity scores for the State, Local, Tribal, and Territorial peer groups are at the minimum required level or below. Source: Center for Internet Security

Furthermore, effective cybersecurity requires managing a complex array of tools and technologies. Many SLTT organizations, particularly those in critical infrastructure sectors, need to deploy and manage dozens of cybersecurity tools, including asset management systems, firewalls, intrusion detection systems, endpoint protection platforms, and data encryption tools, to safeguard their operations.

An example of the immense array of different combinations of cybersecurity tools that could comprise a full suite necessary to implement baseline cybersecurity controls. Source: The Software Analyst Newsletter

The ability of SLTTs to implement these tools is severely hampered by two critical issues: insufficient funding and a shortage of skilled cybersecurity professionals to operate such a large volume of tools, all of which require ongoing tuning and configuration. Budget constraints force many SLTT organizations to make difficult decisions about which tools to prioritize, and the shortage of qualified professionals further limits their ability to operate them. The Deloitte-NASCIO Cybersecurity Study highlights how state Chief Information Security Officers (CISOs) are increasingly turning to the private sector to fill gaps in their workforce, procuring staff-augmentation resources to support security control deployment, management of Security Operations Centers (SOCs), and incident response services.

Figure 3: The Top 5 Security Concerns for Nationwide Cybersecurity Review respondents include lack of sufficient funding and inadequate availability of cybersecurity professionals. Source: Center for Internet Security.

What Strong Regionalized Communities Would Achieve

This reliance on private-sector expertise presents a unique opportunity for federal policymakers to foster stronger public-private partnerships. Currently, however, JCDC membership entry requirements are vague and appear to favor more established companies, limiting participation from many professionals who are actively engaged in this mission.

The JCDC is led by CISA’s Stakeholder Engagement Division (SED), which also serves as the agency’s hub for the shared stakeholder information that unifies CISA’s approach to whole-of-nation operational collaboration. One of the JCDC’s main goals is to “organize and support efforts that strengthen the foundational cybersecurity posture of critical infrastructure entities,” ensuring they are better equipped to defend against increasingly sophisticated threats.

Given the escalating cybersecurity challenges, there is a significant opportunity for CISA to enhance localized collaboration between the public and private sectors by improving the quality of service that personnel at managed service providers (MSPs) and software companies can deliver. This helps SLTT/CIs close the workforce gap, allows vendors to create new services focused on SLTT/CIs’ consultative needs, and builds a talent market that incentivizes companies to hire more technologists fluent in the “business” needs of SLTT/CIs.

Incentivizing the Private Sector to Participate

With intense competition for market share in cybersecurity, vendors will need to provide good service and successful outcomes to retain and grow their portfolios of business. They will have to compete on their ability to deliver better, more tailored service to SLTT/CIs and to pursue talent that is more fluent in government operations, which in turn incentivizes candidates to build strong reputations among SLTT/CI customers.

Plan of Action

Recommendation 1. Community Platform

To accelerate CISA’s mission of improving the cyber baseline for SLTT/CIs, the Joint Cyber Defense Collaborative (JCDC) should expand into a regional framework aligned with CISA’s 10 regional offices to support increased participation. The new, regionalized JCDC should facilitate membership for all practitioners who support the cyber defense of SLTT/CIs, regardless of whether they are employed by a private or public sector entity. With a more complete community, CISA will be able to direct focused, custom implementation strategies that require deep public-private collaboration.

Participants from relevant sectors should be able to join the regional JCDC after passing background checks, employment verification, and, where necessary, verification that the employer is involved in security control implementation for at least one eligible regional client. This approach allows the program to scale rapidly and ensures fairness across organizations of all sizes. Private sector representatives, such as solutions engineers and technical account managers, will be granted conditional membership to the JCDC, with need-to-know access to its online resources. The program will emphasize the development of collaborative security control implementation strategies centered around the client, where CISA coordinates the implementation functions between public and private sector staff, as well as between cybersecurity vendors and MSPs that serve each SLTT/CI entity.

Recommendation 2. Online Training Platform

Currently, CISA provides a multitude of training offerings both online and in person, most of which are accessible only to government employees. Expanding CISA’s training offerings to include programs that teach practitioners at MSPs and software companies how to become fluent in the operation of government is essential for raising the cybersecurity baseline across the Nationwide Cybersecurity Review (NCSR) areas with which SLTTs currently struggle. The training should run on a flexible, learn-at-your-own-pace virtual platform, and CISA is encouraged to build on existing platforms with established user bases, such as Salesforce’s Trailhead. Modules should center on specific challenges tailored to the SLTT/CI operating environment – such as applying patches to workstations belonging to a police or fire department, where the availability of critical systems is essential and downtime could cost lives.

The platform should offer a gamified learning experience, where participants can earn badges and certificates as they progress through different learning paths. These badges and certificates will serve as a way for companies and SLTT/CIs to understand which individuals are investing the most time learning and delivering the best service. Each badge will correspond to specific skills or competencies, allowing participants to build a portfolio of recognized achievements. This approach has already proven effective, as seen in the use of Salesforce’s Trailhead by other organizations like the Center for Internet Security (CIS), which offers an introductory course on CIS Controls v8 through the platform. 

The benefits of this training platform are multifaceted. First, it provides a structured and scalable way to upskill a large number of cybersecurity professionals across the country, with a focus on tailored implementation of cybersecurity controls for SLTT/CIs. Second, the badge system incentivizes ongoing participation, ensuring that cybersecurity professionals can maintain their reputations even if they move between companies or between the public and private sectors. Third, the platform fosters a sense of community and collaboration around the client, giving CISA visibility into the specific individuals supporting each SLTT/CI organization should it need to mobilize a team with both security knowledge and government-operations knowledge for an incident response scenario.

Recommendation 3. A “Smart Rolodex”

A Customer Relationship Management (CRM) system should be integrated within CISA’s Office of Stakeholder Engagement to manage the community of cyber defenders more effectively and streamline incident response efforts. The CRM will maintain a single database of regionalized JCDC members, their current company, their expertise, and their roles within critical infrastructure sectors. This system will act as a “smart Rolodex,” enabling CISA to quickly identify and coordinate with the most suitable experts during incidents, ensuring a swift and effective response. Recent recommendations by a CISA panel underscore the importance of this approach, emphasizing that a well-organized and accessible database is crucial for deploying the right resources in real time and enhancing the overall effectiveness of the JCDC.
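The “smart Rolodex” lookup described above can be sketched as a simple record-and-filter operation. The field names, roster entries, and query below are illustrative assumptions, not CISA’s actual CRM schema or data.

```python
# Hypothetical sketch of the "smart Rolodex": one CRM record per regional
# JCDC member, filterable by CISA region, sector, and skill during an
# incident. All names, sectors, and skills here are made-up examples.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    employer: str
    region: int                               # CISA regions 1-10
    sectors: set = field(default_factory=set)  # critical infrastructure sectors served
    skills: set = field(default_factory=set)   # verified competencies

def find_responders(members, region, sector, skill):
    """Return members in the given CISA region with the sector experience
    and skill needed for an incident-response mobilization."""
    return [m for m in members
            if m.region == region and sector in m.sectors and skill in m.skills]

roster = [
    Member("A. Rivera", "MSP One", 4, {"water"}, {"incident response", "SCADA"}),
    Member("B. Chen", "VendorCo", 4, {"energy"}, {"forensics"}),
    Member("C. Okafor", "MSP Two", 6, {"water"}, {"incident response"}),
]

# Mobilize Region 4 water-sector incident responders:
print([m.name for m in find_responders(roster, 4, "water", "incident response")])
```

In practice the same query would run against the CRM’s database rather than an in-memory list, but the design point is identical: because membership records carry region, sector, and skill, CISA can assemble a qualified response team with a single filtered lookup.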

Recommendation 4. Establishment of Merit-Based Recognition Programs

Finally, to foster a sense of mission and camaraderie among JCDC participants, recognition programs should be introduced to highlight above-and-beyond contributions to national cybersecurity efforts. Digital badges, emblematic patches, “CISA swag,” or challenge coins would be awarded as symbols of achievement within the JCDC, boosting morale and practitioner commitment to the greater mission. These programs would also enhance the appeal of cybersecurity careers, elevating those involved with the JCDC and encouraging increased participation and retention within the initiative.

Cost Analysis

Estimated Costs and Justification

The proposed regional JCDC program requires procuring ~100,000 licenses for a digital communication platform (based on Slack) across all of its regions and 500 licenses for a popular Customer Relationship Management (CRM) platform (based on Salesforce) so its Office of Stakeholder Engagement can access records. The estimated annual costs are as follows:

Digital Communication Platform Licenses:

CRM Platform Licenses:

Total Estimated Cost:

Buffer for Operational Costs: To ensure the program’s success, a buffer of approximately 15% should be added to cover additional operational expenses, unforeseen costs, and any necessary uplifts or expansions in features or seats. This does not take into consideration volume discounts that CISA would normally expect when purchasing through a reseller such as Carahsoft or CDW.

Cost Justification: Although the initial investment is significant, the potential savings from avoiding cyber incidents should far outweigh these costs. Considering that the average cost of a data breach in the U.S. is approximately $9.48 million, preventing even a few such incidents through this program could easily justify the expenditure.
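The license counts, 15% buffer, and $9.48M breach figure above are enough for a back-of-envelope check. The per-seat prices in the sketch below are placeholder assumptions (the memo does not state them); with real vendor quotes substituted in, the same arithmetic yields the total and the break-even point.

```python
# Hedged back-of-envelope for the program costs. Per-seat prices are
# ASSUMED placeholders; seat counts, the 15% buffer, and the $9.48M
# average-breach figure come from the memo itself.
COMM_SEATS, CRM_SEATS = 100_000, 500
COMM_PRICE, CRM_PRICE = 150.0, 1_800.0    # assumed annual $/seat (hypothetical)
BUFFER = 0.15                             # operational-cost buffer from the memo
BREACH_COST = 9_480_000                   # avg U.S. data breach cost (memo figure)

base = COMM_SEATS * COMM_PRICE + CRM_SEATS * CRM_PRICE
total = base * (1 + BUFFER)
breaches_to_break_even = total / BREACH_COST

print(f"Base annual cost: ${base:,.0f}")
print(f"With 15% buffer:  ${total:,.0f}")
print(f"Breaches avoided to break even: {breaches_to_break_even:.1f}")
```

Under these assumed prices, preventing roughly two average-cost breaches per year would cover the entire program – consistent with the justification above, and before accounting for the volume discounts the memo notes CISA would expect through a reseller.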

Conclusion

The cybersecurity challenges faced by State, Local, Tribal, and Territorial (SLTT) governments and critical infrastructure sectors are becoming increasingly complex and urgent. As cyber threats continue to evolve, it is clear that the existing defenses are insufficient to protect our nation’s most vital services. The proposed expansion of the Joint Cyber Defense Collaborative (JCDC) to allow broader participation by practitioners in the private sector who serve public sector clients, regardless of the size or current affiliation of their company, presents a crucial opportunity to enhance collaboration, particularly among SLTTs, and to bolster the overall cybersecurity baseline. These efforts align closely with CISA’s strategic goals of enhancing public-private partnerships, improving the cybersecurity baseline, and fostering a skilled cybersecurity workforce. By taking decisive action now, we can create a more resilient and secure nation, ensuring that our critical infrastructure remains protected against the ever-growing array of cyber threats.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

Establishing a Cyber Workforce Action Plan

The next presidential administration should establish a comprehensive Cyber Workforce Action Plan to address the critical shortage of cybersecurity professionals and bolster national security. This plan encompasses innovative educational approaches, including micro-credentials, stackable certifications, digital badges, and more, to create flexible and accessible pathways for individuals at all career stages to acquire and demonstrate cybersecurity competencies.

The initiative will be led by the White House Office of the National Cyber Director (ONCD) in collaboration with key agencies such as the Department of Education (DoE), Department of Homeland Security (DHS), National Institute of Standards and Technology (NIST), and National Security Agency (NSA). It will prioritize enhancing and expanding existing initiatives—such as the CyberCorps: Scholarship for Service program that recruits and places talent in federal agencies—while also spearheading new engagement with the private sector to address critical infrastructure vulnerabilities. To ensure alignment with industry needs, the Action Plan will foster strong partnerships between government, educational institutions, and the private sector, particularly focusing on real-world learning opportunities.

This Action Plan also emphasizes the importance of diversity and inclusion by actively recruiting individuals from underrepresented groups, including women, people of color, veterans, and neurodivergent individuals, into the cybersecurity workforce. In addition, the plan will promote international cooperation, with programs to facilitate cybersecurity workforce development globally. Together, these efforts aim to close the cybersecurity skills gap, enhance national defense against evolving cyber threats, and position the United States as a global leader in cybersecurity education and workforce development.

Challenge and Opportunity

The United States and its allies face a critical shortage of cybersecurity professionals, in both the public and private sectors. This shortage poses significant risks to our national security and economic competitiveness in an increasingly digital world.

In the federal government, the cybersecurity workforce is aging rapidly, with only about 3% of information technology (IT) specialists under 30 years old. Meanwhile, nearly 15% of the federal cyber workforce is eligible for retirement. This demographic imbalance threatens the government’s ability to defend against sophisticated and evolving cyber threats.

The private sector faces similar challenges. According to recent estimates, there are nearly half a million unfilled cybersecurity positions in the United States. This gap is expected to grow as cyber threats become more complex and pervasive across all industries. Small and medium-sized businesses are particularly vulnerable, often lacking the resources to compete for scarce cyber talent.

The cybersecurity talent shortage extends beyond our borders, affecting our allies as well. As cyber threats from adversarial nation states become increasingly global in nature, our international partners’ ability to defend against these threats directly impacts U.S. national security. Many of our allies, particularly in Eastern Europe and Southeast Asia, lack robust cybersecurity education and training programs, further exacerbating the global skills gap.

A key factor contributing to this shortage is the lack of accessible, flexible pathways into cybersecurity careers. Traditional education and training programs often fail to keep pace with rapidly evolving technology and threat landscapes. Moreover, they frequently overlook the potential of career changers and nontraditional students who could bring valuable diverse perspectives to the field.

However, this challenge presents a unique opportunity to revolutionize cybersecurity education and workforce development. By leveraging innovative approaches such as apprenticeships, micro-credentials, stackable certifications, peer-to-peer learning platforms, digital badges, and competition-based assessments, we can create more agile and responsive training programs. These methods can provide learners with immediately applicable skills while allowing for continuous upskilling as the field evolves.

Furthermore, there’s an opportunity to enhance cybersecurity awareness and basic skills among all American workers, not just those in dedicated cyber roles. As digital technologies permeate every aspect of modern work, a baseline level of cyber hygiene and security consciousness is becoming essential across all sectors.

By addressing these challenges through a comprehensive Cyber Workforce Action Plan, we can not only strengthen our national cybersecurity posture but also create new pathways to well-paying, high-demand jobs for Americans from all backgrounds. This initiative has the potential to position the United States as a global leader in cyber workforce development, enhancing both our national security and our economic competitiveness in the digital age.

Evidence of Existing Initiatives

While numerous excellent cybersecurity workforce development initiatives exist, they often operate in isolation, lacking cohesion and coordination. ONCD is positioned to leverage its whole-of-government approach and the groundwork laid by its National Cyber Workforce and Education Strategy (NCWES) to unite these disparate efforts. By bringing together the strengths of various initiatives and their stakeholders, ONCD can transform high-level strategies into concrete, actionable steps. This coordinated approach will maximize the impact of existing resources, reduce duplication of efforts, and create a more robust and adaptable cybersecurity workforce development ecosystem. This proposed Action Plan is the vehicle to turn these collective workforce-minded strategies into tangible, measurable outcomes.

At the foundation of this plan lies the NICE Cybersecurity Workforce Framework, developed by NIST. This common lexicon for cybersecurity work roles and competencies provides the essential structure upon which we can build. The Cyber Workforce Action Plan seeks to expand on this foundation by creating standardized assessments and implementation guidelines that can be adopted across both public and private sectors.

Micro-credentials, stackable certifications, digital badges, and other innovations in accessible education—as demonstrated by programs like SANS Institute’s GIAC certifications and CompTIA’s offerings—form a core component of the proposed plan. These modular, skills-based learning approaches allow for rapid validation of specific competencies—a crucial feature in the fast-evolving cybersecurity landscape. The Action Plan aims to standardize and coordinate these and similar efforts, ensuring widespread recognition and adoption of accessible credentials across industries.

The array of gamification and competition-based learning approaches—including but not limited to National Cyber League, SANS NetWars, and CyberPatriot—are also exemplary starting points that would benefit from greater federal engagement and coordination. By formalizing these methods within education and workforce development programs, the government can harness their power to simulate real-world scenarios and drive engagement at a national scale.

Incorporating lessons learned from the federal government’s previous DoE CTE CyberNet program, the National Science Foundation’s (NSF) Scholarship for Service (SFS) program, and the National Security Agency’s (NSA) GenCyber camps, the Action Plan emphasizes the importance of early engagement (the middle grades and early high school years) and practical, hands-on learning experiences. By extending these principles across all levels of education and professional development, we can create a continuous pathway from high school through to advanced career stages.

A Cyber Workforce Action Plan would provide a unifying praxis to standardize competency assessments, create clear pathways for career progression, and adapt to the evolving needs of both the public and private sectors. By building on the successes of existing initiatives and introducing innovative solutions to fill critical gaps in the cybersecurity talent pipeline, we can create a more robust, diverse, and skilled cybersecurity workforce capable of meeting the complex challenges of our digital future.

Plan of Action 

Recommendation 1. Create a Cyber Workforce Action Plan.

ONCD will develop and oversee the plan, in close collaboration with DoE, NIST, NSA, and other relevant agencies. The plan has three distinct components:

1. Develop standardized assessments aligned with the NICE framework. ONCD will work with NIST to create a suite of standardized assessments to evaluate cybersecurity competencies that:

2. Establish a system of stackable and portable micro-credentials. To provide flexible and accessible pathways into cybersecurity careers, ONCD will work with DoE, NIST, and the private sector to help develop and support systems of micro-credentials that are:

3. Integrate more closely with more federal initiatives. The Action Plan will be integrated with existing federal cybersecurity programs and initiatives, including:

This proposal emphasizes stronger integration with existing federal initiatives and greater collaboration with the private sector. Instead of creating entirely new credentialing standards, ONCD will explore opportunities to leverage widely adopted commercial certifications, such as those from Google, CompTIA, and other private-sector leaders. By selecting and promoting recognized commercial standards where applicable, ONCD can streamline efforts, avoiding duplication and ensuring the cybersecurity workforce development approach is aligned with what is already successful in industry. Where necessary, ONCD will work with NIST and industry professionals to ensure these commercial certifications meet federal needs, creating a more cohesive and efficient approach across both government and industry. This integrated public-private strategy will allow ONCD to offer a clear leadership structure and accountability mechanism while respecting and utilizing commercial technology and standards to address the scale and complexity of the cybersecurity workforce challenge.

The Cyber Workforce Action Plan will emphasize strong collaborations with the private sector, including the establishment of a Federal Cybersecurity Curriculum Advisory Board composed of experts from relevant federal agencies and leading private-sector companies. This board will work directly with universities to develop model curricula that incorporate the latest cybersecurity tools, techniques, and threat landscapes, ensuring that graduates are well-prepared for the specific challenges faced by both federal and private-sector cybersecurity professionals.

To provide hands-on learning opportunities, the Action Plan will include a new National Cyber Internship Program. Managed by the Department of Labor in partnership with DHS’s Cybersecurity and Infrastructure Security Agency (CISA) and leading technology companies, the program will match students with government agencies and private-sector companies. An online platform will be developed, modeled after successful programs like Hacking for Defense, where industry partners can propose real-world cybersecurity projects for student teams.

To incentivize industry participation, the General Services Administration (GSA) and DoD will update federal procurement guidelines to require companies bidding on cybersecurity-related contracts to certify that they offer internship or early-career opportunities for cybersecurity professionals. Additionally, CISA will launch a “Cybersecurity Employer of Excellence” certification program, which will be a prerequisite for companies bidding on certain cybersecurity-related federal contracts.

The Action Plan will also address the global nature of cybersecurity challenges by incorporating international cooperation elements. This includes adapting the plan for international use in strategically important regions, facilitating joint training programs and professional exchanges with allied nations, and promoting global standardization of cybersecurity education through collaboration with international standards organizations.

Ultimately, this effort intends to implement a national standard for cybersecurity competencies—providing clear, accessible pathways for career progression and enabling more agile and responsive workforce development in this critical field. 

Recommendation 2. Implement an enhanced CyberCorps fellowship program.

ONCD should expand the NSF’s CyberCorps Scholarship for Service program as an immediate, high-impact initiative. Key features of the expanded CyberCorps fellowship program include:

1. Comprehensive talent pipeline: While maintaining the current SFS focus on students, the enhanced CyberCorps will also target recent graduates and early-career professionals with 1–5 years of work experience. This expansion addresses immediate workforce needs while continuing to invest in future talent. The program will offer competitive salaries, benefits, and loan forgiveness options to attract top talent from both academic and private-sector backgrounds.

2. Multiagency exposure and optional rotations: While cross-sector exposure remains valuable for building a holistic understanding of cybersecurity challenges, the rotational model will be optional or limited based on specific agency needs. Fellows may be offered the opportunity to rotate between agencies or sectors only if their skill set and the hosting agency’s environment are conducive to short-term placements. For fellows placed in agencies or sectors where longer ramp-up times are expected, a deeper, longer-term placement may be more effective. Drawing on lessons from the Presidential Innovation Fellows and the U.S. Digital Corps, the program will emphasize flexibility to ensure that fellows can make meaningful contributions within the time frame and that knowledge transfer between sectors remains a core objective.

3. Advanced mentorship and leadership development: Building on the SFS model, the expanded program will foster a strong community of cyber professionals through cohort activities and mentorship pairings with senior leaders across government and industry. A new emphasis on leadership training will prepare fellows for senior roles in government cybersecurity.

4. Focus on emerging technologies: Complementing the SFS program’s core cybersecurity curriculum, the expanded CyberCorps will emphasize cutting-edge areas such as artificial intelligence in cybersecurity, quantum computing, and advanced threat detection. This focus will prepare fellows to address future cybersecurity challenges.

5. Extended impact through education and mentorship: The program will encourage fellows to become cybersecurity educators and mentors in their communities after their service, extending the program’s impact beyond government service and strengthening America’s overall cyber workforce.

By implementing these enhancements to the CyberCorps program as a first step and quick win, the Action Plan will initiate a more comprehensive approach to federal cybersecurity workforce development. The enhanced CyberCorps fellowship program will also emphasize diversity and inclusion to address the critical shortage of cybersecurity professionals and bring fresh perspectives to cyber challenges. The program will actively recruit individuals from underrepresented groups, including women, people of color, veterans, and neurodivergent individuals.

To achieve this, the program will partner with organizations like Girls Who Code and the Hispanic IT Executive Council to promote cybersecurity careers and expand the applicant pool. The Department of Labor, in conjunction with the NSF, will establish a Cyber Opportunity Fund to provide additional scholarships and grants for individuals from underrepresented groups pursuing cybersecurity education through the CyberCorps program.

In addition, the program will develop standardized apprenticeship components that provide on-the-job training and clear pathways to full-time employment, with a focus on recruiting from diverse industries and backgrounds. Furthermore, partnerships with Historically Black Colleges and Universities, Hispanic-Serving Institutions, and Tribal Colleges and Universities will be strengthened to enhance their cybersecurity programs and create a pipeline of diverse talent for the CyberCorps program.

The CyberCorps program will expand its scope to include an international component, allowing for exchanges with allied nations’ cybersecurity agencies and bringing international students to U.S. universities for advanced studies. This will help position the United States as a global leader in cybersecurity education and training while fostering a worldwide community of professionals capable of responding effectively to evolving cyber threats.

By incorporating these elements, the enhanced CyberCorps fellowship program will not only address immediate federal cybersecurity needs but also contribute to building a diverse, skilled, and globally aware cybersecurity workforce for the future.

Implementation Considerations

To successfully establish and execute the comprehensive Action Plan and its associated initiatives, careful planning and coordination across multiple agencies and stakeholders will be essential. Below are some of the key timeline and funding considerations the ONCD should factor into its implementation.

Key milestones and actions for the first two years

Months 1–6:

Months 7–12:

Months 13–18:

Months 19–24:

Program evaluation and quality assurance

Beyond these key milestones, the Action Plan must establish clear evaluation frameworks to ensure program quality and effectiveness, particularly for integrating non-federal education programs into federal hiring pathways. For example, to address OPM’s need for evaluating non-federal technical and career education programs under the Recent Graduates Program, the Action Plan will implement the following evaluation framework:

The implementation of these criteria will be overseen by the same advisory board established in Months 1–6, expanding its scope to include program evaluation and certification. This approach leverages existing governance structures while providing OPM with quantifiable metrics to evaluate non-federal program graduates.

Budgetary, resource, and personnel needs

The estimated annual budget for the proposed initiative ranges from $125 million to $200 million. This range considers cost-effective resource allocation strategies, including the integration of existing platforms and focused partnerships. Key components of the program include:

Potential funding sources

Funding for this initiative can be sourced through a variety of channels. First, congressional appropriations via the annual budget process are expected to provide a significant portion of the financial support. Additionally, reallocating existing funds from cybersecurity and workforce development programs could account for approximately 25–35% of the overall budget. This reallocation could include funding from current programs like NICE, SFS, and other workforce development grants, which can be repurposed to support this broader initiative without requiring entirely new appropriations.

Public-private partnerships will also be explored, with potential contributions from industry players who recognize the value of a robust cybersecurity workforce. Grants from federal entities such as DHS, DoD, and NSF are viable options to supplement the program’s financial needs. To offset costs, fees collected from credentialing and training programs could serve as an additional revenue stream.

Finally, the Action Plan and its initiatives will seek contributions from international development funds aimed at capacity-building, as well as financial support from allied nations to aid in the establishment of joint international programs.

Conclusion

Establishing a comprehensive Cyber Workforce Action Plan represents a pivotal move toward securing America’s digital future. By creating flexible, accessible career pathways into cybersecurity, fostering innovative education and training models, and promoting both domestic diversity and international cooperation, this initiative addresses the urgent need for a skilled and resilient cybersecurity workforce.

The impact of this proposal is wide-ranging. It will not only reinforce national security by strengthening the nation’s cyber defenses but also contribute to economic growth by creating high-paying jobs and advancing U.S. leadership in cybersecurity on the global stage. By expanding access to cybersecurity careers and engaging previously underutilized talent pools, this initiative will ensure the workforce reflects the diversity of the nation and is prepared to meet future cybersecurity challenges.

The next administration must make the implementation of this plan a national priority. As cyber threats grow more complex and sophisticated, the nation’s ability to defend itself depends on developing a robust, adaptable, and highly skilled cybersecurity workforce. Acting swiftly to establish this strategy will build a stronger, more resilient digital infrastructure, ensuring both national security and economic prosperity in the 21st century. We urge the administration to allocate the necessary resources and lead the transformation of cybersecurity workforce development. Our digital future—and our national security—demand immediate action.

Teacher Education Clearinghouse for AI and Data Science

The next presidential administration should develop a teacher education and resource center that includes vetted, free, self-guided professional learning modules, resources to support data-based classroom activities, and instructional guides pertaining to different learning disciplines. This would provide critical support to teachers to better understand and implement data science education and use of AI tools in their classroom. Initial resource topics would be: 

In addition, this resource center would develop and host free, pre-recorded, virtual training sessions to support educators and district professionals to better understand these resources and practices so they can bring them back to their contexts. This work would improve teacher practice and cut administrative burdens. A teacher education resource would lessen the digital divide and ensure that our educators are prepared to support their students in understanding how to use AI tools so that each and every student can be college and career ready and competitive at the global level. This resource center would be developed using a process similar to the What Works Clearinghouse, such that it is not endorsing a particular system or curriculum, but is providing a quality rating, based on the evidence provided. 

Challenge and Opportunity

AI is an incredible technology that has the power to revolutionize many areas, especially how educators teach and prepare the next generation to be competitive in higher education and the workforce. A recent RAND study found that education leaders see promise in using AI to adapt instructional content to the level of their students and to generate instructional materials and lesson plans. While this technology holds a wealth of promise, the field has developed so rapidly that people across the workforce do not understand how best to take advantage of AI-based technologies. One of the most crucial areas for this is education. AI-enabled tools have the potential to improve instruction, curriculum development, and assessment, but most educators have not received adequate training to feel confident using them in their pedagogy. In a Spring 2024 pilot study (Beiting-Parrish & Melville, in preparation), initial results indicated that 64.3% of educators surveyed had not had any professional development or training in how to use AI tools. In addition, more than 70% of educators surveyed felt they did not know how to pick AI tools that are safe for use in the classroom or how to detect biased tools. The RAND study likewise indicated that only 18% of educators reported using AI tools for classroom purposes, and approximately half of that 18% used AI because they had been specifically recommended or directly provided a tool for classroom use. This suggests that educators need substantial support in choosing and deploying tools for classroom use. Providing guidance and resources to support vetting tools for safe, ethical, appropriate, and effective instruction is one of the cornerstone missions of the Department of Education. This responsibility should not rest on the shoulders of individual educators, who have varying levels of technical and curricular knowledge, especially veteran teachers who have been teaching for more than a decade.

If the teachers themselves do not have enough professional development or expertise to select and teach new technology, they cannot be expected to thoroughly prepare their students to understand emerging technologies such as AI, nor the underpinning concepts necessary to understand these technologies, most notably data science and statistics. As a result, a nationwide lack of emphasis on data literacy is putting students’ futures at risk. Recent results from the National Assessment of Educational Progress (NAEP) show a sharp decline in student performance in data literacy, probability, and statistics, outpacing declines in other content areas. In 2019, the NAEP High School Transcript Study (HSTS) revealed that only 17% of students completed a course in statistics and probability, and less than 10% of high school students completed AP Statistics. Furthermore, the HSTS showed that less than 1% of students completed a dedicated course in modern data science or applied data analytics in high school. Students are graduating with record-low proficiency in data, statistics, and probability, and without learning modern data science techniques. While students’ data and digital literacy falter, AI-generated content is proliferating online; students are not building the critical thinking skills and discerning eye needed to determine what is real versus what has been AI-generated, and they are not prepared to enter the workforce in sectors that are booming. The future the nation’s students will inherit is one in which experience with AI tools and Big Data will be expected of anyone who wants to be competitive in the workforce.

Whether students aren’t getting the content because it isn’t given its due priority or because teachers aren’t comfortable teaching it, AI and Big Data are here, and our educators don’t have the tools to help students get ready for a world in the midst of a data revolution. Veteran educators and preservice education programs alike may lack an understanding of the essential concepts in statistics, data literacy, or data science that would allow them to feel comfortable teaching about and using AI tools in their classes. Additionally, many standard assessment and practice tools are no longer fit for use in a world where every student can generate an A-quality paper in seconds with proper prompting. The rise of AI-generated content has created a new frontier in information literacy: students need to question the output of publicly available LLM-based tools, such as ChatGPT, and to view what they see online more critically, given the rise of AI-generated deepfakes, while educators need to understand how to either incorporate these tools into their classrooms or teach about them effectively. Whether educators are ready or not, the existing digital divide could widen depending on whether they know how to help students use AI safely and effectively and have access to the resources and training to do so.

The United States finds itself at a crossroads in the global data boom. Demand in the economic marketplace, and threats to national security by way of artificial intelligence and mal-, mis-, and disinformation, leave educators facing an urgent problem in need of an immediate solution. In August 1958, 66 years ago, Congress passed the National Defense Education Act (NDEA), emphasizing teaching and learning in science and mathematics. Passed in direct response to the launch of Sputnik, the law supplied massive funding to “insure trained manpower of sufficient quality and quantity to meet the national defense needs of the United States.” The U.S. Department of Education, in partnership with the White House Office of Science and Technology Policy, must make bold moves now to create such a solution, as Congress did once before.

Plan of Action

In the years since the Space Race, one problem with STEM education persists: K-12 classrooms still teach students largely the same content; for example, the progression of high school mathematics through algebra, geometry, and trigonometry is largely unchanged. We are no longer racing to space; we are now racing to keep up with data. Data security, artificial intelligence, machine learning, and other mechanisms of our new information economy are all connected to national security, yet we do not have educators with the capacity to properly equip today’s students with the skills to combat current challenges on a global scale. Without a resource center to house the urgent professional development and classroom activities America’s educators are calling for, U.S. progress and leadership in spaces where AI and Big Data are being used will continue to dwindle, and our national security will remain at risk. It is beyond time for a new take on the NDEA, one that emphasizes more modern topics in the teaching and learning of mathematics and science by way of data science, data literacy, and artificial intelligence.

Previously, the Department of Education has created resource repositories to support the dissemination of information to the larger educational practice and research community. One such example is the What Works Clearinghouse (WWC), a federally vetted library of resources on educational products and empirical research. The WWC was created to help cut through the noise of competing educational product claims and ensure that only high-quality tools and research were being shared. A similar dynamic is unfolding now with AI and data science resources: many resources exist online, but much of this material is of dubious quality or even spreads erroneous information.

To combat this, we suggest the creation of something similar to the WWC, with a focus on vetted materials for educator and student learning around AI and data science. We propose the creation of the Teacher Education Clearinghouse (TEC) under the Institute of Education Sciences, in partnership with the Office of Educational Technology. Currently, the WWC costs approximately $2,500,000 to run, so we anticipate a similar budget for the TEC website. The resource vetting process would begin with a Request for Information (RFI) inviting educators and administrators in the larger field to submit high-quality materials, which would then be vetted against an evaluation framework designed to identify high-quality resources.

For example, the RFI might request example materials or lesson goals for the following subjects:

A framework for evaluating how useful these contributions might be for the Teacher Education Clearinghouse would consider the following principles:

Additionally, the TEC would include a series of quick-start guidebooks, broken down by topic, with resources on foundational topics such as “Introduction to AI” and “Foundational Data Science Vocabulary”.

When complete, this process would result in a national resource library housing a free series of asynchronous professional learning opportunities alongside classroom materials, activities, and datasets. This work could be promoted through the larger DoE as well as through the Regional Educational Laboratory program and state-level stakeholders. The professional learning would consist of prerecorded virtual trainings and related materials (e.g., slide decks, videos, and interactive components of lessons). The materials would include educator-facing resources to support professional development in Big Data and AI alongside student-facing lessons on AI Literacy that teachers could use with their students.

All materials would be publicly available for download on an ED-owned website. This would allow educators from any district, at any level of experience, to access materials that improve their understanding and pedagogy. It would especially benefit educators in less-resourced environments, who could still access the training they need to adequately support their students, regardless of local capacity for potentially expensive training and resource acquisition.

Now is the time to create such a resource center: there is currently no set of vetted, reliable resources available and accessible to the larger educator community, and teachers urgently need these resources to support themselves and their students in using these tools thoughtfully and safely. The successful development of this resource center would increase educator understanding of AI and data science such that the standing of U.S. students rises on international measurements such as the International Computer and Information Literacy Study (ICILS), along with participation in STEAM fields that rely on these skills.

Conclusion

The field of education is at a turning point: the rise of AI and Big Data necessitates increased focus on these areas in the K-12 classroom, yet most educators do not have the preparation needed to teach these topics adequately and fully prepare their students. For the United States to continue to be a competitive global power in technology and innovation, we need a workforce that understands how to use, apply, and develop new innovations using AI and data science. This proposal for a library of high-quality, open-source, vetted materials would support the democratization of professional development for all educators and their students.

Retiring Baby Boomers Can Turn Workers into Owners: Securing American Business Ownership through Employee Ownership

The vitality and competitiveness of America’s economy are in jeopardy. The Silver Tsunami of retiring business owners puts half of small businesses at risk: 2.9 million companies are owned by someone at or near retirement age, of which 375,000 are manufacturing, trade, and distribution businesses critical to our supply chains. Add to this that 40 percent of U.S. corporate stock is owned by foreign investors, which funnels these companies’ profits out of our country, weakening our ability to reinvest in our own competitiveness. If the steps to expand the availability of employee ownership were to address even just 10% of the Silver Tsunami companies with over 10 employees, this would preserve an estimated 57K small businesses and 2.6M jobs in communities across the U.S. Six hundred billion dollars in economic activity by American-owned firms would be preserved, ensuring that these firms’ profits continue to flow into American pockets.

Broad-based employee ownership (EO) is a powerful solution that preserves local American business ownership, protects our supply chains and the resiliency of American manufacturing, creates quality jobs, and grows the household balance sheets of American workers and their families. Expanding access to financing for EO is crucial at this juncture, given the looming economic threats of the Silver Tsunami and foreign business ownership.

Two important opportunities expand capital access to finance sales of businesses into EO, building on over 50 years of federal support for EO and over 65 years of supporting the flow of small business private capital to where it is not in adequate supply: first, the Employee Equity Investment Act (EEIA), and second, addressing barriers in the SBA 7(a) loan guarantee program.

Three trends create tremendous urgency to leverage employee ownership small business acquisition: (1) the Silver Tsunami, representing $6.5T in GDP and one in five private sector workers nationwide, (2) fewer than 30 percent of businesses are being taken over by family members, and (3) only one in five businesses put up for sale is able to find a buyer. 

Without preserving Silver Tsunami businesses, the current 40 percent share of foreign ownership will only grow. Supporting U.S. private investors in the mergers and acquisitions (M&A) space to proactively pitch EO to business owners, and to arrive with readily available financing, enables EO to compete with other acquisition offers, including those from foreign firms.

In communities all across the U.S., from urban to suburban to rural (where arguably the need to find buyers and the impact of job losses can be most acute), EO is needed to preserve these businesses and their jobs in our communities, maintain U.S. stock ownership, preserve manufacturing production capacity and competitive know-how, and give the next generation of business owners the potential to create economic opportunity for themselves and their families.

Challenge and Opportunity

Broad-based employee ownership (EO) of American small businesses is one of the most promising opportunities to preserve American ownership and small business resiliency and vitality, and to help address our country’s enormous wealth gap. EO gives today’s small business workforces a stake in the game and a firsthand understanding of what it means to be part owner of a business.

However, the growth of EO, and its ability to preserve American ownership of small businesses in our local economies, is severely hampered by limited access to financing.

Most EO transactions (which are market-rate sales) require the business owner to first learn about EO, then not only initiate the transaction (typically hiring a consultant to structure the deal for them) but also finance as much as 50 percent or more of the sale. This contrasts with how the M&A market traditionally works: buyers who provide the financing are the ones who initiate the transaction with business owners. This difference is a core reason why EO hasn’t grown as quickly as it could, given all of the backing provided through federal tax breaks dating back to 1974.

More than one form of EO is needed to address the urgent Silver Tsunami and related challenges, including Employee Stock Ownership Plans (ESOPs), which are only a fit for companies of about 40 employees and above, and worker-owned cooperatives and Employee Ownership Trusts (EOTs), which are a fit for companies of about 10 employees and above (below 10 is a challenge for any EO transition). Of small businesses with more than 10 employees, those with 10-19 employees make up 51% of the total; those with 20-49 employees make up 33%. In other words, the vast majority of companies with over 10 employees (the minimum size threshold for EO transitions) are below the 40+ employee threshold required for an ESOP. This underscores the importance of ensuring financing access for worker coops and EOTs that can support transitions of companies in the 10-40 employee range.

Without action, we are at risk of losing the small businesses and jobs that are in need of buyers as a result of the Silver Tsunami.

Across the entire small business economy, 2.9M businesses that provide 32.1M jobs are estimated to be at risk, representing $1.3T in payroll and $6.5T in business revenue. Homing in on only manufacturing, wholesale trade, and transportation & warehousing businesses, there are an estimated 375,000 businesses at risk that provide 5.5M jobs combined, representing $279.2B of payroll and $2.3T of business revenue.

Plan of Action

Two important opportunities will expand capital access to finance sales of businesses into EO and help correct the supply-demand imbalance in the small business merger and acquisition marketplace, where too many businesses need buyers and are at risk of closing down due to the Silver Tsunami.

First, passing new legislation, the Employee Equity Investment Act (EEIA), would establish a zero-subsidy credit facility at the Small Business Administration, enabling Congress to preserve the legacy of local businesses and create quality jobs with retirement security by helping businesses transition to employee ownership. By supporting private investment funds, referred to as Employee Equity Investment Companies (EEICs), Congress can support the private market to finance the sale of privately-held small- and medium-sized businesses from business owners to their employees through credit enhancement capabilities at zero subsidy cost to the taxpayer.

EEICs are private investment companies licensed by the Small Business Administration that can be eligible for low-cost, government-backed capital to either create or grow employee-owned businesses. In the case of new EO transitions, the legislation intends to “crowd in” private institutional capital sources to reduce the need for sellers to self-finance a sale to employees. Fees paid into the program by the licensed funds enable it to operate at a zero-subsidy cost to the federal government. 

The Employee Equity Investment Act (EEIA) helps private investors that specialize in EO to compete in the mergers & acquisition (M&A) space.

Second, addressing barriers to EO lending in the SBA 7(a) loan guarantee program by passing legislation that removes the personal guarantee requirement for worker coops and EOTs would help level the playing field, enabling companies transitioning to EO to qualify for this loan guarantee without requiring a single employee-owner to personally guarantee the loan on behalf of the entire owner group of 10, 50 or 500 employees. 

Importantly, our manufacturing supply chain depends on a network of Tier 1, 2, and 3 suppliers across the entire value chain, a mix of very large and very small companies (over 75% of manufacturing suppliers have 20 or fewer employees). The entire sector faces an increasingly fragile supply chain and growing workforce shortages, while also confronting the Silver Tsunami risk. Ensuring that EO transitions help us preserve the full range of suppliers, distributors, and other key businesses will depend on having capital that can finance companies of all sizes. The SBA 7(a) program can guarantee loans of up to $5M, which serves the smaller end of the small business size range.

Even though the SBA took steps in 2023 to make loans to ESOPs easier than under prior rules, the biggest addressable market for EO loans that fit within the SBA’s 7(a) loan size range is worker coops and EOTs (because ESOPs are only a fit for companies with about 40 employees or more, given higher regulatory costs). Worker coops and EOTs are currently not able to utilize this SBA product.

The legislative action needed is to require the SBA to remove the requirement for a personal guarantee under the SBA 7(a) loan guarantee program for acquisitions financing for worker cooperatives and Employee Ownership Trusts. The Capital for Cooperatives Act (introduced to both the House and the Senate most recently in May 2021) provides a strong starting point for the legislative changes needed. There is precedent for this change: Paycheck Protection Program loans and SBA Economic Injury Disaster Loans (EIDL) were made to cooperatives during the pandemic without requiring personal guarantees, and the aforementioned May 2023 rule change allows majority ESOPs to borrow without one.

There is not any expected additional cost to this program outside of some small updates to policies and public communication about the changes. 

Addressing barriers to EO lending in the SBA 7(a) loan guarantee program would open up bank financing to the full addressable market of EO transactions.

The Silver Tsunami of retiring business owners puts half of all employer-businesses urgently at risk if these business owners can’t find buyers, as the last of the baby boomers turns 65 in 2030. Maintaining American small business ownership is also critical, with 40% of the stock of American companies already owned by foreign stockholders. EO preserves domestic productive capacity as an alternative to acquisition by foreign firms, including those from China and other strategic competitors, which bolsters supply chain resiliency and U.S. strategic competitiveness. Manufacturing is a strong fit for EO, as it is consistently in the top two sectors for newly formed employee-owned companies, making up 20-25% of all new ESOPs.

Enabling private investors in the M&A space to proactively pitch EO to business owners, and come with readily available financing will help address these urgent needs, preserving small business assets in our communities, while simultaneously creating a new generation of American business owners.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

Frequently Asked Questions
How many employee-owned companies are there in the U.S. today?

There are an estimated 7,500+ EO companies in the U.S. today, with roughly 14 million employee-owners and assets well above $2T. Most are ESOPs (about 6,500), plus about 1,000 worker cooperatives, and under 100 EOTs.

How much of the Silver Tsunami risk could these supports for employee ownership financing potentially address?

For every 1% of Silver Tsunami companies with more than 10 employees that is able to transition to EO based on these recommendations, an estimated 5.7K firms, $60.7B in sales, 260K jobs, and $12.3B in payroll would be preserved.
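These per-1% estimates scale linearly to the 10% scenario cited in the memo’s introduction. A minimal sketch of the arithmetic, assuming straight linear scaling (which is how the estimates appear to be constructed):

```python
# Per-1% preservation estimates from the FAQ above: firms and jobs in thousands,
# dollars in billions. Assumption: impacts scale linearly with the share of
# Silver Tsunami companies (>10 employees) that transition to EO.
per_one_percent = {"firms_k": 5.7, "sales_b": 60.7, "jobs_k": 260, "payroll_b": 12.3}

# Scale to the 10% scenario quoted in the memo's introduction.
ten_percent = {k: round(v * 10, 1) for k, v in per_one_percent.items()}

print(ten_percent)
# → {'firms_k': 57.0, 'sales_b': 607.0, 'jobs_k': 2600, 'payroll_b': 123.0}
```

The resulting 57K firms, 2.6M jobs, and roughly $607B in sales match the introduction’s 57K businesses, 2.6M jobs, and “six hundred billion dollars” in economic activity after rounding.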

How much support has Congress and the federal government provided for employee ownership and small business access to capital in the past?

Congress and the federal government have demonstrated their support of small business and the EO form of small business in many ways, which this proposed two-pronged legislation builds on, for example:



  • Creation of the SBIC program in the SBA in 1958 designed to stimulate the small business segment of the U.S. economy by supplementing “the flow of private equity capital and long-term loan funds which small-business concerns need for the sound financing of their business operations and for their growth, expansion, and modernization, and which are not available in adequate supply [emphasis added]”

  • Passage of multiple pieces of federal legislation providing tax benefits to EO companies dating back to 1974

  • Passage of the Main Street Employee Ownership Act in 2018, which was passed with the intention of removing barriers to SBA loans or guarantees for EO transitions, including to allow ESOPs and worker coops to qualify for loans under the SBA’s 7(a) program. The law stipulated that the SBA “may” make the changes the law provided, but the regulations SBA initially issued made things harder, not easier. Over the next few years, Representatives Dean Phillips (D-MN) and Nydia Velazquez (D-NY), both on the House Small Business Committee, led an effort to get the SBA to make the most recent changes that benefitted ESOPs but not the other forms of EO.

  • Release of the first Job Quality Toolkit by the Commerce Department in July 2021, which explicitly includes EO as one of the job quality strategies

  • Passage of the WORK Act (Worker Ownership, Readiness, and Knowledge) in 2023 (incorporated as Section 346 of the SECURE 2.0 Act), which directs the Department of Labor (DOL) to create an Employee Ownership Initiative within the department to coordinate and fund state employee ownership outreach programs and also requires the DOL to set new standards for ESOP appraisals. The program was to be funded at $4 million in fiscal year 2025 (which starts in October 2024), gradually increasing to $16 million by fiscal year 2029, but it has yet to be appropriated.

I’ve never heard about EO transitions using a worker coop or an Employee Ownership Trust. How widespread is this?

EO transitions using worker cooperatives have been happening for decades. Over the past ten years, this practice has grown significantly. There is a 30-member network of practitioners that actively support small business transitions utilizing worker coops and EOTs called Workers to Owners. Employee Ownership Trusts are newer in the U.S. (though they are the standard EO form in Europe, with decades of strong track record) and are a rapidly growing form of EO with a growing set of practitioners.

Why does there need to be a specialized program to capitalize EO funds?

Given the supply-demand imbalance created by the Silver Tsunami of retiring business owners (lots of businesses need buyers), as well as the outsized positive benefits of EO, prioritizing this form of business ownership is critical to preserving these business assets in our local and national economies. Capital to finance the transactions is central to ensuring EO’s ability to play this important role.

What is the scale of the SBA’s 7(a) loan program?

The SBA 7(a) loan program has been, and continues to be, critical to opening up bank (and some CDFI) financing for small businesses writ large by guaranteeing loans up to $5M. In FY23, the SBA guaranteed more than 57,300 7(a) loans worth $27.5 billion.

What are the SBA’s 7(a) loan program’s general rules for personal guarantees?

The SBA 7(a) loan program’s current rules require that all owners with 20% or more ownership of a business provide a personal guarantee for the loan, but absent anyone owning 20%, at least one individual must provide the personal guarantee. The previously mentioned May 2023 rule changes updated this for majority ESOPs.

What would the SBA use in place of the personal guarantee?

Just as with the ESOP form of EO, the SBA would be able to consider documented proof of an EO borrower’s ability to repay the loan based on equity, cash flow, and profitability to determine lending criteria.

But isn’t it risky to lend without a personal guarantee?

Research into employee ownership demonstrates that EO companies have faster growth, higher profits, and that they outlast their competitors in business cycle downturns. There is precedent for offering loans without a personal guarantee. First, during COVID, the SBA extended both EIDL (Economic Injury Disaster Loans) and PPP (Paycheck Protection Program) loans to cooperatives without requiring a personal guarantee. Second, the SBA’s May 2023 rule changes allow majority ESOPs to borrow without personal guarantee.

Why is the largest addressable market for the SBA 7(a) loan within EO transitions for worker coops and EOTs?

Transaction values that fit under the $5M ceiling for the 7(a) loan guarantee overlap most heavily with deal sizes suitable for worker coops and EOTs. This is because ESOPs are not viable below about $750K-$1M EBITDA due to higher regulatory-related costs, while the other forms of EO are viable down to about 10 or so employees.


A typical bank- or CDFI-financed EO transaction is a senior loan of 50-70% and a seller note of 30-50%. With a $5M ceiling for the 7(a) loan guarantee, this would cap the EO transaction value for 7(a) loans at $10M (a 50% seller note of $5M alongside a $5M bank loan). If a sale price is 4-6x EBITDA (a measure of annual profit) at this transaction value, this would cap the eligible company EBITDA at $1.7-$2.5M, which captures only the lowest company size thresholds that could be viable for the ESOP form.
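The cap arithmetic above can be sketched explicitly. A minimal illustration, assuming the $5M 7(a) ceiling, a 50% seller note on the largest deals, and the memo’s 4-6x EBITDA sale multiple:

```python
# Assumptions from the memo: the 7(a) guarantee tops out at $5M, and a typical
# EO deal pairs a senior loan (50-70%) with a seller note (30-50%). The largest
# 7(a)-financeable deal occurs when the seller note hits its 50% maximum.
loan_ceiling_m = 5.0        # $M, 7(a) loan guarantee ceiling
seller_note_share = 0.50    # maximum seller-financed share of the deal
max_deal_m = loan_ceiling_m / (1 - seller_note_share)  # total transaction value, $M

# At a sale price of 4-6x EBITDA, the implied EBITDA ceiling for a 7(a)-financed deal:
ebitda_cap_m = (round(max_deal_m / 6, 1), round(max_deal_m / 4, 1))

print(max_deal_m, ebitda_cap_m)
# → 10.0 (1.7, 2.5)
```

The $10M deal ceiling and $1.7-$2.5M EBITDA range reproduce the figures stated in the paragraph above.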

Why is the SBA 7(a) loan especially important in the context of preserving supply chain resiliency?

Supply chain fragility and widespread labor shortages are the two greatest challenges facing American manufacturing operators today, with 75% of manufacturers citing attracting and retaining talent as their primary business challenge, and 65% citing supply chain disruptions as their next greatest challenge. Many don’t realize that the manufacturing sector is built like a block tower, with the Tier 1 (largest) suppliers to manufacturers at the top, Tier 2 suppliers at the next level down, and the widest foundational layer made up of Tier 3 suppliers. For example, a typical auto manufacturer will rely on 18,000 suppliers across its entire value chain, over 98% of which are small- or medium-sized businesses. In fact, 75% of manufacturing businesses have fewer than 20 employees. It is critical that we preserve American businesses across the entire value chain, which means opening up EO financing for companies of all sizes.

How important is the manufacturing sector to the overall American economy?

The manufacturing sector generates 12% of U.S. GDP (gross domestic product), and if we count the value of the sector’s purchasing, the figure rises to nearly one quarter of GDP. The sector also employs nearly one in ten American workers (over 14 million). Manufacturing plays a vital role in both our national security and public health. Finally, the sector has long been a source of quality jobs and a cornerstone of middle-class employment.

Why didn’t the SBA in its May 2023 ruling expand this option for worker coops and Employee Ownership Trusts?

Though we aren’t certain of the reasoning, it is most likely because ESOPs have the largest lobbying presence. Given the broad support by the federal government of ESOPs through a myriad of tax benefits designed to encourage companies to transition to ESOPs, it is the biggest form of EO, which enables its lobbying presence. As discussed, the size threshold (based on the costs to comply with regulatory requirements) puts ESOPs out of reach for companies with below $750K – $1M EBITDA (a measure of annual profit), which leaves a large swath of America’s small businesses unsupported by the SBA 7(a) loan guarantee when they are transacting an employee ownership succession plan.

Why can’t the SBA just make a rule change for its 7(a) loan guarantee program?

Likely, the lack of lobbying presence by parties representing the non-ESOP forms of employee ownership has resulted in the rule change not applying to the other forms of broad-based employee ownership. However, the data (as outlined above) clearly show that worker cooperatives and EOTs are needed to address the full breadth of Silver Tsunami EO need, given the overlap between loans that fit the size guidelines of the 7(a) loan guarantee and the companies that fit these forms of EO. As such, legislators focused on American business resiliency and competitiveness are in a good position to direct the SBA to mirror the ESOP personal guarantee treatment for worker cooperatives and EOTs.

Creating a Science and Technology Hub in Congress

Congress should create a new Science and Technology (S&T) Hub within the Government Accountability Office’s (GAO) Science, Technology Assessment, and Analytics (STAA) team to support an understaffed and overwhelmed Congress in addressing pressing science and technology policy questions. A new hub would connect Congress with technical experts and maintain a repository of research and information as well as translate this material to members and staff. There is already momentum building in Congress with several recent reforms to strengthen capacity, and the reversal of the Chevron doctrine infuses the issue with a new sense of urgency. The time is now for Congress to invest in itself. 

Challenge and Opportunity

Congress does not have the tools it needs to contend with pressing scientific and technical questions. In the last few decades, Congress has grappled with increasingly complex science and technology policy questions, such as social media regulation, artificial intelligence, and climate change. At the same time, its staff capacity has diminished; between 1994 and 2015, the Government Accountability Office (GAO) and Congressional Research Service (CRS), key congressional support agencies, lost about a third of their employees. Staff on key science-related committees like the House Committee on Science, Space, and Technology fell by nearly half.

As a result, members frequently lack the resources they need to understand science and technology. “[T]hey will resort to Google searches, reading Wikipedia, news articles, and yes, even social media reports. Then they will make a flurry of cold calls and e-mails to whichever expert they can get on the phone,” one former science staffer noted. “You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes,” Representative Jay Obernolte (R-CA), who holds a master’s degree in AI, told The New York Times. AI is just one example of a pressing science issue Congress must handle but lacks the tools to grapple with.

Moreover, reliance on external information can intensify polarization, because each side depends on a different set of facts, making it harder to find common ground. Without high-quality, nonpartisan science and technology resources, billions of dollars in funding may be allocated to technologies that do not work or policy solutions at odds with the latest science.

Additional science support could help Congress navigate complex policy questions related to emerging research,  understand science and technologies’ impacts on legislative issues, and grapple with the public benefits or negative consequences of various science and technology issues. 

The Supreme Court’s 2024 decision in Loper Bright Enterprises v. Raimondo instills a new sense of urgency. The reversal of the decades old “Chevron deference,” which directed courts to defer to agency interpretations in instances where statutes were unclear or silent, means Congress will now have to legislate with more specificity. To do so, it will need the best possible experts and technical guidance. 

There is momentum building for Congress to invest in itself. For the past several years, the Select Committee on the Modernization of Congress (which became a permanent subcommittee of the Committee on House Administration) advocated for increases to staff pay and resources to improve recruitment and retention. Additionally, the GAO Science, Technology Assessment, and Analytics (STAA) team has expanded to meet the moment. From 2019 to 2022, STAA’s staff grew from 49 to 129 employees, and the team produced 46 technology assessments and short-form explainers. These investments are promising but not sufficient. Congress can draw on this energy and the urgency of a post-Chevron environment to invest in a Science and Technology Hub.

Plan of Action

Congress should create a new Science and Technology Hub in GAO STAA

Congress should create a Science and Technology Hub within the GAO’s STAA. While most of the STAA’s current work responds to specific requests from members, a new hub within the STAA would build out more proactive and exploratory work by 1) brokering long-term relationships between experts and lawmakers and 2) translating research for Congress. The new hub would maintain relationships with rank-and-file members, not just committees or leadership. The hub could start by advising Congress on emerging issues where the partisan battle lines have not been drawn, such as AI, and over time it would build institutional trust and advise on more partisan issues.

Research shows that both parties respect and use congressional support agencies, such as GAO, so they are a good place to house the necessary expertise. Housing the new hub within STAA would also build on the existing resources and support STAA already provides and capitalize on the recent push to expand this team. The Hub could have a small staff of approximately 100 employees; the success of recently created small offices such as the Office of Whistleblower Ombuds shows that a modest staff can be effective. In a post-Chevron world, this hub could also play an important role liaising with federal agencies about how different statutory formulations will change implementation of science-related legislation and helping members and staff understand the ins and outs of the passage-to-implementation process.

The Hub should connect Congress with a wider range of subject matter experts.

Studies show that researcher-policymaker interactions are most effective when they are long-term working relationships rather than ad hoc interactions. The hub could set up advisory councils of experts to guide Congress on different key areas. Though ad hoc groups of experts have advised Congress over the years, Congress does not have institutionalized avenues for soliciting information. The hub’s nonpartisan staff should also screen for potential conflicts of interest. As a starting point, these advisory councils would support committee and caucus staff as they learn about emerging issues, and over time they could build more capacity to manage requests from individual member offices. Organizations like the National Academies of Sciences, Engineering, and Medicine already employ the advisory council model; however, they do not serve Congress exclusively, nor do they meet staff needs for quick turnaround or consultative support. The advisory councils would build on the model of the Office of Technology Assessment (OTA), an agency that advised Congress on science between the 1970s and 1990s. The new hub could take proactive steps to center representation in its advisory councils, learning from the example of the United Kingdom Parliament’s Knowledge Exchange Unit and its efforts to increase the number of women and people of color Parliament hears from.

The Hub should help compile and translate information for Congress.

The hub could maintain a one-stop repository to help Congress find and understand data and research on policy-relevant topics, drawing on it to distill large amounts of information into memos that members can digest. It could also hold regular briefings for members and staff on emerging issues. Over time, the Hub could build out a “living evidence” approach in which a body of research is maintained and updated with the best available evidence at a regular cadence. Such a resource would help counteract the effects of understaffing and staff turnover and provide critical assistance in legislating and oversight, particularly important in a post-Chevron world.

Conclusion

Taking straightforward steps like creating an S&T hub, which brokers relationships between Congress and experts and houses a repository of research on different policy topics, could help Congress understand and stay up to date on urgent science issues, ensuring more effective decision-making in the public interest.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

Frequently Asked Questions
What other investments can Congress make in itself at this time?

There are a number of additional investments Congress can make that would complement the work of the proposed Science and Technology Hub, including additional capacity for other Congressional support agencies and entities beyond GAO. For example, Congress could lift the cap on the number of staff each member can hire (currently set at 18), and invest in pipelines for recruitment and retention of personal and committee staff with science expertise. Additionally, Congress could advance digital technologies available to Congress for evidence access and networking with the expert community.

Why should the Hub be placed at GAO and how can the GAO adapt to meet this need?

The Hub should be placed in GAO to build on the momentum of recent investments in the STAA team. GAO has recently invested in building human capital with expertise in science and technology that can support the development of the Hub. The GAO should seize the moment to reimagine how it supports Congress as a modern institution. The new hub in the STAA should be part of an overall evolution, and other GAO departments should also capitalize on the momentum and build more responsive and member-focused processes to support Congress.

Creating an HHS Loan Program Office to Fill Critical Gaps in Life Science and Health Financing

We propose the establishment of a Department of Health and Human Services Loan Programs Office (HHS LPO) to fill critical and systematic gaps in financing that prevent innovative life-saving medicines and other critical health technologies from reaching patients, improving health outcomes, and bolstering our public health. To be effective, the HHS LPO requires an authority to issue or guarantee loans, up to $5 billion in total. Federally financed debt can help fill critical funding gaps and complement ongoing federal grants, contracts, reimbursement, and regulatory policies and catalyze private-sector investment in innovation.

Challenge and Opportunity

Despite recent advances in the biological understanding of human diseases and a rapidly advancing technological toolbox, commercialization of innovative life-saving medicines and critical health technologies faces enormous headwinds. This is due in part to the difficulty of accessing sustained financing across the entire development lifecycle. Further, macroeconomic trends such as the return to non-zero interest rates have substantially reduced the capital deployed by venture capital and private equity, especially over longer investment horizons. 

The average new medicine requires 15 years and over $2 billion to go from the earliest stages of discovery to widespread clinical deployment. Over the last 20 years, the earliest and riskiest portions of the drug discovery process have shifted from the province of large pharmaceutical companies to a patchwork of researchers, entrepreneurs, venture capitalists, and supporting organizations. While this trend has enabled new entrants into the biotechnology landscape, it has also required startup companies to navigate labyrinthine technical and regulatory guidelines, obtain long-term and risk-tolerant financing, and predict payor and provider willingness to ultimately adopt the product.

Additionally, there are major gaps in healthcare infrastructure such as lack of adequate drug manufacturing capacity, healthcare worker shortages, and declining rural hospitals. Limited investment is available for critical infrastructure to support telehealth, rural healthcare settings, biomanufacturing, and decentralized clinical trials, among others.

The challenges in health share similarities with other highly regulated, capital-intensive industries, such as energy. The Department of Energy (DOE) Loan Programs Office (LPO) was created in 2005 to offer loans and loan guarantees to support businesses deploying innovative clean energy, advanced transportation, and tribal energy projects in the United States. LPO has closed more than $40 billion in deals to date. While agencies across HHS rely primarily on grants and contracts to deploy research and development (R&D) funding, capital-intensive projects are best financed as loans, not only to appropriately balance risk between the government and borrowers but also to provide better stewardship of taxpayer resources via mechanisms that create liquidity with lower budget impact. Moreover, private-sector financing rates are subject to market-based interest rates, which can have enormous impacts on available capital for R&D.

Plan of Action

There are many federal credit programs across multiple departments and agencies that provide a strong blueprint for the HHS LPO to follow. Examples include the aforementioned DOE Loan Programs Office, which provides capital to scale large-scale energy infrastructure projects using new technologies, and the Small Business Administration’s credit programs, which provide credit financing to small businesses via several loan and loan matching programs.

Proposed Actions

We propose the following three actions:

Scope

Similar to how the DOE LPO serves the priorities of the DOE, the HHS LPO would develop strategic priorities based on market gaps and public health gaps. It would also develop a rigorous diligence process to prioritize, solicit, assess, and manage potential deals, in alignment with the Federal Credit Reform Act and the associated policies set forth by the Office of Management and Budget and followed by all federal credit programs. It would also seek companion equity investors and creditors from the private sector to create leverage and would provide portfolio support via demand-alignment and demand-generation mechanisms (e.g., advance manufacturing commitments and advance market commitments from insurers).

We envision several possible areas of focus for the HHS LPO:

  1. Providing loans or loan guarantees to amplify investment funds that use venture capital or other private investment tools, such as early-stage drug development or biomanufacturing capacity. While these funds may already exist, they are typically underpowered.
  2. Providing large-scale financing in partnership with private investors to fund major healthcare infrastructure gaps, such as rural hospitals, decentralized clinical trial capacity, telehealth services, and advanced biomanufacturing capacity.
  3. Providing financing to test innovative new finance models (e.g., portfolio-based R&D bonds) designed to attract additional capital into underfunded R&D and lower financial risks.

Conclusion

To address the challenges in bringing innovative life-saving medicines and critical health technologies to market, we need an HHS Loan Programs Office that would not only create liquidity by providing or guaranteeing critical financing for capital-intensive projects but also address critical gaps in the innovation pipeline, including treatments for rare diseases, underserved communities, biomanufacturing, and healthcare infrastructure. Finally, it would be uniquely positioned to pilot innovative financing mechanisms in partnership with the private sector to better align private capital with public health goals.


Frequently Asked Questions
What is the DOE Loan Programs Office, and how is it similar to the proposed HHS Loan Programs Office?

The DOE LPO was established by the Energy Policy Act of 2005, which authorizes the Secretary of Energy to provide loan guarantees for publicly or privately financed projects involving new and innovative energy technologies.


The DOE LPO provides a bridge to private financing and bankability for large-scale, high-impact clean energy and supply chain projects involving new and innovative technologies. It also expands manufacturing capacity and energy access within the United States. The DOE LPO has enabled companies involving energy and energy manufacturing technologies to achieve infrastructure-scale growth, including Tesla, an electric car manufacturer; Lithium Americas Corp., a company supplying lithium for batteries; and the Agua Caliente Solar Project, a solar power station sponsored by NRG Solar that was the largest in the world at its time of construction.


The HHS LPO would similarly augment, guarantee, or bridge to private financing for projects involving the development and deployment of new and innovative technologies in life sciences and healthcare. It would draw upon the structure and authority of the DOE LPO as its basis.

What potential use cases would the HHS LPO serve?

The HHS LPO could look to the DOE LPO for examples as to how to structure potential funds or use cases. The DOE LPO’s Title 17 Clean Energy Financing Program provides four eligible project categories: (1) projects deploying new or significantly improved technology; (2) projects manufacturing products representing new or significantly improved technologies; (3) projects receiving credit or financing from counterpart state-level institutions; and (4) projects involving existing infrastructure that also share benefits to customers or associated communities.


Drawing on these examples, the HHS LPO could support project categories such as (1) emerging health and life science technologies; (2) the commercialization and scaling access of novel technologies; and (3) expanding biomanufacturing capacity in the United States, particularly for novel platforms (e.g., cell and gene therapies).

How much would the HHS LPO cost?

The budget could be estimated via its authority to make or guarantee loans. Presently, the DOE LPO has over $400 billion in loan authority and actively manages a portfolio of just over $30 billion. Given this benchmark and the approximately $20 billion private market for early-stage healthcare venture capital, we encourage the creation of an HHS LPO with $5 billion in loan-making authority. Scaling the $180 million budget sought by the DOE LPO in FY2023 in proportion to portfolio size, we estimate that an HHS LPO with $5 billion in loan-making authority would require a budget appropriation of roughly $30 million.
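The proportional estimate above can be reproduced with simple arithmetic. This is an illustration using the memo's round numbers, not an official cost estimate:

```python
# Back-of-envelope estimate of the HHS LPO budget request, scaling the DOE
# LPO's FY2023 request by the ratio of portfolio sizes. Figures are the
# memo's round numbers, not official estimates.
doe_active_portfolio = 30e9   # DOE LPO actively managed portfolio (~$30B)
doe_fy2023_request = 180e6    # DOE LPO FY2023 budget request (~$180M)
hhs_loan_authority = 5e9      # proposed HHS LPO loan-making authority

# Assume administrative cost scales roughly linearly with portfolio size.
hhs_budget_estimate = doe_fy2023_request * (hhs_loan_authority / doe_active_portfolio)
print(f"Estimated HHS LPO appropriation: ${hhs_budget_estimate / 1e6:.0f} million")
# → Estimated HHS LPO appropriation: $30 million
```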

What accountability or oversight measures are required to ensure proper operation and evaluate performance?

The HHS LPO would be subject to oversight by the HHS Inspector General, OMB, and the relevant congressional committees: the House Energy and Commerce Committee and the Senate Health, Education, Labor, and Pensions Committee.


Like the DOE LPO, the HHS LPO would publish an Annual Portfolio Status Report detailing its new investments, existing portfolio, and other key financial and operational metrics.

What alternative options could serve the same purpose as the HHS LPO, and why is the HHS LPO preferable?

It is also possible for Congress to grant loan authority to existing funding agencies, such as BARDA, the Advanced Research Projects Agency for Health (ARPA-H), or the National Institutes of Health (NIH). However, given the highly specialized talent needed to effectively operate a complex loan financing operation, the program is significantly more likely to succeed if housed in a dedicated HHS LPO that works closely with the other health-focused funding agencies within HHS.


The other alternative is to expand the authority of other LPOs and financing agencies, such as the DOE LPO or the U.S. International Development Finance Corporation, to cover domestic health. However, that is likely to create conflicting priorities given their already large and diverse portfolios.

What are the next steps required to stand up the HHS LPO?

The project requires legislation similar to the Department of Energy’s Title 17 Clean Energy Financing Program, created via the Energy Policy Act of 2005 and subsequently expanded via the Infrastructure Investment and Jobs Act in 2021 and the Inflation Reduction Act in 2022.


This legislation would direct HHS to establish an office, presumably a Loan Programs Office, to make loan guarantees supporting new and innovative technologies in life sciences and healthcare. While the LPO could reside within an existing HHS division, it would ideally be established in a manner that enables it to serve projects across the full Department, including those from the National Institutes of Health, Food and Drug Administration, Biomedical Advanced Research and Development Authority, and the Centers for Medicare and Medicaid Services. As such, it would preferably not reside within any single one of these organizations. Like the DOE LPO, the HHS LPO would be led by a director, who would be directed to hire the finance, technical, and operational experts necessary for the function of the office.


Drawing on the Energy Policy Act of 2005 that created the DOE LPO, enabling legislation for the HHS Loan Programs office would direct the Secretary of HHS to make loan guarantees in consultation with the Secretary of Treasury toward projects involving new and innovative technologies in healthcare and life sciences. The enabling legislation would include several provisions:



  • Necessary included terms and conditions for loan guarantees created via the HHS LPO, including loan length, interest rates, and default provisions;

  • Allowance of fees to be captured via the HHS LPO to provide funding support for the program; and

  • A list of eligible project types for loan guarantees.

Who are potential supporters of this policy? Who are potential skeptics?

Supporters are likely to include companies developing and deploying life sciences and healthcare technologies, including early-stage biotechnology research companies, biomanufacturing companies, and healthcare technology companies. Patient advocates would be similarly supportive because of the LPO’s potential to bring new technologies to market and reduce the overall Weighted Average Cost of Capital (WACC) for biotechnology companies, potentially supporting expanded access.


Existing financiers of research in biomedical sciences and technology may be neutral or ambivalent toward this policy. On one hand, it would provide expanded access to syndicated financing or loan guarantees that would compound the impact of each dollar invested. On the other hand, most financiers currently use equity financing, which demands a high rate of return realized through subsequent investment rounds and operations. An LPO could provide financing that requires a lower rate of return, thereby diluting the influence of current financiers in the market.
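To make the WACC point concrete, here is a minimal sketch with entirely hypothetical figures (a 25% venture equity return expectation and 6% guaranteed debt are illustrative assumptions, not market data) showing how blending lower-cost, government-guaranteed debt into an otherwise equity-funded capital structure lowers the blended cost of capital:

```python
# WACC = (E/V) * r_e + (D/V) * r_d * (1 - t), where E is equity, D is debt,
# V = E + D, r_e and r_d are the costs of equity and debt, and t is the
# corporate tax rate. All figures below are hypothetical.
def wacc(equity: float, debt: float, cost_equity: float,
         cost_debt: float, tax_rate: float) -> float:
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

# All-equity financing at a typical venture return expectation.
all_equity = wacc(equity=100e6, debt=0, cost_equity=0.25, cost_debt=0.0, tax_rate=0.21)

# The same company replacing 40% of its capital with guaranteed debt at 6%.
blended = wacc(equity=60e6, debt=40e6, cost_equity=0.25, cost_debt=0.06, tax_rate=0.21)

print(f"All-equity WACC: {all_equity:.1%}")  # → All-equity WACC: 25.0%
print(f"Blended WACC:    {blended:.1%}")     # → Blended WACC:    16.9%
```

The lower blended rate is what makes marginal projects financeable, which is the mechanism by which an LPO could crowd capital into otherwise-underfunded areas.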


Skeptics are likely to include groups opposing expansions of government spending, particularly involving higher-risk mechanisms like loan guarantees. The DOE LPO has drawn the attention of several such skeptics, oftentimes leading to increased levels of oversight from legislative stakeholders. The HHS LPO could expect similar opposition. Other skeptics may include companies with existing medicines and healthcare technologies, who may be worried about competitors introducing products with prices and access provisions that have been enabled via financing with lower WACC.

Modernizing AI Fairness Analysis in Education Contexts

The 2022 release of ChatGPT and subsequent foundation models sparked a generative AI (GenAI) explosion in American society, driving rapid adoption of AI-powered tools in schools, colleges, and universities nationwide. Education technology was one of the first applications used to develop and test ChatGPT in a real-world context. A recent national survey indicated that nearly 50% of teachers, students, and parents use GenAI chatbots in school, and over 66% of parents and teachers believe that GenAI chatbots can help students learn more and faster. While this innovation is exciting and holds tremendous promise to personalize education, educators, families, and researchers are concerned that AI-powered solutions may not be equally useful, accurate, and effective for all students, in particular students from minoritized populations. It is possible that bias will be addressed as this technology matures; however, to ensure that students are not harmed as these tools become more widespread, it is critical for the Department of Education to provide guidance for education decision-makers evaluating AI solutions during procurement, to support EdTech developers in detecting and mitigating bias in their applications, and to develop new fairness methods that ensure these solutions serve the students with the most to gain from our educational systems. Creating this guidance will require leadership from the Department of Education to declare this issue a priority and to resource an independent organization with the expertise needed to deliver these services.  

Challenge and Opportunity

Known Bias and Potential Harm

There are many examples of AI-based systems introducing more bias into an already-biased system. One example with widely varying results for different student groups is the use of GenAI tools to detect AI-generated text as a form of plagiarism. Liang et al. found that several GPT-based plagiarism checkers frequently identified the writing of students for whom English is not their first language as AI-generated, even though their work was written before ChatGPT was available. The same errors did not occur with text written by native English speakers. However, Jiang (2024) found no bias against non-native English speakers when distinguishing human-authored essays from ChatGPT-generated essays written in response to analytical writing prompts from the GRE, which illustrates how thoughtful AI tool design and representative sampling in the training set can achieve fairer outcomes and mitigate bias. 

Beyond bias, researchers have raised additional concerns about the overall efficacy of these tools for all students; understanding differing results for subpopulations and potential instances of bias is a critical aspect of deciding whether these tools should be used by teachers in classrooms. For AI-based tools to be usable in high-stakes educational contexts such as testing, detecting and mitigating bias is essential, particularly when the consequences of an error are so high, such as for students from minoritized populations who may not have the resources to recover from it (e.g., failing a course, being prevented from graduating). 

Another example of algorithmic bias, from before the widespread emergence of GenAI, illustrates the potential harms: the Wisconsin Dropout Early Warning System. This AI-based tool was designed to flag students who may be at risk of dropping out of school; however, an analysis of its predictions found that the system disproportionately flagged African American and Hispanic students as likely to drop out when most of these students were not actually at risk. When teachers learn that one of their students is at risk, this may change how they approach that student, which can cause further negative treatment and consequences, creating a self-fulfilling prophecy and denying that student the educational opportunities and confidence they deserve. These examples are only two of many consequences of using systems with underlying bias, and they demonstrate the criticality of conducting fairness analysis before these systems are used with actual students. 
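The kind of subgroup error analysis that surfaced the Wisconsin disparity can be sketched in a few lines. The records and group labels below are entirely hypothetical; the point is the comparison of false positive rates across groups:

```python
# Compare false positive rates -- students flagged "at risk" who did not in
# fact drop out -- across demographic groups. All records are hypothetical.
from collections import defaultdict

# Each record: (group, flagged_by_model, actually_dropped_out)
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, flagged, dropped_out in records:
    if not dropped_out:              # the student was not actually at risk
        negatives[group] += 1
        if flagged:                  # ...but the model flagged them anyway
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {fpr:.2f}")
# → group_a: false positive rate = 0.50
# → group_b: false positive rate = 0.67
```

A gap like the one between the two hypothetical groups is exactly the signal a pre-deployment fairness audit is meant to catch.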

Existing Guidance on Fair AI & Standards for Education Technology Applications

Guidance for Education Technology Applications

Given the harms that algorithmic bias can cause in educational settings, there is an opportunity to provide national guidelines and best practices that help educators avoid them. The Department of Education is already responsible for protecting student privacy and provides guidelines via the Every Student Succeeds Act (ESSA) Evidence Levels to evaluate the quality of EdTech solution evidence. The Office of Educational Technology, through the support of a private non-profit organization (Digital Promise), has developed guidance documents for teachers and administrators, and another for education technology developers (U.S. Department of Education, 2023, 2024). In particular, “Designing for Education with Artificial Intelligence” includes guidance for EdTech developers, including an entire section called “Advancing Equity and Protecting Civil Rights” that describes algorithmic bias and suggests that “Developers should proactively and continuously test AI products or services in education to mitigate the risk of algorithmic discrimination” (p. 28). While this is a good overall guideline, the document is critically insufficient to help developers actually conduct these tests.

Similarly, the National Institute of Standards and Technology has released a publication on identifying and managing bias in AI. While this publication highlights some areas of the development process and several fairness metrics, it does not provide specific guidelines for using these fairness metrics, nor is it exhaustive. Finally, demonstrating the interest of industry partners, the EDSAFE AI Alliance, a philanthropically funded alliance representing a diverse group of educational technology companies, has also created guidance in the form of the 2024 SAFE (Safety, Accountability, Fairness, and Efficacy) Framework. Within the Fairness section of the framework, the authors highlight the importance of using fair training data, monitoring for bias, and ensuring the accessibility of any AI-based tool. But again, this framework does not provide specific actions that education administrators, teachers, or EdTech developers can take to ensure these tools are fair and not biased against specific populations. The risk to these populations and the limits of existing efforts demonstrate the need for further work to develop new approaches that can be used in the field. 

Fairness in Education Measurement

As AI becomes increasingly used in education, the field of educational measurement has begun creating a set of analytic approaches for finding examples of algorithmic bias, many of which are based on existing approaches to uncovering bias in educational testing. One common tool is Differential Item Functioning (DIF), which checks that test questions are fair for all students regardless of their background. For example, it ensures that native English speakers and students learning English have an equal chance to succeed on a question if they have the same level of knowledge. When differences are found, this indicates that a student’s performance on that question is not based on their knowledge of the content. 

While DIF checks have been a best practice in standardized testing for several decades, no comparable process yet exists for AI used in assessment. There is also little historical precedent indicating that for-profit educational companies will self-govern and self-regulate without a larger set of guidelines and expectations from a governing body, such as the federal government. 
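One standard way to run the DIF check described above is the Mantel-Haenszel common odds ratio; a minimal sketch follows. The item counts are hypothetical, and real analyses would add significance tests and effect-size classifications:

```python
# Mantel-Haenszel DIF check for a single test item. Students are matched on
# ability (e.g., total-score bands); within each band we compare the odds of
# a correct answer for a reference group vs. a focal group. An MH odds ratio
# near 1.0 suggests the item functions similarly for both groups; large
# deviations flag potential DIF. All counts below are hypothetical.

# Each stratum: (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
strata = [
    (40, 10, 35, 15),  # low-score band
    (60, 20, 55, 25),  # mid-score band
    (80, 5, 78, 7),    # high-score band
]

numerator = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
denominator = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
mh_odds_ratio = numerator / denominator

print(f"Mantel-Haenszel odds ratio: {mh_odds_ratio:.2f}")
# → Mantel-Haenszel odds ratio: 1.47
```

An analogous audit for an AI scoring tool would stratify students by a matching criterion and compare model outcomes across groups within each stratum, which is precisely the kind of procedure the field has not yet standardized.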

We are at a critical juncture as school districts begin adopting AI tools with minimal guidance or guardrails, and all signs point to increased use of AI in education. The U.S. Department of Education has an opportunity to take a proactive approach to ensuring AI fairness through strategic programs of support for school leadership, developers in educational technology, and experts in the field. It is important for the larger federal government to support all educational stakeholders under a common vision for AI fairness while adoption of AI for educational use is still in its early stages. 

Plan of Action 

To address this situation, the Department of Education’s Office of the Chief Data Officer should lead development of a national resource that provides direct technical assistance to school leadership, supports software developers and vendors of AI tools in creating quality tech, and invests resources to create solutions that can be used by both school leaders and application developers. This office is already responsible for data management and asset policies, and provides resources on grants and artificial intelligence for the field. The implementation of these resources would likely be carried out via grants to external actors with sufficient technical expertise, given the rapid pace of innovation in the private and academic research sectors. Leading the effort from this office ensures that these advances are answering the most important questions and can integrate them into policy standards and requirements for education solutions. Congress should allocate additional funding to the Department of Education to support the development of a technical assistance program for school districts, establish new grants for fairness evaluation tools that span the full development lifecycle, and pursue an R&D agenda for AI fairness in education. While it is hard to provide an exact estimate, similar existing programs currently cost the Department of Education between $4 and $30 million a year. 

Action 1. The Department of Education Should Provide Independent Support for School Leadership Through a Fair AI Technical Assistance Center (FAIR-AI-TAC) 

School administrators are hearing about the promise and concerns of AI solutions in the popular press, from parents, and from students. They are also being bombarded by education technology providers with new applications of AI within existing tools and through new solutions. 

These busy school leaders do not have time to learn the details of AI and bias analysis, nor do they have the technical background required to conduct deep technical evaluations of fairness within AI applications. Leaders are forced to either reject these innovations or implement them and expose their students to significant potential risk with the promise of improved learning. This is not an acceptable status quo.  

To address these issues, the Department of Education should create an AI Technical Assistance Center (the Center) that is tasked with providing direct guidance to state and local education leaders who want to incorporate AI tools fairly and effectively. The Center should be staffed by a team of professionals with expertise in data science, data safety, ethics, education, and AI system evaluation. Additionally, the Center should operate independently of AI tool vendors to maintain objectivity.

There is precedent for this type of technical support. The U.S. Department of Education’s Privacy Technical Assistance Center (PTAC) provides guidance related to data privacy and security procedures and processes to meet FERPA guidelines; they operate a help desk via phone or email, develop training materials for broad use, and provide targeted training and technical assistance for leaders. A similar kind of center could be stood up to support leaders in education who need support evaluating proposed policy or procurement decisions.  

This Center should provide a structured consulting service offering varying levels of expertise based on the individual stakeholder’s needs and the potential impact on learners of the system or tool being evaluated; this should include everything from basic AI literacy to active support in choosing technological solutions for educational purposes. The Center should partner with external organizations to develop a certification system for high-quality AI educational tools that have passed a series of fairness checks. Creating a fairness certification (operationalized by third-party evaluators) would make it much easier for school leaders to recognize and adopt fair AI solutions that meet student needs. 

Action 2. The Department of Education Should Provide Expert Services, Data, and Grants for EdTech Developers 

There are many educational technology developers with AI-powered innovations. Even when well-intentioned, some of these tools do not achieve their desired impacts or may be unintentionally unsafe due to a lack of processes and tests for fairness and safety.

Educational Technology developers generally operate under significant constraints when incorporating AI models into their tools and applications. Student data is often highly detailed and deeply personal, potentially containing financial, disability, and educational status information that is currently protected by FERPA, which makes it unavailable for use in AI model training or testing. 

Developers need safe, legal, and quality datasets that they can use for testing for bias, as well as appropriate bias evaluation tools. There are several promising examples of these types of applications and new approaches to data security, such as the recently awarded NSF SafeInsights project, which allows analysis without disclosing the underlying data. In addition, philanthropically-funded organizations such as the Allen Institute for AI have released LLM evaluation tools that could be adapted and provided to Education Technology developers for testing. A vetted set of evaluation tools, along with more detailed technical resources and instructions for how to use them would encourage developers to incorporate bias evaluations early and often. Currently, there are very few market incentives or existing requirements that push developers to invest the necessary time or resources into this type of fairness analysis. Thus, the government has a key role to play here.

The Department of Education should also fund a new grant program that tasks grantees with developing a robust and independently validated third-party evaluation system that checks for fairness violations and biases throughout the model development process from pre-processing of data, to the actual AI use, to testing after AI results are created. This approach would support developers in ensuring that the tools they are publishing meet an agreed-upon minimum threshold for safe and fair use and could provide additional justification for the adoption of AI tools by school administrators.

Action 3. The Department of Education Should Develop Better Fairness R&D Tools with Researchers 

There is still no consensus on best practices for how to ensure that AI tools are fair. As AI capabilities evolve, the field needs an ongoing vetted set of analyses and approaches that will ensure that any tools being used in an educational context are safe and fair for use with no unintended consequences.

The Department of Education should lead the creation of a working group or task force composed of subject matter experts from education, educational technology, educational measurement, and the larger AI field to identify the state of the art in existing fairness approaches for education technology and assessment applications, with a focus on modernized conceptions of identity. This proposed task force would be an inter-organizational group including representatives from several federal government offices, such as the Office of Educational Technology and the Chief Data Office, as well as prominent experts from industry and academia. An initial convening could be conducted alongside leading national conferences that already attract thousands of attendees conducting cutting-edge education research (such as the American Educational Research Association and the National Council on Measurement in Education).

The working group’s mandate should include creating a set of recommendations for federal funding to advance research on evaluating AI educational tools for fairness and efficacy. This research agenda would likely span multiple agencies, including NIST, the Institute of Education Sciences of the U.S. Department of Education, and the National Science Foundation. There are existing models for funding early-stage research and development with applied approaches, including the IES “Accelerate, Transform, Scale” programs, which integrate learning sciences theory with efforts to scale theories through applied education technology programs, and Generative AI research centers that have the existing infrastructure and mandates to conduct this type of applied research.

Additionally, the working group should recommend the selection of a specialized group of researchers who would contribute ongoing research into new empirically based approaches to AI fairness that would continue to be used by the larger field. This innovative work might look like developing new datasets that deliberately look for instances of bias and stereotypes, such as the CrowS-Pairs dataset. It may build on cutting-edge research into the specific contributions of variables and elements of LLM models that directly contribute to biased AI scores, such as the work being done by the AI company Anthropic. It may compare different foundation LLMs and demonstrate specific areas of bias within their output. It may also look like a collaborative effort between organizations, such as the development of the RSM-Tool, which looks for biased scoring. Finally, it may be an improved auditing tool for any portion of the model development pipeline. In general, the field does not yet have a set of universally agreed-upon, actionable tools and approaches that can be used across contexts and applications; this research team would help create these for the field.

Finally, the working group should recommend policies and standards that would incentivize vendors and developers working on AI education tools to adopt fairness evaluations and share their results.

Conclusion

As AI-based tools continue being used for educational purposes, there is an urgent need to develop new approaches to evaluating these solutions for fairness that include modern conceptions of student belonging and identity. This effort should be led by the Department of Education, through the Office of the Chief Data Officer, given the technical nature of the services and the relationship with sensitive data sources. While the Chief Data Officer should provide direction and leadership for the project, partnering with external organizations through federal grant processes would provide necessary capacity boosts to fulfill the mandate described in this memo. As we move into an age of widespread AI adoption, AI tools for education will be increasingly used in classrooms and in homes. Thus, it is imperative that robust fairness approaches are deployed before a new tool is used, in order to protect our students, and also to protect developers and administrators from potential litigation, loss of reputation, and other negative outcomes.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

Frequently Asked Questions
What are some examples of what is currently being done to ensure fairness in AI applications for educational purposes?

When AI is used to grade student work, fairness is evaluated by comparing the scores assigned by AI to those assigned by human graders across different demographic groups. This is often done using statistical metrics, such as the standardized mean difference (SMD), to detect any additional bias introduced by the AI. A common benchmark for SMD is 0.15; values above this threshold suggest potential machine bias relative to human scores. However, there is a need for more guidance on how to address cases where SMD values exceed this threshold.


In addition to SMD, other metrics like exact agreement, exact + adjacent agreement, correlation, and Quadratic Weighted Kappa are often used to assess the consistency and alignment between human and AI-generated scores. While these methods provide valuable insights, further research is needed to ensure these metrics are robust, resistant to manipulation, and appropriately tailored to specific use cases, data types, and varying levels of importance.
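The subgroup comparison described in these answers can be sketched in a few lines. This is an illustrative example only: the function names and toy scenario are our own, and the pooled-standard-deviation form of SMD is one common convention from automated-scoring practice, not a prescribed standard.

```python
import numpy as np

def smd(human, machine):
    """Standardized mean difference between machine and human scores,
    using a pooled-standard-deviation denominator (one common
    convention in automated-scoring evaluation)."""
    human, machine = np.asarray(human, float), np.asarray(machine, float)
    pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
    return (machine.mean() - human.mean()) / pooled_sd

def flag_bias_by_group(human, machine, groups, threshold=0.15):
    """Compute SMD within each demographic subgroup and flag values
    whose magnitude exceeds the conventional 0.15 benchmark."""
    human, machine, groups = map(np.asarray, (human, machine, groups))
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        value = smd(human[mask], machine[mask])
        report[g] = (round(float(value), 3), bool(abs(value) > threshold))
    return report
```

For instance, a subgroup whose machine scores exactly match human scores yields SMD = 0 and is not flagged, while a subgroup whose machine scores run uniformly one point high is flagged, since its SMD well exceeds 0.15.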

What are some concerns about using AI in education for students with diverse and overlapping identities?

Existing approaches to demographic post hoc analysis of fairness assume that there are two discrete populations that can be compared: for example, students from African-American families vs. those not from African-American families, students from an English language learner family background vs. those who are not, and other known family characteristics. However, in practice, people do not experience these discrete identities. Since at least the 1980s, contemporary sociological theories have emphasized that a person’s identity is contextual, hybrid, and fluid. One current approach to identity that integrates concerns of equity and has been applied to AI is “intersectional identity” theory. This approach has begun to yield promising new methods that bring contemporary approaches to identity into evaluating the fairness of AI using automated methods. Measuring all interactions between demographic variables quickly produces subgroups too small for reliable analysis; these interactions can instead be prioritized using theory, design principles, or more advanced statistical techniques (e.g., dimensional data reduction).
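One practical response to the small-sample problem noted above is to enumerate intersectional subgroups and retain only those large enough to support a stable fairness estimate. The sketch below is a hypothetical illustration: the function name and the `min_n` cutoff of 30 are assumptions for demonstration, not an established standard.

```python
from collections import Counter
from itertools import combinations

def viable_intersections(demographics, min_n=30):
    """Enumerate intersectional subgroups formed by crossing pairs of
    demographic variables, keeping only those with at least min_n
    members. `demographics` maps a variable name to a list of
    per-student values, one entry per student."""
    names = sorted(demographics)
    viable = {}
    for a, b in combinations(names, 2):
        counts = Counter(zip(demographics[a], demographics[b]))
        viable[(a, b)] = {pair: n for pair, n in counts.items() if n >= min_n}
    return viable
```

Subgroups that fall below the cutoff would then be handled by the prioritization or data-reduction techniques mentioned above rather than analyzed directly.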

Elevate and Strengthen the Presidential Management Fellows Program

Founded in 1977, the Presidential Management Fellows (PMF) program is intended to be “the Federal Government’s premier leadership development program for advanced degree holders across all academic disciplines” with a mission “to recruit and develop a cadre of future government leaders from all segments of society.” The challenges facing our country require a robust pipeline of talented and representative rising leaders across federal agencies. The PMF program has historically been a leading source of such talent. 

The next Administration should leverage this storied program to reinvigorate recruitment for a small, highly-skilled management corps of upwardly-mobile public servants and ensure that the PMF program retains its role as the government’s premier pipeline for early-career talent. It should do so by committing to placing all PMF Finalists in federal jobs (rather than only half, as has been common in recent years), creating new incentives for agencies to engage, and enhancing user experience for all PMF stakeholders. 

Challenge and Opportunity

Bearing the Presidential Seal, the Presidential Management Fellows (PMF) Program is the Federal Government’s premier leadership development program for advanced degree holders across all academic disciplines. Appropriately for a program created in the President’s name, the application process for the PMF program is rigorous and competitive. Following a resume and transcript review, two assessments, and a structured interview, the Office of Personnel Management (OPM) selects and announces PMF Finalists. 

Selection as a Finalist is only the first step in a PMF applicant’s journey to a federal position. After they are announced, PMF Finalists have 12 months to find an agency posting by completing a second round of applications to specific positions that agencies have designated as eligible for PMFs. OPM reports that “over the past ten years, on average, 50% of Finalists obtain appointments as Fellows.” Most Finalists who are placed are not appointed until late in the eligibility period: halfway through the 2024 eligibility window, only 85 of 825 finalists (10%) had been appointed to positions in agencies.

For applicants and universities, this reality can be dispiriting and damage the reputation of the program, especially for those not placed. The yearlong waiting period ending without a job offer for about half of Finalists belies the magnitude of the accomplishment of rising to the top of such a competitive pool of candidates eager to serve their country. Additionally, Finalists who are not placed in a timely manner will be likelier to pursue job opportunities outside of federal service.  At a moment when the federal government is facing an extraordinary talent crisis with an aging workforce and large-scale retirements, the PMF program must better serve its purpose as a trusted source of high-level, early-career talent.

The current program design also affects the experience of agency leaders—such as hiring managers and Chief Human Capital Officers (CHCOs)—as they consider hiring PMFs. When agencies hire a PMF for a 2-year placement, they cover the candidate’s salary plus an $8,000 fee to OPM’s PMF program office to support its operations. Agencies consider hiring PMF Finalists with the knowledge that the PMF has the option to complete a 6-month rotational assignment outside of their hiring unit. These factors may create the impression that hiring a PMF is “costlier” than other staffing options.

Despite these challenges, the reasons for agencies to invest in the PMF program remain numerous:

The PMF is still correctly understood as the government’s premier onramp program for early-career managerial talent. With some thoughtful realignment, it can sustain and strengthen this role and improve the experience for all its core stakeholders.

Plan of Action

The next Administration should take a direct hand in supporting the PMF Program. As the President’s appointee overseeing the program, the OPM Director should begin by publicly setting an ambitious placement percentage goal and then driving the below reforms to advance that goal. 

Recommendation 1. Increase the Finalist placement rate by reducing the Finalist pool.

The status quo reveals misalignment between the pool of PMF Finalists and demand for PMFs across government. This may be in part due to the scale of demand, but is also a consequence of PMF candidates and finalists with ever-broader skill sets, which makes placement more challenging and complex. Along with the 50% placement rates, the existing imbalance between finalists and placements is reflected in the decision to contract the finalist pool from 1100 in 2022 to 850 in 2023 and 825 in 2024. The next Administration should adjust the size of the Finalist pool further to ensure a near-100% placement rate and double down on its focus on general managerial talent to simplify disciplinary matching. Initially, this might mean shrinking the pool from the 825 advanced in 2024 to 500 or even fewer. 

The core principle is simple: PMF Finalists should be a valuable resource for which agencies compete. There should be (modestly) fewer Finalists than realistic agency demand, not more. Critically, this change would not aim to reduce the number of PMFs serving in government. Rather, it seeks to sustain the current numbers while dramatically reducing the number of Finalists not placed and creating a healthier set of incentives for all parties.

When the program can reliably boast high placement rates, then the Federal government can strategize on ways to meaningfully increase the pool of Fellows and use the program to zero in on priority hard-to-hire disciplines outside of general managerial talent.

Recommendation 2. Attach a financial incentive to hiring and retaining a PMF while improving accountability. 

To underscore the singular value of PMFs and their role in the hiring ecosystem, the next Administration should attach a financial incentive to hiring a PMF. 

Because of the $8,000 placement fee, PMFs are seen as a costlier route than other sources of talent. A financial incentive to hire PMFs would reverse this dynamic. The next Administration might implement a large incentive of $50,000 per Fellow, half granted when a Fellow is placed and the other half when the Fellow accepts a permanent full-time job offer in the Federal government. This split payment would signal an investment in Fellows as the future leaders of the federal government.

Assuming an initial cohort of 400 placed Fellows at $50,000 each, OPM would require $20 million plus operating costs for the PMF program office. To secure funds, the Administration could seek appropriations, repurpose funds through normal budget channels, or pursue an agency pass-the-hat model like the financing of the Federal Executive Board and Hiring Experience program offices. 

To parallel this incentive, the Administration should also implement accountability measures to ensure agencies more accurately project their PMF needs, assigning a cost to failing to place some minimum proportion (perhaps 70%) of the Finalists projected in a given cycle. This would limit the number of unplaced Finalists. Agencies that fail to meet the threshold should have reduced or delayed access to the PMF pool in subsequent years.

Recommendation 3. Build a Stronger Support Ecosystem 

In support of these implementation changes, the next Administration should pursue a series of actions to elevate the program and strengthen the PMF ecosystem. 

Even if the Administration pursues the above recommendations, some Finalists would remain unpaired. The PMF program office should embrace the role of talent concierge for this smaller, more manageably sized cohort of yet-unpaired Finalists, leveraging relationships across the government (including with PMF alumni and the Presidential Management Alumni Association (PMAA)) and OPM’s position as the government’s strategic talent lead to encourage agencies to consider specific PMF Finalists in a bespoke way. The Federal government should also consider ways to privilege applications from unplaced Finalists who meet the criteria for a specific posting.

To strengthen key PMF partnerships in agencies, the Administration should elevate the role of PMF Coordinators beyond “other duties as assigned” to a GS-14 “PMF Director.” With new incentives to encourage placement and consistent strategic orientation from agency partners, agencies will be in a better position to project their placement needs by volume and role and hire PMF Finalists who meet them. PMF Coordinators would have explicit performance measures that reflect ownership over the success of the program.

The Administration must commit and sustain senior-level engagement—in the White House and at the senior levels of OMB, OPM, and in senior agency roles including Deputy Secretaries, Assistant Secretaries for Management, and Chief Human Capital Officers—to drive forward these changes. It must seize key leverage points throughout the budget and strategic management cycle, including OPM’s Human Capital Operating Plan process, OMB’s Strategic Reviews process, and the Cross-Agency Priority Goal setting rhythms. And it must sustain focus, recognizing that these new design elements may not succeed in their first cycle, and should provide support for experimentation and innovation.

Current PMF Program Compared to Proposed Reform
Size of Finalist Pool
Status quo: 800-1100
Proposed change: 400-500

Placement Rate
Status quo: ~50%
Proposed change: Target 100%, achieve 80-90%

Total Placements
Status quo: 400-550
Proposed change: 320-450

Number of Unplaced Finalists
Status quo: 400-550
Proposed change: <100

Financial Model
Status quo: Agencies carry salary and benefits and pay a premium of $8,000 to OPM in cost recovery to fund the program office.
Proposed change: Each Fellow carries a financial incentive encouraging speedy placement; the program office and incentive are funded centrally.

Experience for Finalists
Status quo: Frustrating waits are typical; many hundreds of potential public servants are left unplaced; the experience of being a Finalist does not always reflect the magnitude of the accomplishment.
Proposed change: Finalists are a valuable, scarce commodity; they have more potential matches with agencies and experience shorter waits.

Experience for Agencies
Status quo: The large pool of Finalists is difficult to navigate; agencies harbor concerns about the quality of Fellows waiting for placement; there is little urgency to act; PMFs are seen as one talent pool among many; program coordination is often an “other duty as assigned.”
Proposed change: A smaller pool is easier to navigate; the finalist pool is of even higher quality; there is significant urgency to act to capture the financial incentive and meet talent needs; agencies clearly understand the role PMFs play in talent strategy; coordination and needs forecasting reside in a higher-graded, strategically oriented role.

Experience for Program Office
Status quo: The cost-recovery model creates significant uncertainty in budgeting and operations planning; it is difficult to make selections due to inconsistent agency need forecasting.
Proposed change: The program office manages access to a valuable asset; with less “selling,” staff focus on bespoke pairing for a smaller number of unpaired applicants and on shaping each year’s finalist pool to reflect improved needs forecasts.

Conclusion

For decades, the PMF program has consistently delivered top-tier talent to the federal government. However, the past few years have revealed a need for reform to improve the experience of PMF hopefuls and the agencies that will undoubtedly benefit from their skills. With a smaller Finalist pool, healthier incentives, and a more supportive ecosystem, agencies would compete for a subsidized pool of high-quality talent available to them at lower cost than alternative routes, and Fellows who clear the significant barrier of the rigorous selection process would have far stronger assurance of a placement. If these reforms are successfully implemented, esteem for the government’s premier onramp for rising managerial talent will rise, reinforcing the impression that the Federal government is a leading and prestigious employer of our nation’s rising leaders.


Frequently Asked Questions
What is the role of the PMF rotation?

The PMF program is a 2-year placement with an optional 6-month rotation in another office within the appointing agency or another agency. The rotation is an important and longstanding design element of a program aiming to build a rising cohort of managerial talent with a broad purview. However, because agencies must carry a PMF’s full salary and benefits and pay OPM a placement fee, the prospect that a Fellow spends one quarter of the appointment on rotation outside the hiring unit may act as a barrier to embracing PMF talent. A significant hiring subsidy would balance this concern.

How does shrinking the size of the Finalist pool enhance the program?

In the current program, OPM uses a rule of thumb to set the number of Finalists at approximately 80% of anticipated demand to minimize the number of unplaced Finalists. This is a prudent approach, reflected in shifting Finalist numbers in recent years: from 1100 in 2022 to 850 in 2023 and 825 in 2024. Despite these adjustments, placement rates have unfortunately remained near 50%. Agencies are failing to follow through on their projected demand for PMFs, which has unfortunate consequences for Finalists and presents management challenges for the PMF program office.


This reform proposal would take a large step by reducing the Finalist pool to well below the stated demand (500 or fewer) and focusing on general managerial talent to make the pairing process simpler. This would be, fundamentally, a temporary reset to raise placement rates and improve the experience for candidates, agencies, and the program management team. As placement rhythms strengthen along the lines described above, there is every reason for the program to grow.

Is a subsidy for PMF Finalists going to cost the government more money?

The subsidy proposed for placing a PMF candidate would not require a net increase in federal expenditures. In the status quo, all costs of the PMF program are borne by the government: agencies pay salaries and benefits, and pay a fee to OPM at the point of appointment. This proposal would surface and centralize these costs and create an agency incentive through the subsidy to hire PMFs, either by “recouping” funds collected from agencies through a pass-the-hat revolving fund or “capitalizing” on a central investment from another source. In either case, it would ensure that PMF Finalists are a scarce asset to be competed for, as the program was envisioned, and that the PMF program office manages thoughtful access to this asset for the whole government, rather than needing to be “selling” to recover operational costs.

A Quantitative Imaging Infrastructure to Revolutionize AI-Enabled Precision Medicine

Medical imaging, a non-invasive method to detect and characterize disease, stands at a crossroads. With the explosive growth of artificial intelligence (AI), medical imaging offers extraordinary potential for precision medicine yet lacks adequate quality standards to safely and effectively fulfill the promise of AI. Now is the time to create a quantitative imaging (QI) infrastructure to drive the development of precise, data-driven solutions that enhance patient care, reduce costs, and unlock the full potential of AI in modern medicine.

Medical imaging plays a major role in healthcare delivery and is an essential tool in diagnosing numerous health issues and diseases (e.g., oncology, neurology, cardiology, hepatology, nephrology, pulmonary, and musculoskeletal). In 2023, there were more than 607 million imaging procedures in the United States and, per a 2021 study, $66 billion (8.9% of the U.S. healthcare budget) is spent on imaging.  

Despite the importance and widespread use of medical imaging modalities such as magnetic resonance imaging (MRI), X-ray, ultrasound, and computed tomography (CT), imaging is rarely standardized or quantitative. This leads to unnecessary costs from repeat scans needed to achieve adequate image quality, and to unharmonized and uncalibrated imaging datasets, which are often unsuitable for AI/machine learning (ML) applications. In the nascent yet rapidly expanding world of AI in medical imaging, a well-defined standards and metrology framework is required to establish robust imaging datasets for true precision medicine, thereby improving patient outcomes and reducing spiraling healthcare costs.

Challenge and Opportunity 

The U.S. spends more on healthcare than any other high-income country yet performs worse on measures of health and healthcare. Research has demonstrated that medical imaging could help save money for the health system, with every $1 spent on inpatient imaging resulting in approximately $3 in total savings in healthcare delivered. However, to generate healthcare savings and improve outcomes, rigorous quality assurance (QA)/quality control (QC) standards are required for true QI and data integrity.

Today, medical imaging suffers from two shortcomings that inhibit AI:

Both result in variability impacting assessments and reducing the generalizability of, and confidence in, imaging test results and compromise data quality required for AI applications.

The growing field of QI, however, provides accurate and precise (repeatable and reproducible) quantitative-image-based metrics that are consistent across different imaging devices and over time. This benefits patients (fewer scans, biopsies), doctors, researchers, insurers, and hospitals and enables safe, viable development and use of AI/ML tools.  

Quantitative imaging metrology and standards are required as a foundation for clinically relevant and useful QI. A change from “this might be a stage 3 tumor” to “this is a stage 3 tumor” will affect how oncologists can treat a patient. Quantitative imaging also has the potential to remove the need for an invasive biopsy and, in some cases, provide valuable and objective information before even the most expert radiologist’s qualitative assessment. This can mean the difference between taking a nonresponding patient off a toxic chemotherapeutic agent or recognizing a strong positive treatment response before a traditional assessment. 

Plan of Action 

The incoming administration should develop and fund a Quantitative Imaging Infrastructure to provide medical imaging with a foundation of rigorous QA/QC methodologies, metrology, and standards—all essential for AI applications.

Coordinated leadership is essential to achieve such standardization. Numerous medical, radiological, and standards organizations support and recognize the power of QI and the need for rigorous QA/QC and metrology standards (see FAQs). Currently, no single U.S. organization has the oversight capabilities, breadth, mandate, or funding to effectively implement and regulate QI or a standards and metrology framework.

As set forth below, earlier successful approaches to quality and standards in other realms offer inspiration and guidance for medical imaging and this proposal:

Recommendation 1. Create a Medical Metrology Center of Excellence for Quantitative Imaging. 

Establishing a QI infrastructure would transform all medical imaging modalities and clinical applications. Our recommendation is that an autonomous organization be formed, possibly appended to existing infrastructure, with the mandate and responsibility to develop and operationally support the implementation of quantitative QA/QC methodologies for medical imaging in the age of AI. Specifically, this fully integrated QI Metrology Center of Excellence would need federal funding to:

Once implemented, the Center could focus on self-sustaining approaches such as testing and services provided for a fee to users.

Similar programs and efforts have resulted in funding (public and private) ranging from $90 million (e.g., Pathogen Genomics Centers of Excellence Network) to $150 million (e.g., Biology and Machine Learning – Broad Institute). Importantly, implementing a QI Center of Excellence would augment and complement federal funding currently being awarded through ARPA-H and the Cancer Moonshot, as neither have an overarching imaging framework for intercomparability between projects.  

While this list is by no means exhaustive, any organization would need input and buy-in from:

International organizations also have relevant programs, guidance, and insight, including:

Recommendation 2. Implement legislation and/or regulation providing incentives for standardizing all medical imaging. 

The variability of current standard-of-care medical imaging (whether acquired across different sites or over a period of time) creates different “appearances.” This variability can result in different diagnoses or treatment response measurements, even though the underlying pathology for a given patient is unchanged. Real-world examples abound, such as one study that found 10 MRI studies over three weeks resulted in 10 different reports. This heterogeneity of imaging data can lead to a variable assessment by a radiologist (inter-reader variability), AI interpretation (“garbage-in-garbage-out”), or treatment recommendations from clinicians. Efforts are underway to develop “vendor-neutral sequences” for MRI and other methods (such as quantitative ground truth references, metrological standards, etc.) to improve data quality and ensure intercomparable results across vendors and over time. 
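The cross-site and over-time variability described above is commonly summarized with simple reproducibility statistics. As a minimal illustrative sketch (the function and the phantom readings below are hypothetical, not drawn from the cited study), the percent coefficient of variation across repeated measurements of the same object can be computed as:

```python
import statistics

def coefficient_of_variation(measurements):
    """Percent coefficient of variation (CV%), a simple summary of
    scan-rescan or cross-site variability for a quantitative imaging
    biomarker measured on the same object (e.g., a phantom)."""
    mean = statistics.fmean(measurements)
    return 100 * statistics.stdev(measurements) / mean
```

For example, repeated phantom readings of 1.10, 1.05, and 1.15 yield a CV of about 4.5%; a QA program would track such values against an acceptance threshold over time and across scanners.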

Doing so, however, requires coordination among all original equipment manufacturers (OEMs) or legislation to incentivize standards. The 1992 Mammography Quality Standards Act (MQSA) provides an analogous roadmap: its passage implemented rigorous standards for mammography. Similar legislation focused on quality assurance of quantitative imaging, on reducing or eliminating machine bias, and on improved standards would reduce the need for repeat scans and improve datasets.

In addition, regulatory initiatives could also advance quantitative imaging. For example, in 2022, the Food and Drug Administration (FDA) issued Technical Performance Assessment of Quantitative Imaging in Radiological Device Premarket Submissions, recognizing the importance of ground truth references with respect to quantitative imaging algorithms. A mandate requiring the use of ground truth reference standards would change standard practice and be a significant step to improving quantitative imaging algorithms.

Recommendation 3. Ensure a funded QA component for federally funded research using medical imaging. 

All federal medical research grant or contract awards should contain QA funds and require rigorous QA methodologies. The quality system aspects of such grants would fit the scope of the project; for example, a multiyear, multisite project would have a different scope than single-site, short-term work.

NIH spends the majority of its $48 billion budget on medical research. Projects include multiyear, multisite studies with imaging components. While NIH does have guidelines on research and grant funding (e.g., Guidance: Rigor and Reproducibility in Grant Applications), this guidance falls short in multisite, multiyear projects where clinical scanning is a component of the study.  

To the extent NIH-funded programs fail to include ground truth references where clinical imaging is used, the resulting data cannot be accurately compared over time or across sites. Lack of standardization and failure to require rigorous and reproducible methods compromises the long-term use and applicability of the funded research. 

By contrast, implementation of rigorous standards for QA/QC, standardization, and related practices improves research in terms of reproducibility, repeatability, and ultimate outcomes. Further, confidence in imaging datasets enables the use of existing and qualified research in future NIH-funded work and/or in imaging dataset repositories that are being leveraged for AI research and development, such as the Medical Imaging and Data Resource Center (MIDRC). (See also: Open Access Medical Imaging Repositories.)

Recommendation 4. Implement a Clinical Standardization Program (CSP) for quantitative imaging. 

While not focused on medical imaging, the CDC’s CSPs have been incredibly successful and “improve the accuracy and reliability of laboratory tests for key chronic biomarkers, such as those for diabetes, cancer, and kidney, bone, heart, and thyroid disease.” By way of example, the CSP for Lipids Standardization has “resulted in an estimated benefit of $338M at a cost of $1.7M.” Given the breadth of use of medical imaging, implementing such a program for QI would have even greater benefits.  

Although many people think of the images derived from clinical imaging scans as “pictures,” the pixel and voxel numbers that make up those images contain meaningful biological information. The objective biological information that is extracted by QI is conceptually the same as the biological information that is extracted from tissue or fluids by laboratory assay techniques. Thus, quantitative imaging biomarkers can be understood to be “imaging assays.” 

The QA/QC standards that have been developed for laboratory assays can and should be adapted to quantitative imaging.  (See also regulations, history, and standards of the Clinical Laboratory Improvement Amendment (CLIA) ensuring quality laboratory testing.)

Recommendation 5. Implement an accreditation program and reimbursement code for quantitative imaging starting with qMRI.

The American College of Radiology currently provides basic accreditation for clinical imaging scanners and concomitant QA for MRI. These requirements, however, have been in place for nearly two decades and do not address many newer quantitative methods (e.g., relaxometry and ADC), nor do they account for the impact of image variability on effective AI use. Several new Current Procedural Terminology (CPT) codes focused on quantitative imaging have recently been adopted. An expansion of reimbursement codes for quantitative imaging could drive more widespread clinical adoption.

QI is analogous to the quantitative blood, serum, and tissue assays performed in clinical laboratories, which are subject to CLIA, one of the most impactful programs for improving the accuracy and reliability of laboratory assays. This CMS-administered mandatory accreditation program promulgates quality standards for all laboratory testing to ensure the accuracy, reliability, and timeliness of patient test results, regardless of where the test was performed.

Conclusion

These five proposals provide a range of actionable opportunities to modernize medical imaging for the age of AI, data integrity, and precision patient health. A comprehensive, metrology-based quantitative imaging infrastructure will transform the practice of medical imaging.

With robust metrological underpinnings and a funded infrastructure, the medical community will have confidence in QI data, unlocking powerful health insights that until now could only be imagined.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable, and safe future that we all hope for, whoever takes office in 2025 and beyond.

Frequently Asked Questions
Is scanner variability and lack of standardization really an issue?

Yes. Using MRI as an example, numerous articles and publications document that qMRI scanner output can vary between manufacturers, over time, and after software or hardware maintenance or upgrades.

What is in-vivo imaging metrology, and why is it the future?

With in-vivo metrology, measurements are performed on the “body of living subjects (human or animal) without taking the sample out of the living subject (biopsy).” True in-vivo metrology will enable the diagnosis or understanding of tissue state before a radiologist’s visual inspection. Such measurement capabilities are objective, in contrast to the subjective, qualitative interpretation by a human observer. In-vivo metrology will enhance and support the practice of radiology in addition to reducing unnecessary procedures and associated costs.

What are the essential aspects of QI?

Current digital imaging modalities can measure a variety of biological and physical quantities with accuracy and reliability (e.g., tissue characterization, physical dimensions, temperature, and body mass components). However, consensus standards and corresponding certification or accreditation programs are essential to bring the benefits of these objective QI parameters to patient care. The CSP follows this paradigm, as does the earlier CLIA; both have been instrumental in improving the accuracy and consistency of laboratory assays. This proposal aims to bring the same rigor to medical imaging, immediately improving the quality, safety, and effectiveness of clinical care and advancing the input data needed to create, and to safely and responsibly use, robust imaging AI tools for the benefit of all patients.

What are “phantoms,” or ground truth references, and why are they important?

Phantoms are specialized test objects used as ground truth references for quantitative imaging and analysis. NIST plays a central role in measuring and testing solutions for phantoms. Phantoms are used in ultrasound, CT, MRI, and other imaging modalities for routine QA/QC and machine testing. They are key to harmonizing and standardizing data and to improving the data quality needed for AI applications.

What do you mean by “precision medicine”? Don’t we already have it?

Precision medicine is a popular term with many definitions and approaches, applying to genetics, oncology, pharmacogenetics, and other fields. (See, e.g., NCI, FDA, NIH, National Human Genome Research Institute.) Generally, precision (or personalized) medicine focuses on the idea that treatment can be individualized rather than generalized. While there have been exciting advances in personalized medicine (such as gene testing), the variability of medical imaging is a major limitation to realizing precision medicine’s full potential. Recognizing that medical imaging is a fundamental measurement tool from diagnosis through measurement of treatment response and toxicity assessment, this proposal aims to transition medical imaging practices to quantitative imaging, enabling precision medicine and timely personalized approaches to patient care.

How does standardized imaging data and QI help radiology and support healthcare practitioners?

Radiologists need accurate and reliable data to make informed decisions. Improving standardization and advancing QI metrology will support radiologists by improving data quality. Data quality is even more essential where radiologists rely on AI platforms, since the outputs of AI models depend on sound acquisition methods and accurate quantitative datasets.


Standardized data also helps patients by reducing the need for repeat scans, which saves time, money, and unnecessary radiation (for ionizing methods).

Does quantitative imaging improve accessibility to healthcare?

Yes! Using MRI as an example, qMRI can advance and support efforts to make MRI more accessible. Historically, MRI systems have cost millions of dollars and are located in high-resource hospital settings. Numerous healthcare and policy stakeholders are working to create “accessible” MRI systems, including portable, lower-field-strength systems and systems designed for organ-specific diseases. New low-field systems can reach patient populations historically absent from high-resource hospital settings. However, robust and reliable quantitative data are needed to ensure that data collected in rural or nonhospital settings, or in low- and middle-income countries, can be objectively compared with data from high-resource hospital settings.


Further, accessibility can be limited by a lack of local expertise, and AI could help fill that gap. However, a QI infrastructure is needed for the safe and responsible use of AI tools, ensuring adequate quality of the input imaging data.

What is a specific example of the benefits of standardization?

The I-SPY 2 Clinical Breast Trials provide a prime example of the need for rigorous QA and scanner standardization. The I-SPY 2 trial is a novel approach to breast cancer treatment that closely monitors response to neoadjuvant therapy. If there is no immediate or early response, the patient is switched to a different drug. MR imaging is acquired at various points during treatment to determine the initial tumor size and functional characteristics, and then to measure any tumor shrinkage or response over the course of treatment. One quantitative MRI tumor characteristic that has shown promise for evaluating treatment response, and is being evaluated in the trial, is the apparent diffusion coefficient (ADC), a measure of tissue water mobility calculated from diffusion-weighted imaging. It is essential for the trial that MR results can be compared over time as well as across sites. To truly know whether a patient is responding, the radiologist must have confidence that any change in the MR reading or measurement reflects a physiological change and not a scanner change such as drift, gradient failure, or a software upgrade.
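The ADC calculation referenced above can be illustrated with a minimal sketch. Under the common mono-exponential model, the diffusion-weighted signal decays as S(b) = S0 · exp(−b · ADC), so ADC can be estimated from signals at two b-values. The function name and the numbers below are illustrative, not trial specifications.

```python
import math

def estimate_adc(s0: float, s_b: float, b: float) -> float:
    """Estimate the apparent diffusion coefficient (mm^2/s) from
    diffusion-weighted signals, assuming mono-exponential decay
    S(b) = S0 * exp(-b * ADC), with b-value b in s/mm^2."""
    if s0 <= 0 or s_b <= 0 or b <= 0:
        raise ValueError("signals and b-value must be positive")
    return math.log(s0 / s_b) / b

# Illustrative values: baseline signal 1000, signal 407 at b = 900 s/mm^2
adc = estimate_adc(1000.0, 407.0, 900.0)  # roughly 1.0e-3 mm^2/s
```

In practice, ADC maps are fit voxel-by-voxel from multiple b-values, but the same decay model underlies the calculation.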


For the I-SPY 2 trial, breast MRI phantoms and a standardized imaging protocol are used to test and harmonize scanner performance and evaluate measurement bias over time and across sites. This approach then provides clear data/information on image quality and quantitative measurement (e.g., ADC) for both the trial (comparing data from all sites is possible) as well as for the individual imaging sites.
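The phantom-based bias evaluation described above can be sketched as follows. The tolerance threshold and the ADC values are hypothetical placeholders, not I-SPY 2 specifications; the idea is simply to compare a scanner's measurement against the phantom's certified ground-truth value.

```python
def percent_bias(measured: float, reference: float) -> float:
    """Percent bias of a scanner measurement relative to the
    phantom's certified ground-truth value."""
    return 100.0 * (measured - reference) / reference

def scanner_within_tolerance(measured_adc: float,
                             phantom_adc: float,
                             tolerance_pct: float = 5.0) -> bool:
    """Hypothetical QA check: flag a scanner whose ADC bias against
    the phantom exceeds the chosen tolerance."""
    return abs(percent_bias(measured_adc, phantom_adc)) <= tolerance_pct

# e.g., phantom certified at 1.10e-3 mm^2/s, scanner reads 1.13e-3 mm^2/s
ok = scanner_within_tolerance(1.13e-3, 1.10e-3)  # ~2.7% bias: passes
```

Running such a check at each site before each imaging time point is what makes longitudinal and cross-site comparisons defensible.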

What are the benefits of a metrological and standards-based framework for medical imaging in the age of AI?

Nonstandardized imaging results in variation that requires orders of magnitude more data to train an algorithm. More importantly, without reliable and standardized datasets, AI algorithms drift, degrading both protocols and performance. Creating and supporting a standards-based framework for medical imaging will mitigate these issues and lead to:



  • An integrated and coordinated system for establishing quantitative imaging biomarkers (QIBs), screening, and treatment planning.

  • Cost savings: Standardizing data and implementing quantitative imaging results in superior datasets for clinical use or as part of large datasets for AI applications. Clinical Standardization Programs have focused on standardizing tests and have been shown to save “millions in health care costs.”

  • Better health outcomes: Standardization reduces reader error and enables new AI applications to support current radiology practices.

  • Support for radiologists’ diagnoses.

  • Fewer incorrect diagnoses (false positives and false negatives).

  • Elimination of millions of unnecessary invasive biopsies.

  • Fewer repeat scans.

  • Robust and reliable datasets for AI applications (e.g., preventing model collapse).


Such a framework benefits federal organizations such as the National Institutes of Health, the Centers for Medicare and Medicaid Services, and the Department of Veterans Affairs, as well as the private and nonprofit sectors (insurers, hospital systems, and pharmaceutical, imaging software, and AI companies). The ultimate beneficiary, however, is the patient, who will receive an objective, reliable quantitative measure of their health, relevant for both point-in-time assessment and longitudinal follow-up.

Who is likely to push back on this proposal, and how can that hurdle be overcome?

Possible pushback on such a program may come from: (1) radiologists who are unfamiliar with the power of quantitative imaging for precision health and/or the importance and benefits of clean datasets for AI applications; or (2) manufacturers (OEMs) who aim to improve output through differentiation and are focused on customers more interested in their qualitative practice.


Radiology practices: Radiology practices’ main objective is to provide the most accurate diagnosis possible in the least amount of time, as cost-effectively as possible. Standardization and calibration are generally perceived as requiring additional time and cost; however, these perceptions are often unfounded, and imaging variability itself introduces delays and challenges. The existing standard of care relies on qualitative assessments of medical images.


Qualitative assessment can be excellent for understanding a patient’s health at a single point in time (though even then subtle abnormalities can be missed), but longitudinal monitoring is impossible without robust metrological standards for reproducibility and quantitative assessment of tissue health. While a move from qualitative to quantitative imaging may require additional education, understanding, and time, such an infrastructure will give radiologists improved capabilities and an opportunity to supplement and augment the existing standard of care.


Further, AI is undeniably being incorporated into numerous radiology applications, which will require accurate and reliable datasets. It will therefore be important to work with radiology practices to demonstrate that a move to standardization will ultimately reduce time and improve the ability to accurately diagnose patients.


OEMs: Imaging device manufacturers work diligently to improve their outputs. To the extent differentiation is seen as a business advantage, a move toward vendor-neutral and scanner-agnostic metrics may initially be met with resistance. However, all OEMs are investing resources to improve AI applications and patient health. All benefit from input data that is standard and robust and provides enough transparency to ensure FAIR data principles (findability, accessibility, interoperability, and reusability).


OEMs have plenty of areas for differentiation, including improving the patient experience and shortening scan times. We believe OEMs, as part of their move to embrace AI, will find a clear metrology- and standards-based framework a positive for their own business and the field as a whole.

What is the first step to get this proposal off the ground? Could there be a pilot project?

The first step is to convene a meeting of leaders in the field within three months to establish priorities and timelines for successful implementation and adoption of a Center of Excellence. Any such Center must be well funded, have experienced leadership, and have the support of and collaboration across the relevant agencies and organizations.


There are numerous potential pilots. The key is to identify an actionable study whose results could be achieved within a reasonable time. For example, a pilot study demonstrating the importance of quantitative MRI and sound datasets for AI could be implemented within the Department of Veterans Affairs hospital system. The study could focus on quantifying the benefits of standardizing and implementing quantitative diffusion MRI, an “imaging biopsy” modality, and could mirror advances and knowledge identified in the existing I-SPY 2 clinical breast trials.

Why have similar efforts failed in the past? How will your proposal avoid those pitfalls?

The timing is right for three reasons: (1) quantitative imaging is doable; (2) AI is upon us; and (3) there is a desire and need to reduce healthcare costs and improve patient outcomes.


There is widespread agreement that QI methodologies have enormous potential benefits, and many government agencies and industry organizations have acknowledged this. Unfortunately, there has been no unifying entity with sufficient resources and professional leadership to coordinate and focus these efforts; many past efforts have been organized and run by volunteers. Finally, some previously funded efforts to support quantitative imaging (e.g., QIN and QIBA) have recently lost dedicated funding.


With rapid advances in technology, including the promise of AI, there is new and shared motivation across communities to revise our approach to data generation and collection at large, focused on standardization, precision, and transparency. By leveraging existing widespread support, along with dedicated resources for implementation and enforcement, this proposal will drive the necessary change.

Is there an effort or need for an international component?

Yes. Human health has no geographical boundaries, so a global approach to quantitative imaging would benefit all. QI is being studied, implemented, and adopted globally.


However, as in the U.S., while standards have been proposed, there is no international body to govern the implementation, coordination, and maturation of this process. The initiatives put forth here could provide a roadmap for global collaboration (ever more important with AI) and standards that would speed development and implementation both in the U.S. and abroad.