Retiring Baby Boomers Can Turn Workers into Owners: Securing American Business Ownership through Employee Ownership

America's economic vitality and competitiveness are in jeopardy. The Silver Tsunami of retiring business owners puts half of small businesses at risk: 2.9 million companies are owned by someone at or near retirement age, of which 375,000 are manufacturing, trade, and distribution businesses critical to our supply chains. Add to this that 40 percent of U.S. corporate stock is owned by foreign investors, which funnels these companies' profits out of our country, weakening our ability to reinvest in our own competitiveness. If steps to expand the availability of employee ownership were to address even just 10 percent of the Silver Tsunami companies with more than 10 employees, this would preserve an estimated 57K small businesses and 2.6M jobs in communities across the U.S. Six hundred billion dollars in economic activity by American-owned firms would be preserved, ensuring that these firms' profits continue to flow into American pockets.

Broad-based employee ownership (EO) is a powerful solution that preserves local American business ownership, protects our supply chains and the resiliency of American manufacturing, creates quality jobs, and grows the household balance sheets of American workers and their families. Expanding access to financing for EO is crucial at this juncture, given the looming economic threats of the Silver Tsunami and foreign business ownership.

Two important opportunities expand capital access to finance sales of businesses into EO, building on over 50 years of federal support for EO and over 65 years of federal support for channeling private capital to small businesses where it is not available in adequate supply: first, the Employee Equity Investment Act (EEIA), and second, addressing barriers in the SBA 7(a) loan guarantee program.

Three trends create tremendous urgency to leverage employee ownership for small business acquisition: (1) the Silver Tsunami, representing $6.5T in GDP and one in five private sector workers nationwide; (2) fewer than 30 percent of businesses being taken over by family members; and (3) only one in five businesses put up for sale finding a buyer.

Without preserving Silver Tsunami businesses, the current 40 percent share of foreign ownership will only grow. Supporting U.S. private investors in the mergers and acquisitions (M&A) space to proactively pitch EO to business owners, and to come with readily available financing, enables EO to compete with other acquisition offers, including offers from foreign firms.

In communities all across the U.S., from urban to suburban to rural (where arguably the need to find buyers and the impact of job losses can be most acute), EO is needed to preserve these businesses and their jobs in our communities, maintain U.S. stock ownership, preserve manufacturing production capacity and competitive know how, and create the potential for the next generation of business owners to create economic opportunity for themselves and their families.

Challenge and Opportunity

Broad-based employee ownership (EO) of American small businesses is one of the most promising opportunities to preserve American ownership and small business resiliency and vitality, and to help address our country's enormous wealth gap. EO gives today's small business workforces a stake in the game and the chance to understand what it means to be a part owner of a business.

However, the growth of EO, and its ability to preserve American ownership of small businesses in our local economies, is severely hampered by limited access to financing.

Most EO transactions (which are market-rate sales) require the business owner to first learn about EO, then not only to initiate the transaction (typically hiring a consultant to structure the deal), but also to finance as much as 50 percent or more of the sale. This contrasts with how the M&A market traditionally works: buyers who provide the financing are the ones who initiate the transaction with business owners. This difference is a core reason why EO hasn't grown as quickly as it could, given all of the backing provided through federal tax breaks dating back to 1974.

More than one form of EO is needed to address the urgent Silver Tsunami and related challenges: Employee Stock Ownership Plans (ESOPs), which are only a fit for companies of about 40 employees and above, and worker-owned cooperatives and Employee Ownership Trusts (EOTs), which are a fit for companies of about 10 employees and above (below 10 employees is a challenge for any EO transition). Of small businesses with more than 10 employees, those with 10-19 employees make up 51% of the total and those with 20-49 employees make up 33%. In other words, the vast majority of companies above the 10-employee minimum for EO transitions are below the 40-employee threshold required for an ESOP. This underscores the importance of ensuring financing access for worker coops and EOTs, which can support transitions of companies in the 10-40 employee range.
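The size-distribution arithmetic above can be sketched as a quick check, using the percentages cited in the memo (illustrative only):

```python
# Share of small businesses with >10 employees, by size band (figures from the memo)
size_shares = {
    "10-19": 0.51,   # fit for worker coops / EOTs; too small for an ESOP
    "20-49": 0.33,   # mostly below the ~40-employee ESOP threshold
}

below_esop_range = size_shares["10-19"] + size_shares["20-49"]
print(f"{below_esop_range:.0%} of >10-employee firms fall in the 10-49 band")
# prints: 84% of >10-employee firms fall in the 10-49 band
```

That 84% is the swath of Silver Tsunami companies for which worker coops and EOTs are the only viable EO forms.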

Without action, we are at risk of losing the small businesses and jobs that are in need of buyers as a result of the Silver Tsunami.

Across the entire small business economy, an estimated 2.9M businesses providing 32.1M jobs are at risk, representing $1.3T in payroll and $6.5T in business revenue. Homing in on only manufacturing, wholesale trade, and transportation & warehousing businesses, an estimated 375,000 at-risk businesses provide 5.5M jobs combined, representing $279.2B of payroll and $2.3T of business revenue.

Plan of Action

Two important opportunities will expand capital access to finance sales of businesses into EO and address the supply-demand imbalance in the small business merger and acquisition marketplace, where too many businesses need buyers and are at risk of closing down due to the Silver Tsunami.

First, passing new legislation, the Employee Equity Investment Act (EEIA), would establish a zero-subsidy credit facility at the Small Business Administration, enabling Congress to preserve the legacy of local businesses and create quality jobs with retirement security by helping businesses transition to employee ownership. By supporting private investment funds, referred to as Employee Equity Investment Companies (EEICs), Congress can support the private market to finance the sale of privately-held small- and medium-sized businesses from business owners to their employees through credit enhancement capabilities at zero subsidy cost to the taxpayer.

EEICs are private investment companies licensed by the Small Business Administration that can be eligible for low-cost, government-backed capital to either create or grow employee-owned businesses. In the case of new EO transitions, the legislation intends to “crowd in” private institutional capital sources to reduce the need for sellers to self-finance a sale to employees. Fees paid into the program by the licensed funds enable it to operate at a zero-subsidy cost to the federal government. 

The Employee Equity Investment Act (EEIA) helps private investors that specialize in EO to compete in the mergers & acquisition (M&A) space.

Second, addressing barriers to EO lending in the SBA 7(a) loan guarantee program by passing legislation that removes the personal guarantee requirement for worker coops and EOTs would help level the playing field, enabling companies transitioning to EO to qualify for this loan guarantee without requiring a single employee-owner to personally guarantee the loan on behalf of the entire owner group of 10, 50 or 500 employees. 

Importantly, our manufacturing supply chain depends on a network of tier 1, 2, and 3 suppliers across the entire value chain, a mix of very large and very small companies (over 75% of manufacturing suppliers have 20 or fewer employees). The sector faces an increasingly fragile supply chain and growing workforce shortages while also confronting Silver Tsunami risk. Ensuring that EO transitions can preserve the full range of suppliers, distributors, and other key businesses will depend on having capital that can finance companies of all sizes. The SBA 7(a) program can guarantee loans of up to $5M, which covers the smaller end of the small business size range.

Even though the SBA took steps in 2023 to make loans to ESOPs easier than under prior rules, the biggest addressable market for EO loans that fit within the SBA's 7(a) loan size range is worker coops and EOTs (because ESOPs, given their higher regulatory costs, are only a fit for companies of about 40 employees and above). Worker coops and EOTs are currently not able to utilize this SBA product.

The legislative action needed is to require the SBA to remove the personal guarantee requirement under the SBA 7(a) loan guarantee program for acquisition financing for worker cooperatives and Employee Ownership Trusts. The Capital for Cooperatives Act (introduced in both the House and the Senate, most recently in May 2021) provides a strong starting point for the legislative changes needed. There is precedent for this change: Paycheck Protection Program loans and SBA Economic Injury Disaster Loans (EIDL) were made to cooperatives during the pandemic without requiring personal guarantees, and the aforementioned May 2023 rule change allows majority ESOPs to borrow without a personal guarantee.

No additional cost is expected for this program beyond some small updates to policies and public communication about the changes.

Addressing barriers to EO lending in the SBA 7(a) loan guarantee program would open up bank financing to the full addressable market of EO transactions.

The Silver Tsunami of retiring business owners puts half of all employer businesses urgently at risk if these owners can't find buyers, as the last of the baby boomers turns 65 in 2030. Maintaining American small business ownership is also critical, with 40% of the stock of American companies already owned by foreign stockholders. EO preserves domestic productive capacity as an alternative to acquisition by foreign firms, including those from China and other strategic competitors, bolstering supply chain resiliency and U.S. strategic competitiveness. Manufacturing is a strong fit for EO: it is consistently in the top two sectors for newly formed employee-owned companies, making up 20-25% of all new ESOPs.

Enabling private investors in the M&A space to proactively pitch EO to business owners, and to come with readily available financing, will help address these urgent needs, preserving small business assets in our communities while simultaneously creating a new generation of American business owners.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
How many employee-owned companies are there in the U.S. today?

There are an estimated 7,500+ EO companies in the U.S. today, with nearly 14 million employee-owners and assets well above $2T. Most are ESOPs (about 6,500), plus about 1,000 worker cooperatives and under 100 EOTs.

How much of the Silver Tsunami risk could these supports for employee ownership financing potentially address?

For every 1% of Silver Tsunami companies with more than 10 employees that is able to transition to EO based on these recommendations, an estimated 5.7K firms, $60.7B in sales, 260K jobs, and $12.3B in payroll would be preserved.
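These per-1% figures scale linearly; a minimal sketch (using the memo's estimates as given) shows how they line up with the 10% scenario cited in the introduction:

```python
# Estimated impact per 1% of Silver Tsunami companies (>10 employees)
# transitioning to EO, per the memo's figures
per_percent = {
    "firms": 5_700,
    "sales_usd": 60.7e9,
    "jobs": 260_000,
    "payroll_usd": 12.3e9,
}

def impact(share_pct: float) -> dict:
    """Scale the per-1% estimates to a given percentage of companies."""
    return {k: v * share_pct for k, v in per_percent.items()}

ten_pct = impact(10)
print(f"{ten_pct['firms']:,.0f} firms, {ten_pct['jobs']:,.0f} jobs preserved")
# prints: 57,000 firms, 2,600,000 jobs preserved
```

At 10%, sales preserved come to roughly $607B, consistent with the "six hundred billion dollars" figure cited earlier.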

How much support has Congress and the federal government provided for employee ownership and small business access to capital in the past?

Congress and the federal government have demonstrated their support of small business and the EO form of small business in many ways, which this proposed two-pronged legislation builds on, for example:



  • Creation of the SBIC program in the SBA in 1958 designed to stimulate the small business segment of the U.S. economy by supplementing “the flow of private equity capital and long-term loan funds which small-business concerns need for the sound financing of their business operations and for their growth, expansion, and modernization, and which are not available in adequate supply [emphasis added]”

  • Passage of multiple pieces of federal legislation providing tax benefits to EO companies dating back to 1974

  • Passage of the Main Street Employee Ownership Act in 2018, which was passed with the intention of removing barriers to SBA loans or guarantees for EO transitions, including to allow ESOPs and worker coops to qualify for loans under the SBA’s 7(a) program. The law stipulated that the SBA “may” make the changes the law provided, but the regulations SBA initially issued made things harder, not easier. Over the next few years, Representatives Dean Phillips (D-MN) and Nydia Velazquez (D-NY), both on the House Small Business Committee, led an effort to get the SBA to make the most recent changes that benefitted ESOPs but not the other forms of EO.

  • Release of the first Job Quality Toolkit by the Commerce Department in July 2021, which explicitly includes EO as one of the job quality strategies

  • Passage of the WORK Act (Worker Ownership, Readiness, and Knowledge) in 2023 (incorporated as Section 346 of the SECURE 2.0 Act), which directs the Department of Labor (DOL) to create an Employee Ownership Initiative within the department to coordinate and fund state employee ownership outreach programs and also requires the DOL to set new standards for ESOP appraisals. The program was to be funded at $4 million in fiscal year 2025 (which starts in October 2024), gradually increasing to $16 million by fiscal year 2029, but it has yet to be appropriated.

I’ve never heard about EO transitions using a worker coop or an Employee Ownership Trust. How widespread is this?

EO transitions using worker cooperatives have been happening for decades, and over the past ten years the practice has grown significantly. A 30-member network of practitioners called Workers to Owners actively supports small business transitions utilizing worker coops and EOTs. Employee Ownership Trusts are newer in the U.S. (though they are the standard EO form in Europe, with decades of strong track record) and are a rapidly growing form of EO with a growing set of practitioners.

Why does there need to be a specialized program to capitalize EO funds?

Given the supply-demand imbalance created by the Silver Tsunami of retiring business owners (lots of businesses need buyers), as well as the outsized positive benefits of EO, prioritizing this form of business ownership is critical to preserving these business assets in our local and national economies. Capital to finance the transactions is central to ensuring EO's ability to play this important role.

What is the scale of the SBA’s 7(a) loan program?

The SBA 7(a) loan program has been, and continues to be, critical to opening up bank (and some CDFI) financing for small businesses writ large by guaranteeing loans up to $5M. In FY23, the SBA guaranteed more than 57,300 7(a) loans worth $27.5 billion.

What are the SBA’s 7(a) loan program’s general rules for personal guarantees?

The SBA 7(a) loan program’s current rules require that all owners with 20% or more ownership of a business provide a personal guarantee for the loan, but absent anyone owning 20%, at least one individual must provide the personal guarantee. The previously mentioned May 2023 rule changes updated this for majority ESOPs.

What would the SBA use in place of the personal guarantee?

Just as with the ESOP form of EO, the SBA would be able to consider documented proof of an EO borrower’s ability to repay the loan based on equity, cash flow, and profitability to determine lending criteria.

But isn’t it risky to lend without a personal guarantee?

Research into employee ownership demonstrates that EO companies have faster growth, higher profits, and that they outlast their competitors in business cycle downturns. There is precedent for offering loans without a personal guarantee. First, during COVID, the SBA extended both EIDL (Economic Injury Disaster Loans) and PPP (Paycheck Protection Program) loans to cooperatives without requiring a personal guarantee. Second, the SBA’s May 2023 rule changes allow majority ESOPs to borrow without personal guarantee.

Why is the largest addressable market for the SBA 7(a) loan within EO transitions for worker coops and EOTs?

The $5M ceiling for the 7(a) loan guarantee overlaps most with transaction values that are suitable for worker coops and EOTs. This is because ESOPs are not viable below about $750K-$1M in transaction value due to higher regulatory-related costs, while the other forms of EO are viable down to about 10 or so employees.


A typical bank- or CDFI-financed EO transaction is a senior loan of 50-70% and a seller note of 30-50%. With a $5M ceiling for the 7(a) loan guarantee, this caps the EO transaction value for 7(a) loans at $10M (a 50% seller note of $5M alongside a $5M bank loan). At sale prices of 4-6x EBITDA (a measure of annual profit), this caps eligible company EBITDA at $1.7-$2.5M, which captures only the lowest company size thresholds that could be viable for the ESOP form.
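The deal arithmetic can be worked through explicitly; this is a sketch using the memo's assumed deal structure, not underwriting guidance:

```python
# Back out the maximum company size (EBITDA) reachable with a 7(a)-guaranteed loan
SBA_7A_CEILING = 5_000_000  # max guaranteed senior loan, in dollars

# Typical structure: 50-70% senior loan, 30-50% seller note.
# The transaction value is maximized when the senior loan is its smallest share (50%):
max_transaction = SBA_7A_CEILING / 0.50   # $10M deal ($5M loan + $5M seller note)

# Sale prices of roughly 4-6x EBITDA imply a ceiling on eligible EBITDA:
ebitda_cap_low = max_transaction / 6      # ~$1.7M at a 6x multiple
ebitda_cap_high = max_transaction / 4     # $2.5M at a 4x multiple

print(f"Max deal: ${max_transaction/1e6:.0f}M; "
      f"EBITDA cap: ${ebitda_cap_low/1e6:.1f}M-${ebitda_cap_high/1e6:.1f}M")
# prints: Max deal: $10M; EBITDA cap: $1.7M-$2.5M
```

Since ESOPs generally only become viable around the top of this EBITDA band, most companies that fit under the 7(a) ceiling are candidates for worker coops or EOTs instead.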

Why is the SBA 7(a) loan especially important in the context of preserving supply chain resiliency?

Supply chain fragility and widespread labor shortages are the two greatest challenges facing American manufacturing operators today: 75% of manufacturers cite attracting and retaining talent as their primary business challenge, and 65% cite supply chain disruptions as their next greatest challenge. Many don't realize that the manufacturing sector is built like a block tower, with the Tier 1 (largest) suppliers to manufacturers at the top, Tier 2 suppliers at the next level down, and the widest foundational layer made up of Tier 3 suppliers. For example, a typical auto manufacturer will rely on 18,000 suppliers across its entire value chain, over 98% of which are small- or medium-sized businesses. In fact, 75% of manufacturing businesses have fewer than 20 employees. It is critical that we preserve American businesses across the entire value chain, and opening up EO financing for companies of all sizes is essential to doing so.

How important is the manufacturing sector to the overall American economy?

The manufacturing sector generates 12% of U.S. GDP (gross domestic product), and counting the value of the sector's purchasing, that figure rises to nearly one quarter of GDP. The sector also employs nearly one in ten American workers (over 14 million). Manufacturing plays a vital role in both our national security and public health. Finally, the sector has long been a source of quality jobs and a cornerstone of middle-class employment.

Why didn’t the SBA in its May 2023 ruling expand this option for worker coops and Employee Ownership Trusts?

Though we aren't certain of the reasoning, it is most likely because ESOPs have the largest lobbying presence. Given the federal government's broad support of ESOPs through a myriad of tax benefits designed to encourage companies to transition to ESOPs, the ESOP is the biggest form of EO, which enables that lobbying presence. As discussed, the size threshold (based on the costs of complying with regulatory requirements) puts ESOPs out of reach for companies below $750K-$1M EBITDA (a measure of annual profit), which leaves a large swath of America's small businesses unsupported by the SBA 7(a) loan guarantee when they are transacting an employee ownership succession plan.

Why can’t the SBA just make a rule change for its 7(a) loan guarantee program?

Likely, the lack of a lobbying presence for the non-ESOP forms of employee ownership has resulted in the rule change not applying to the other forms of broad-based employee ownership. However, the data (as outlined above) clearly show that worker cooperatives and EOTs are needed to address the full breadth of Silver Tsunami EO need, given the overlap between loans that fit the size guidelines of the 7(a) guarantee and the forms of EO suited to those sizes. As such, legislators focused on American business resiliency and competitiveness are in a good position to direct the SBA to mirror the ESOP personal guarantee treatment for worker cooperatives and EOTs.

Creating a Science and Technology Hub in Congress

Congress should create a new Science and Technology (S&T) Hub within the Government Accountability Office’s (GAO) Science, Technology Assessment, and Analytics (STAA) team to support an understaffed and overwhelmed Congress in addressing pressing science and technology policy questions. A new hub would connect Congress with technical experts and maintain a repository of research and information as well as translate this material to members and staff. There is already momentum building in Congress with several recent reforms to strengthen capacity, and the reversal of the Chevron doctrine infuses the issue with a new sense of urgency. The time is now for Congress to invest in itself. 

Challenge and Opportunity

Congress does not have the tools it needs to contend with pressing scientific and technical questions. In the last few decades, Congress has grappled with increasingly complex science and technology policy questions, such as social media regulation, artificial intelligence, and climate change. At the same time, its staff capacity has diminished; between 1994 and 2015, the Government Accountability Office (GAO) and Congressional Research Service (CRS), key congressional support agencies, lost about a third of their employees. Staff on key science-related committees like the House Committee on Science, Space, and Technology fell by nearly half.

As a result, members frequently lack the resources they need to understand science and technology. "[T]hey will resort to Google searches, reading Wikipedia, news articles, and yes, even social media reports. Then they will make a flurry of cold calls and e-mails to whichever expert they can get on the phone," one former science staffer noted. "You'd be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes," Representative Jay Obernolte (R-CA), who holds a master's degree in AI, told The New York Times. And AI is just one example of a pressing science need that Congress must handle but lacks the tools to grapple with.

Moreover, reliance on external information can intensify polarization, because each side depends on a different set of facts, making it harder to find common ground. Without high-quality, nonpartisan science and technology resources, billions of dollars in funding may be allocated to technologies that do not work or policy solutions at odds with the latest science.

Additional science support could help Congress navigate complex policy questions related to emerging research,  understand science and technologies’ impacts on legislative issues, and grapple with the public benefits or negative consequences of various science and technology issues. 

The Supreme Court's 2024 decision in Loper Bright Enterprises v. Raimondo instills a new sense of urgency. The reversal of the decades-old "Chevron deference," which directed courts to defer to agency interpretations where statutes were unclear or silent, means Congress will now have to legislate with more specificity. To do so, it will need the best possible expert and technical guidance.

There is momentum building for Congress to invest in itself. For the past several years, the Select Committee on the Modernization of Congress (which became a permanent subcommittee of the Committee on House Administration) advocated for increases to staff pay and resources to improve recruitment and retention. Additionally, the GAO Science, Technology Assessment, and Analytics (STAA) team has expanded to meet the moment. From 2019 to 2022, STAA’s staff grew from 49 to 129 and produced 46 technology assessments and short-form explainers. These investments are promising but not sufficient. Congress can draw on this energy and the urgency of a post-Chevron environment to invest in a Science and Technology Hub. 

Plan of Action

Congress should create a new Science and Technology Hub in GAO STAA

Congress should create a Science and Technology Hub within the GAO’s STAA. While most of the STAA’s current work responds to specific requests from members, a new hub within the STAA would build out more proactive and exploratory work by 1) brokering long-term relationships between experts and lawmakers and 2) translating research for Congress. The new hub would maintain relationships with rank-and-file members, not just committees or leadership. The hub could start by advising Congress on emerging issues where the partisan battle lines have not been drawn, such as AI, and over time it will build institutional trust and advise on more partisan issues. 

Research shows that both parties respect and use congressional support agencies such as GAO, making them a good place to house the necessary expertise. Housing the new hub within STAA would also build on the existing resources and support STAA already provides and capitalize on the recent push to expand this team. The Hub could have a small staff of approximately 100 employees; the success of recently created small offices, such as the Office of Whistleblower Ombuds, shows that a modest staff can be effective. In a post-Chevron world, this hub could also play an important role liaising with federal agencies about how different statutory formulations will change the implementation of science-related legislation, and helping members and staff understand the ins and outs of the passage-to-implementation process.

The Hub should connect Congress with a wider range of subject matter experts.

Studies show that researcher-policymaker interactions are most effective when they are long-term working relationships rather than ad hoc interactions. The hub could set up advisory councils of experts to guide Congress in different key areas. Though ad hoc groups of experts have advised Congress over the years, Congress does not have institutionalized avenues for soliciting information. The hub's nonpartisan staff should also screen for potential conflicts of interest. As a starting point, these advisory councils would support committee and caucus staff as they learn about emerging issues, and over time they could build more capacity to manage requests from individual member offices. Agencies like the National Academies of Sciences, Engineering, and Medicine already employ the advisory council model; however, they do not serve Congress exclusively, nor do they meet staff needs for quick-turnaround or consultative support. The advisory councils would build on the model used by the Office of Technology Assessment (OTA), an agency that advised Congress on science between the 1970s and 1990s. The new hub could take proactive steps to center representation in its advisory councils, learning from the example of the United Kingdom Parliament's Knowledge Exchange Unit and its efforts to increase the number of women and people of color Parliament hears from.

The Hub should help compile and translate information for Congress.

The hub could maintain a one-stop shop to help Congress find and understand data and research on policy-relevant topics, drawing on this repository to distill large amounts of information into memos that members can digest. It could also hold regular briefings for members and staff on emerging issues. Over time, the Hub could build out a "living evidence" approach in which a body of research is maintained and updated with the best available evidence at a regular cadence. Such a resource would help counteract the effects of understaffing and staff turnover and provide critical assistance in legislating and oversight, particularly important in a post-Chevron world.

Conclusion

Taking straightforward steps like creating an S&T hub, which brokers relationships between Congress and experts and houses a repository of research on different policy topics, could help Congress to understand and stay up-to-date on urgent science issues in order to ensure more effective decision making in the public interest.


Frequently Asked Questions
What other investments can Congress make in itself at this time?

There are a number of additional investments Congress can make that would complement the work of the proposed Science and Technology Hub, including additional capacity for other Congressional support agencies and entities beyond GAO. For example, Congress could lift the cap on the number of staff each member can hire (currently set at 18), and invest in pipelines for recruitment and retention of personal and committee staff with science expertise. Additionally, Congress could advance digital technologies available to Congress for evidence access and networking with the expert community.

Why should the Hub be placed at GAO and how can the GAO adapt to meet this need?

The Hub should be placed in GAO to build on the momentum of recent investments in the STAA team. GAO has recently invested in building human capital with expertise in science and technology that can support the development of the Hub. The GAO should seize the moment to reimagine how it supports Congress as a modern institution. The new hub in the STAA should be part of an overall evolution, and other GAO departments should also capitalize on the momentum and build more responsive and member-focused processes to support Congress.

Creating an HHS Loan Program Office to Fill Critical Gaps in Life Science and Health Financing

We propose the establishment of a Department of Health and Human Services Loan Programs Office (HHS LPO) to fill critical and systemic gaps in financing that prevent innovative life-saving medicines and other critical health technologies from reaching patients, improving health outcomes, and bolstering public health. To be effective, the HHS LPO requires the authority to issue or guarantee loans, up to $5 billion in total. Federally financed debt can help fill critical funding gaps, complement ongoing federal grants, contracts, reimbursement, and regulatory policies, and catalyze private-sector investment in innovation.

Challenge and Opportunity

Despite recent advances in the biological understanding of human diseases and a rapidly advancing technological toolbox, commercialization of innovative life-saving medicines and critical health technologies faces enormous headwinds. This is due in part to the difficulty of accessing sustained financing across the entire development lifecycle. Further, macroeconomic trends such as the return to non-zero interest rates have substantially reduced capital deployed by venture capital and private equity, especially at longer investment horizons.

The average new medicine requires 15 years and over $2 billion to go from the earliest stages of discovery to widespread clinical deployment. Over the last 20 years, the earliest and riskiest portions of the drug discovery process have shifted from the province of large pharmaceutical companies to a patchwork of researchers, entrepreneurs, venture capitalists, and supporting organizations. While this trend has enabled new entrants into the biotechnology landscape, it has also required startup companies to navigate labyrinthine regulatory guidelines, obtain long-term and risk-tolerant financing, and predict payor and provider willingness to ultimately adopt the product.

Additionally, there are major gaps in healthcare infrastructure, such as inadequate drug manufacturing capacity, healthcare worker shortages, and the decline of rural hospitals. Limited investment is available for critical infrastructure to support telehealth, rural healthcare settings, biomanufacturing, and decentralized clinical trials, among others.

The challenges in health share similarities with other highly regulated, capital-intensive industries, such as energy. The Department of Energy (DOE) Loan Programs Office (LPO) was created in 2005 to offer loans and loan guarantees supporting businesses that deploy innovative clean energy, advanced transportation, and tribal energy projects in the United States. The LPO has closed more than $40 billion in deals to date. While agencies across HHS rely primarily on grants and contracts to deploy research and development (R&D) funding, capital-intensive projects are best financed as loans, not only to appropriately balance risk between the government and borrowers but also to provide better stewardship of taxpayer resources via mechanisms that create liquidity with lower budget impact. Moreover, private-sector financing is subject to market-based interest rates, which can have enormous impacts on the capital available for R&D.

Plan of Action

There are many federal credit programs across multiple departments and agencies that provide a strong blueprint for the HHS LPO. Examples include the aforementioned DOE Loan Programs Office, which provides capital to scale energy infrastructure projects using new technologies, and the Small Business Administration’s credit programs, which provide financing to small businesses via several loan and loan-matching programs.

Proposed Actions

We propose the following three actions:

Scope

Similar to how the DOE LPO serves the priorities of the DOE, the HHS LPO would develop strategic priorities based on market gaps and public health gaps. It would also develop a rigorous diligence process to prioritize, solicit, assess, and manage potential deals, in alignment with the Federal Credit Reform Act and the associated policies set forth by the Office of Management and Budget and followed by all federal credit programs. It would also seek companion equity investors and creditors from the private sector to create leverage, and would provide portfolio support via demand-alignment and demand-generation mechanisms (e.g., advance manufacturing commitments and advance market commitments from insurers).
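Under the Federal Credit Reform Act, the budgetary cost of a loan or guarantee is its credit subsidy cost: the net present value of expected government cash flows, discounted at Treasury rates. The sketch below illustrates that logic for a hypothetical direct loan; all parameter values are illustrative assumptions, not figures from this memo or from actual OMB scoring.

```python
def subsidy_cost(principal, interest_rate, term_years,
                 annual_default_prob, recovery_rate, treasury_rate):
    """Rough FCRA-style credit subsidy estimate for a direct loan:
    cash out (disbursement) minus the NPV of expected cash in
    (repayments net of defaults, plus recoveries), discounted at
    the Treasury rate. All inputs are hypothetical illustrations."""
    r = interest_rate
    # Level annual payment for a fully amortizing loan
    payment = principal * r / (1 - (1 + r) ** -term_years)
    npv_in = 0.0
    surviving = 1.0  # probability the loan is still performing
    balance = principal
    for year in range(1, term_years + 1):
        interest = balance * r
        amort = payment - interest
        default_this_year = surviving * annual_default_prob
        # Expected cash in: full payment if performing, partial recovery on default
        npv_in += (surviving * (1 - annual_default_prob) * payment
                   + default_this_year * recovery_rate * balance) / (1 + treasury_rate) ** year
        surviving *= (1 - annual_default_prob)
        balance -= amort
    return principal - npv_in  # positive => net cost to the government

# Hypothetical $100M loan at 4% over 10 years, 2%/yr default, 50% recovery
cost = subsidy_cost(100e6, 0.04, 10, 0.02, 0.5, 0.035)
print(f"Estimated subsidy cost: ${cost / 1e6:.1f}M")
```

The key budgetary point is that only this subsidy cost, not the full face value of the loan, is scored as spending, which is why credit programs can move large amounts of capital with comparatively small appropriations.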

We envision several possible areas of focus for the HHS LPO:

  1. Providing loans or loan guarantees to amplify investment funds that use venture capital or other private investment tools to finance areas such as early-stage drug development or biomanufacturing capacity. While such funds already exist, they are typically underpowered.
  2. Providing large-scale financing in partnership with private investors to fund major healthcare infrastructure gaps, such as rural hospitals, decentralized clinical trial capacity, telehealth services, and advanced biomanufacturing capacity.
  3. Providing financing to test innovative new finance models (e.g., portfolio-based R&D bonds) designed to attract additional capital into under-funded R&D and lower financial risks.

Conclusion

To address the challenges in bringing innovative life-saving medicines and critical health technologies to market, we need an HHS Loan Programs Office that would not only create liquidity by providing or guaranteeing critical financing for capital-intensive projects but also address critical gaps in the innovation pipeline, including treatments for rare diseases, underserved communities, biomanufacturing, and healthcare infrastructure. Finally, it would be uniquely positioned to pilot innovative financing mechanisms in partnership with the private sector to better align private capital toward public health goals.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What is the DOE Loan Programs Office, and how is it similar to the proposed HHS Loan Programs Office?

The DOE LPO, enabled via the Energy Policy Act of 2005, allows the Secretary of Energy to provide loan guarantees toward publicly or privately financed projects involving new and innovative energy technologies.


The DOE LPO provides a bridge to private financing and bankability for large-scale, high-impact clean energy and supply chain projects involving new and innovative technologies. It also expands manufacturing capacity and energy access within the United States. The DOE LPO has enabled companies involving energy and energy manufacturing technologies to achieve infrastructure-scale growth, including Tesla, an electric car manufacturer; Lithium Americas Corp., a company supplying lithium for batteries; and the Agua Caliente Solar Project, a solar power station sponsored by NRG Solar that was the largest in the world at its time of construction.


The HHS LPO would similarly augment, guarantee, or bridge to private financing for projects involving the development and deployment of new and innovative technologies in life sciences and healthcare. It would draw upon the structure and authority of the DOE LPO as its basis.

What potential use cases would the HHS LPO serve?

The HHS LPO could look to the DOE LPO for examples as to how to structure potential funds or use cases. The DOE LPO’s Title 17 Clean Energy Financing Program provides four eligible project categories: (1) projects deploying new or significantly improved technology; (2) projects manufacturing products representing new or significantly improved technologies; (3) projects receiving credit or financing from counterpart state-level institutions; and (4) projects involving existing infrastructure that also share benefits to customers or associated communities.


Drawing on these examples, the HHS LPO could support project categories such as (1) emerging health and life science technologies; (2) the commercialization and scaling access of novel technologies; and (3) expanding biomanufacturing capacity in the United States, particularly for novel platforms (e.g., cell and gene therapies).

How much would the HHS LPO cost?

The budget can be estimated via the office’s authority to make or guarantee loans. Presently, the DOE LPO has over $400 billion in loan authority and actively manages a portfolio of just over $30 billion. Given this benchmark, and the approximately $20 billion private market for early-stage healthcare venture capital, we encourage the creation of an HHS LPO with $5 billion in loan-making authority. Scaling proportionally from the $180 million administrative budget sought by the DOE LPO in FY2023 for its roughly $30 billion active portfolio, we estimate that an HHS LPO with $5 billion in loan-making authority would require a budget appropriation of $30 million.
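The proportional estimate above can be reproduced directly. The dollar figures come from the text; the choice of the actively managed portfolio (rather than total loan authority) as the scaling basis is our assumption about the intended calculation.

```python
doe_admin_budget = 180e6  # DOE LPO FY2023 budget request (from the text)
doe_portfolio = 30e9      # DOE LPO actively managed portfolio (from the text)
hhs_authority = 5e9       # proposed HHS LPO loan-making authority

# Scale the administrative budget in proportion to portfolio size
hhs_budget = doe_admin_budget * (hhs_authority / doe_portfolio)
print(f"Estimated HHS LPO appropriation: ${hhs_budget / 1e6:.0f}M")  # prints $30M
```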

What accountability or oversight measures are required to ensure proper operation and evaluate performance?

The HHS LPO would be subject to oversight by the HHS Inspector General, OMB, and the relevant Congressional committees: the House Energy and Commerce Committee and the Senate Health, Education, Labor, and Pensions Committee.


Like the DOE LPO, the HHS LPO would publish an Annual Portfolio Status Report detailing its new investments, existing portfolio, and other key financial and operational metrics.

What alternative options could serve the same purpose as the HHS LPO, and why is the HHS LPO preferable?

Congress could instead grant loan authority to existing funding agencies, such as BARDA, the Advanced Research Projects Agency for Health (ARPA-H), or the National Institutes of Health (NIH). However, due to the highly specialized talent needed to effectively operate a complex loan financing operation, the program is significantly more likely to succeed if housed in a dedicated HHS LPO that works closely with the other health-focused funding agencies within HHS.


The other alternative is to expand the authority of other LPOs and financing agencies, such as the DOE LPO or the U.S. International Development Finance Corporation, to cover domestic health. However, that is likely to create conflicts of priority given their already large and diverse portfolios.

What are the next steps required to stand up the HHS LPO?

The project requires legislation similar to the Department of Energy’s Title 17 Clean Energy Financing Program, created via the Energy Policy Act of 2005 and subsequently expanded via the Infrastructure Investment and Jobs Act in 2021 and the Inflation Reduction Act in 2022.


This legislation would direct HHS to establish an office, presumably a Loan Programs Office, to make loan guarantees supporting new and innovative technologies in life sciences and healthcare. While the LPO could reside within an existing HHS division, it would ideally be established in a manner that enables it to serve projects across the full Department, including those from the National Institutes of Health, Food and Drug Administration, Biomedical Advanced Research and Development Authority, and the Centers for Medicare and Medicaid Services. As such, it would preferably not reside within any single one of these organizations. Like the DOE LPO, the HHS LPO would be led by a director, who would be directed to hire the finance, technical, and operational experts necessary for the function of the office.


Drawing on the Energy Policy Act of 2005 that created the DOE LPO, enabling legislation for the HHS Loan Programs Office would direct the Secretary of HHS to make loan guarantees, in consultation with the Secretary of the Treasury, toward projects involving new and innovative technologies in healthcare and life sciences. The enabling legislation would include several provisions:



  • The necessary terms and conditions for loan guarantees created via the HHS LPO, including loan length, interest rates, and default provisions;

  • Authorization for the HHS LPO to collect fees to provide funding support for the program; and

  • A list of eligible project types for loan guarantees.

Who are potential supporters of this policy? Who are potential skeptics?

Supporters are likely to include companies developing and deploying life sciences and healthcare technologies, including early-stage biotechnology research companies, biomanufacturing companies, and healthcare technology companies. Patient advocates would likely be supportive as well, because of the LPO’s potential to bring new technologies to market and reduce the overall weighted average cost of capital (WACC) for biotechnology companies, potentially supporting expanded access.


Existing financiers of biomedical research and technology may be neutral or ambivalent toward this policy. On one hand, it would provide expanded access to syndicated financing or loan guarantees that would compound the impact of each dollar invested. On the other hand, most financiers currently use equity financing, which lets them demand a high rate of return through subsequent investment and operation. An LPO could provide financing that requires a lower rate of return, thereby diluting the influence of current financiers in the market.


Skeptics are likely to include groups opposing expansions of government spending, particularly involving higher-risk mechanisms like loan guarantees. The DOE LPO has drawn the attention of several such skeptics, oftentimes leading to increased levels of oversight from legislative stakeholders. The HHS LPO could expect similar opposition. Other skeptics may include companies with existing medicines and healthcare technologies, who may be worried about competitors introducing products with prices and access provisions that have been enabled via financing with lower WACC.

Establishing White House Initiative for STEM Educational Excellence & Workforce Development at the U.S. Department of Education

Our national security and competitive edge rely on science and technological innovation. Now more than ever, every child deserves access to a well-rounded, high-quality education that builds the critical thinking and problem-solving skills needed to access science and technology jobs and contribute to solving global challenges. Science, technology, engineering, and mathematics (STEM) education and workforce development must be at the forefront of the next administration’s agenda. (For the purposes of this memo, STEM includes computer science, data science, AI, and other emerging technology fields in addition to science, engineering, and mathematics.)

The next administration’s Department of Education (ED) has an incredible opportunity to support our nation’s youth, America’s current and future workforce, to succeed and thrive. Students, families and communities want and need more STEM learning experiences to realize the American Dream, and yet they cannot access them.    

In the FY25 President’s Budget, ED called for four full-time employees to focus on STEM in the Office of the Deputy Secretary, yet the outgoing Administration failed to support this imperative. We hope the new Administration funds and staffs it.

Challenge and Opportunity

Now more than ever, our economy and national defense call for every child to have access to a well-rounded, high-quality education that sets them up for success and provides the critical thinking and problem-solving skills that will enable them to access economic opportunities and contribute to solving global challenges. A well-rounded education must include science, technology, engineering, and mathematics (STEM), and especially STEM learning experiences both in and out of school that provide students with technical skills through hands-on, problem- and project-based learning.

The Invest in America package of bills (CHIPS and Science Act, Bipartisan Infrastructure Law, and Inflation Reduction Act) has created decades of employment opportunities that, in some regions of the nation, may go unfilled for lack of talent unless we invest significantly in a strong, well-rounded STEM education for every child.

The future workforce is not the only reason ED must prioritize STEM teaching. Kids and families are voting with their feet: chronic absenteeism, defined as missing 10 percent or more of school days, has more than doubled since pre-pandemic rates. We must modernize STEM learning opportunities and ensure they are rigorous, relevant, and aligned with what kids and families want.

Most teens report math or science as their favorite subject in school. Seventy-five percent of Gen Z youth are interested in STEM occupations. Two-thirds of parents think computer science should be a required subject in schools. According to the Afterschool Alliance, “more than 7 in 10 parents (72 percent) report that STEM and computer science learning opportunities were important in their selection of an afterschool program, up 19 percentage points from 2014 (53 percent).”

Simply put, students want more STEM opportunities and families want more STEM opportunities for their children.

Yet we know that despite students’ interest in STEM and natural proclivity toward problem solving, too many students lack access to STEM learning experiences both in and out of school. Strategic industries ranging from aerospace to communications and agriculture to energy presently clamor, and compete unproductively, for talented new employees. The federal government owes it to them to take every action to meet their employment needs, prominently including casting a wider net for talent across the nation’s entire young population.

For example, across the board, NAEP results consistently show that students of color, students eligible for free and reduced-price lunch, students with disabilities, and English language learners are not well served by our current system. On the 2018 NAEP Technology and Engineering Literacy (TEL) Assessment, 13% of 8th grade students with disabilities scored at or above proficient, compared to 53% of students without a disability. Fifty-nine percent of 8th grade White students scored at or above proficient, compared to 23% of Black students, 31% of Hispanic students, and 29% of American Indian/Alaska Native students. On the same assessment, 30% of students eligible for free or reduced-price lunch scored at or above proficient, compared to 60% of students not eligible for the program. These gaps also play out in math and science, leaving just 6% of Black 12th graders, 9% of Hispanic 12th graders, 13% of American Indian/Alaska Native 12th graders, 7% of 12th graders with disabilities, and 1% of English Learners proficient in science at the end of high school. The reality in math is just as stark: only 8% of Black 12th graders, 11% of Hispanic 12th graders, 9% of American Indian/Alaska Native 12th graders, 7% of 12th graders with disabilities, and 3% of English Learners finish high school proficient in mathematics. The United States can ill afford to half-heartedly serve the educational needs of so many of our students in this era of great demand. Serving them is a profound responsibility of the federal government.

While progress is being made to provide more students with high-quality STEM learning during out-of-school time, access is unequal. Children from lower-income families are often the ones missing out on these engaging and enriching opportunities. An estimated 25 million children would like to attend an afterschool program but are unable to access any program, let alone a STEM-focused one.

We must change this reality quickly; STEM education must be an urgent priority for the Federal government. Fortunately, the Federal government has built significant infrastructure to better align federal resources behind this issue. The Federal Coordination in STEM Education (FC-STEM) effort aligns agencies to support the implementation of key STEM priorities.

While STEM has been prioritized across Federal agencies, it has not been a consistent priority at ED. ED should be leading. The Department must establish a structure that persists between administrations and can deploy financial resources, technical assistance, and other tools of the Department to support states, districts, and their partners in increasing access, participation, and success in STEM learning both in and out of school.


Plan of Action

There are two logical paths to ensuring STEM is a priority at ED, both of which require establishing dedicated STEM capacity at ED.

First, the new administration could sign an inaugural executive order, similar to this example but modified for STEM, that establishes a new White House Initiative for STEM Education and Workforce (WHISEW) to stand alongside other White House initiatives and elevate STEM across the Department. This initiative would establish a STEM team at ED and could also name a list of advisors so that ED can benefit from the expertise of non-government organizations.

Or, a new Congress could appropriate the necessary funds to ensure adequate staffing and direct ED to establish the STEM team as requested in the former President’s FY25 Budget.  

Given the ever-changing nature of STEM education and workforce needs, the STEM structure at ED should be a lean and nimble hub of talent that can staff up or down depending on high-priority issue areas such as math, data science, computational thinking, AI, and other emergent technologies.

Whatever structure is established, in the next administration the Initiative should focus on the following four priorities:

Regardless of pathway, the cost to the Department is estimated to be equivalent to four full-time employees: one appointed Executive Director and three GS-15 civil servants. This staff could be bolstered by STEM field leaders serving through fellowships, reimbursed by ED or funded through partner institutions. The total cost of this investment is estimated at ~$2.5M annually.

Conclusion

A relatively modest investment (~$2.5M annually) has the potential to impact generations of children, families and their communities by increasing access, participation and success in STEM learning experiences both in and out-of-school. The time is now to establish a permanent and consistent focus on STEM education and workforce at the U.S. Department of Education.



Frequently Asked Questions
How much will this proposal cost?

It is estimated that supporting a small team (3 FTEs plus Fellows) would cost approximately $5M annually. This would cover salary, benefits, travel, technology needs, and a modest events and programming budget.

Why should ED play a larger role in STEM Education?

The US Department of Education’s mission is to “promote student achievement and preparation for global competitiveness by fostering educational excellence and ensuring equal access.” STEM education is critical for supporting students’ global competitiveness.  As outlined above, STEM education is not equally accessible to all students. The Department has a critical role to play in supporting STEM education and closing persistent access gaps in STEM.

Why a White House Initiative versus staffing a team or office within the Office of the Deputy Secretary or Office of the Undersecretary?

STEM education cuts across PreK-12 and higher education priorities. Existing White House Initiatives have prior experience coordinating efforts across the Department and across student learning experiences from cradle to career. Standing up a new White House Initiative would enable a more holistic, crosscutting view of STEM at the Department and would support further coordination among the other White House Initiatives. STEM is a priority in the governing documents of many of the current White House Initiatives, and a White House STEM Initiative with the same reporting structure would support collaboration and coherence.

How could STEM E3 be sustained across administrations?

One of the critical structural elements of STEM E3 is that the Executive Director of the Initiative is a politically appointed role, enabling each administration to select someone who aligns with its priorities and campaign promises. There should be at least one career staff member to provide continuity and sustainability across administrations. The flexible capacity of Fellows or IPAs allows the team to bring in expertise aligned to the priorities of each administration.

Modernizing AI Fairness Analysis in Education Contexts

The 2022 release of ChatGPT and subsequent foundation models sparked a generative AI (GenAI) explosion in American society, driving rapid adoption of AI-powered tools in schools, colleges, and universities nationwide. Education technology was one of the first applications used to develop and test ChatGPT in a real-world context. A recent national survey indicated that nearly 50% of teachers, students, and parents use GenAI chatbots in school, and over 66% of parents and teachers believe that GenAI chatbots can help students learn more and faster. While this innovation is exciting and holds tremendous promise to personalize education, educators, families, and researchers are concerned that AI-powered solutions may not be equally useful, accurate, and effective for all students, in particular students from minoritized populations. Bias may be addressed as this technology matures; however, to ensure that students are not harmed as these tools become more widespread, it is critical for the Department of Education to provide guidance that helps education decision-makers evaluate AI solutions during procurement, to support EdTech developers in detecting and mitigating bias in their applications, and to develop new fairness methods that ensure these solutions serve the students with the most to gain from our educational systems. Creating this guidance will require the Department of Education to declare this issue a priority and to resource an independent organization with the expertise needed to deliver these services.

Challenge and Opportunity

Known Bias and Potential Harm

There are many examples of AI-based systems introducing more bias into an already-biased system. One example with widely varying results for different student groups is the use of GenAI tools to detect AI-generated text as a form of plagiarism. Liang et al. found that several GPT-based plagiarism checkers frequently identified the writing of students for whom English is not their first language as AI-generated, even though their work was written before ChatGPT was available. The same errors did not occur with text written by native English speakers. However, in a publication by Jiang (2024), no bias against non-native English speakers was found in detecting plagiarism between human-authored essays and ChatGPT-generated essays written in response to analytical writing prompts from the GRE, an example of how thoughtful AI tool design and representative sampling in the training set can achieve fairer outcomes and mitigate bias.

Beyond bias, researchers have raised concerns about the overall efficacy of these tools for all students; however, understanding differing results for subpopulations and potential instances of bias is a critical aspect of deciding whether these tools should be used by teachers in classrooms. For AI-based tools to be usable in high-stakes educational contexts such as testing, detecting and mitigating bias is essential, particularly when the consequences of being incorrect are so high, such as for students from minoritized populations who may not have the resources to recover from an error (e.g., failing a course or being prevented from graduating).

Another example of algorithmic bias, predating the widespread emergence of GenAI, illustrates the potential harms: the Wisconsin Dropout Early Warning System. This AI-based tool was designed to flag students who may be at risk of dropping out of school; however, an analysis of its predictions found that the system disproportionately flagged African American and Hispanic students as likely to drop out when most of these students were not actually at risk. When teachers learn that one of their students is at risk, it may change how they approach that student, which can cause further negative treatment and consequences, creating a self-fulfilling prophecy and denying that student the educational opportunities and confidence they deserve. These two examples, among many, show the consequences of using systems with underlying bias and demonstrate the criticality of conducting fairness analysis before these systems are used with actual students.
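Disparities like those reported for the Wisconsin system can be surfaced with a simple audit of group-wise false positive rates: the share of students flagged as at-risk who did not in fact drop out. The sketch below uses made-up illustrative data, not the actual Wisconsin figures, and hypothetical group labels.

```python
def false_positive_rate(predictions, outcomes):
    """Share of students flagged as at-risk (prediction=1) among those
    who did not actually drop out (outcome=0)."""
    flagged_not_at_risk = sum(
        1 for p, o in zip(predictions, outcomes) if p == 1 and o == 0
    )
    not_at_risk = sum(1 for o in outcomes if o == 0)
    return flagged_not_at_risk / not_at_risk if not_at_risk else 0.0

# Hypothetical audit data: (predictions, actual dropout outcomes), by group
groups = {
    "group_a": ([1, 1, 0, 1, 0, 0, 1, 0], [0, 0, 0, 1, 0, 0, 0, 0]),
    "group_b": ([0, 1, 0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0, 1, 0]),
}
for name, (preds, actual) in groups.items():
    # A large gap between groups signals disparate impact worth investigating
    print(name, round(false_positive_rate(preds, actual), 2))
```

In this toy data, group_a has a much higher false positive rate than group_b, which is exactly the pattern an audit of an early warning system would need to flag before deployment.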

Existing Guidance on Fair AI & Standards for Education Technology Applications

Guidance for Education Technology Applications

Given the harms that algorithmic bias can cause in educational settings, there is an opportunity to provide national guidelines and best practices that help educators avoid these harms. The Department of Education is already responsible for protecting student privacy and provides guidelines via the Every Student Succeeds Act (ESSA) evidence levels to evaluate the quality of EdTech solution evidence. The Office of Educational Technology, through the support of a private non-profit organization (Digital Promise), has developed guidance documents for teachers and administrators, and another for education technology developers (U.S. Department of Education, 2023, 2024). In particular, “Designing for Education with Artificial Intelligence” includes guidance for EdTech developers, including an entire section called “Advancing Equity and Protecting Civil Rights” that describes algorithmic bias and suggests that “Developers should proactively and continuously test AI products or services in education to mitigate the risk of algorithmic discrimination” (p. 28). While this is a good overall guideline, the document is critically insufficient to help developers actually conduct these tests.

Similarly, the National Institute of Standards and Technology has released a publication on identifying and managing bias in AI. While this publication highlights some areas of the development process and several fairness metrics, it does not provide specific guidelines for using these metrics, nor is it exhaustive. Finally, demonstrating the interest of industry partners, the EDSAFE AI Alliance, a philanthropically funded alliance representing a diverse group of educational technology companies, has created guidance in the form of the 2024 SAFE (Safety, Accountability, Fairness, and Efficacy) Framework. Within the Fairness section of the framework, the authors highlight the importance of using fair training data, monitoring for bias, and ensuring accessibility of any AI-based tool. But again, this framework does not provide specific actions that education administrators, teachers, or EdTech developers can take to ensure these tools are fair and not biased against specific populations. The risk to these populations and the limits of existing efforts demonstrate the need for further work to develop new approaches that can be used in the field.

Fairness in Education Measurement

As AI becomes increasingly used in education, the field of educational measurement has begun creating a set of analytic approaches for finding examples of algorithmic bias, many of which are based on existing approaches to uncovering bias in educational testing. One common tool is Differential Item Functioning (DIF), which checks that test questions are fair for all students regardless of their background. For example, it ensures that native English speakers and students learning English have an equal chance to succeed on a question if they have the same level of knowledge. When differences are found, this indicates that a student’s performance on that question is not based on their knowledge of the content.
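As a sketch of how a DIF check works in practice, the widely used Mantel-Haenszel procedure compares the odds of a correct answer for a reference and a focal group within each total-score stratum, so that students of comparable ability are compared with each other. The counts below are hypothetical illustration values.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across score strata.
    Each stratum is a 2x2 table:
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
    An odds ratio far from 1.0 suggests the item functions differently
    for the two groups even at the same ability level."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n  # ref correct x focal incorrect
        den += b * c / n  # ref incorrect x focal correct
    return num / den

# Hypothetical item: counts at three total-score levels (low, mid, high),
# where the reference group answers correctly more often in every stratum
strata = [
    (30, 20, 15, 35),
    (40, 10, 25, 25),
    (45, 5, 35, 15),
]
odds_ratio = mantel_haenszel_or(strata)  # > 1 here: item appears harder for the focal group
```

The same stratify-and-compare logic could be adapted to audit AI scoring tools, treating the model's output as the "item" and a matched measure of student ability as the stratifying variable.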

While DIF checks have been used for several decades as a best practice in standardized testing, a comparable process in the use of AI for assessment purposes does not yet exist. There also is little historical precedent indicating that for-profit educational companies will self-govern and self-regulate without a larger set of guidelines and expectations from a governing body, such as the federal government. 

We are at a critical juncture as school districts begin adopting AI tools with minimal guidance or guardrails, and all signs point to an increase of AI in education. The US Department of Education has an opportunity to take a proactive approach to ensuring AI fairness through strategic programs of support for school leadership, developers in educational technology, and experts in the field. It is important for the larger federal government to support all educational stakeholders under a common vision for AI fairness while the field is still at the relative beginning of being adopted for educational use. 

Plan of Action 

To address this situation, the Department of Education’s Office of the Chief Data Officer should lead development of a national resource that provides direct technical assistance to school leadership, supports software developers and vendors of AI tools in creating quality technology, and invests resources in solutions that can be used by both school leaders and application developers. This office is already responsible for data management and asset policies, and provides resources on grants and artificial intelligence for the field. The implementation of these resources would likely be carried out via grants to external actors with sufficient technical expertise, given the rapid pace of innovation in the private and academic research sectors. Leading the effort from this office ensures that these advances answer the most important questions and can be integrated into policy standards and requirements for education solutions. Congress should allocate additional funding to the Department of Education to support the development of a technical assistance program for school districts, establish new grants for fairness evaluation tools that span the full development lifecycle, and pursue an R&D agenda for AI fairness in education. While it is hard to provide an exact estimate, similar existing programs currently cost the Department of Education between $4 million and $30 million a year.

Action 1. The Department of Education Should Provide Independent Support for School Leadership Through a Fair AI Technical Assistance Center (FAIR-AI-TAC) 

School administrators are hearing about the promise and concerns of AI solutions in the popular press, from parents, and from students. They are also being bombarded by education technology providers with new applications of AI within existing tools and through new solutions. 

These busy school leaders do not have time to learn the details of AI and bias analysis, nor do they have the technical background required to conduct deep technical evaluations of fairness within AI applications. Leaders are forced to either reject these innovations or implement them and expose their students to significant potential risk with the promise of improved learning. This is not an acceptable status quo.  

To address these issues, the Department of Education should create an AI Technical Assistance Center (the Center) that is tasked with providing direct guidance to state and local education leaders who want to incorporate AI tools fairly and effectively. The Center should be staffed by a team of professionals with expertise in data science, data safety, ethics, education, and AI system evaluation. Additionally, the Center should operate independently of AI tool vendors to maintain objectivity.

There is precedent for this type of technical support. The U.S. Department of Education’s Privacy Technical Assistance Center (PTAC) provides guidance related to data privacy and security procedures and processes to meet FERPA guidelines; they operate a help desk via phone or email, develop training materials for broad use, and provide targeted training and technical assistance for leaders. A similar kind of center could be stood up to support leaders in education who need support evaluating proposed policy or procurement decisions.  

This Center should provide a structured consulting service offering a variety of levels of expertise based on the individual stakeholder’s needs and the potential impact on learners of the system or tool being evaluated; this should include everything from basic AI literacy to active support in choosing technological solutions for educational purposes. The Center should partner with external organizations to develop a certification system for high-quality AI educational tools that have passed a series of fairness checks. Creating a fairness certification, operationalized by third-party evaluators, would make it much easier for school leaders to recognize and adopt fair AI solutions that meet student needs.

Action 2. The Department of Education Should Provide Expert Services, Data, and Grants for EdTech Developers 

There are many educational technology developers with AI-powered innovations. Even when well-intentioned, some of these tools do not achieve their desired impacts or may be unintentionally unsafe due to a lack of processes and tests for fairness and safety.

Educational Technology developers generally operate under significant constraints when incorporating AI models into their tools and applications. Student data is often highly detailed and deeply personal, potentially containing financial, disability, and educational status information that is currently protected by FERPA, which makes it unavailable for use in AI model training or testing. 

Developers need safe, legal, and quality datasets that they can use for bias testing, as well as appropriate bias evaluation tools. There are several promising examples of these types of applications and new approaches to data security, such as the recently awarded NSF SafeInsights project, which allows analysis without disclosing the underlying data. In addition, philanthropically funded organizations such as the Allen Institute for AI have released LLM evaluation tools that could be adapted and provided to education technology developers for testing. A vetted set of evaluation tools, along with more detailed technical resources and instructions for how to use them, would encourage developers to incorporate bias evaluations early and often. Currently, there are very few market incentives or existing requirements that push developers to invest the necessary time or resources in this type of fairness analysis. Thus, the government has a key role to play here.

The Department of Education should also fund a new grant program that tasks grantees with developing a robust and independently validated third-party evaluation system that checks for fairness violations and biases throughout the model development process from pre-processing of data, to the actual AI use, to testing after AI results are created. This approach would support developers in ensuring that the tools they are publishing meet an agreed-upon minimum threshold for safe and fair use and could provide additional justification for the adoption of AI tools by school administrators.

Action 3. The Department of Education Should Develop Better Fairness R&D Tools with Researchers 

There is still no consensus on best practices for how to ensure that AI tools are fair. As AI capabilities evolve, the field needs an ongoing vetted set of analyses and approaches that will ensure that any tools being used in an educational context are safe and fair for use with no unintended consequences.

The Department of Education should lead the creation of a working group or task force composed of subject matter experts from education, educational technology, educational measurement, and the larger AI field to identify the state of the art in existing fairness approaches for education technology and assessment applications, with a focus on modernized conceptions of identity. This proposed task force would be an inter-organizational group that would include representatives from several different federal government offices, such as the Office of Educational Technology and the Chief Data Office, as well as prominent experts from industry and academia. An initial convening could be conducted alongside leading national conferences that already attract thousands of attendees conducting cutting-edge education research (such as the American Education Research Association and National Council on Measurement in Education).

The working group’s mandate should include creating a set of recommendations for federal funding to advance research on evaluating AI educational tools for fairness and efficacy. This research agenda would likely span multiple agencies, including NIST, the Institute of Education Sciences of the U.S. Department of Education, and the National Science Foundation. There are existing models for funding early stage research and development with applied approaches, including the IES “Accelerate, Transform, Scale” programs that integrate learning sciences theory with efforts to scale theories through applied education technology programs, and Generative AI research centers that have the existing infrastructure and mandates to conduct this type of applied research.

Additionally, the working group should recommend the selection of a specialized group of researchers who would contribute ongoing research into new empirically-based approaches to AI fairness that would continue to be used by the larger field. This innovative work might look like developing new datasets that deliberately look for instances of bias and stereotypes, such as the CrowS-Pairs dataset. It may build on current cutting edge research into the specific contributions of variables and elements of LLM models that directly contribute to biased AI scores, such as the work being done by the AI company Anthropic. It may compare different foundation LLMs and demonstrate specific areas of bias within their output. It may also look like a collaborative effort between organizations, such as the development of the RSM-Tool, which looks for biased scoring. Finally, it may be an improved auditing tool for any portion of the model development pipeline. In general, the field does not yet have a set of universally agreed upon actionable tools and approaches that can be used across contexts and applications; this research team would help create these for the field.

Finally, the working group should recommend policies and standards that would incentivize vendors and developers working on AI education tools to adopt fairness evaluations and share their results.

Conclusion

As AI-based tools continue being used for educational purposes, there is an urgent need for new approaches to evaluating the fairness of these solutions that include modern conceptions of student belonging and identity. This effort should be led by the Department of Education, through the Office of the Chief Data Officer, given the technical nature of the services and the relationship with sensitive data sources. While the Chief Data Officer should provide direction and leadership for the project, partnering with external organizations through federal grant processes would provide necessary capacity boosts to fulfill the mandate described in this memo. As we move into an age of widespread AI adoption, AI tools for education will be increasingly used in classrooms and in homes. Thus, it is imperative that robust fairness approaches are deployed before a new tool is used, both to protect our students and to protect developers and administrators from potential litigation, loss of reputation, and other negative outcomes.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What are some examples of what is currently being done to ensure fairness in AI applications for educational purposes?

When AI is used to grade student work, fairness is evaluated by comparing the scores assigned by AI to those assigned by human graders across different demographic groups. This is often done using statistical metrics, such as the standardized mean difference (SMD), to detect any additional bias introduced by the AI. A common benchmark for SMD is 0.15; values above this threshold suggest the presence of potential machine bias relative to human scores. However, there is a need for more guidance on how to address cases where SMD values exceed this threshold.
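As an illustration, the SMD check described above can be sketched as follows. This uses one common formulation (an assumption on our part; exact definitions vary across scoring programs) in which the AI-minus-human mean difference for a subgroup is standardized by the overall human-score standard deviation.

```python
import numpy as np

def smd(ai_scores, human_scores, group, flag_threshold=0.15):
    """Standardized mean difference between AI and human scores for one subgroup.

    group marks membership in the subgroup of interest (e.g., English learners).
    Assumed formulation: mean AI-minus-human difference within the subgroup,
    divided by the standard deviation of human scores for all students.
    |SMD| above flag_threshold (0.15 is the typical benchmark) signals
    possible machine bias for that subgroup.
    """
    ai = np.asarray(ai_scores, dtype=float)
    hu = np.asarray(human_scores, dtype=float)
    g = np.asarray(group, dtype=bool)
    value = (ai[g].mean() - hu[g].mean()) / hu.std(ddof=1)
    return value, abs(value) > flag_threshold
```

In practice this check is run once per demographic group, and any flagged group triggers further review of the scoring model.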


In addition to SMD, other metrics like exact agreement, exact + adjacent agreement, correlation, and Quadratic Weighted Kappa are often used to assess the consistency and alignment between human and AI-generated scores. While these methods provide valuable insights, further research is needed to ensure these metrics are robust, resistant to manipulation, and appropriately tailored to specific use cases, data types, and varying levels of importance.
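For concreteness, two of these agreement metrics can be sketched in a few lines. This is an illustrative implementation; production analyses typically rely on vetted statistical packages.

```python
import numpy as np

def exact_agreement(human, machine):
    """Fraction of responses where the human and machine scores match exactly."""
    human, machine = np.asarray(human), np.asarray(machine)
    return float(np.mean(human == machine))

def quadratic_weighted_kappa(human, machine, n_levels):
    """Quadratic weighted kappa between human and machine ratings (0..n_levels-1).

    1.0 indicates perfect agreement; 0 indicates agreement no better than
    chance. Disagreements are penalized by the squared distance between scores.
    """
    human, machine = np.asarray(human), np.asarray(machine)
    observed = np.zeros((n_levels, n_levels))
    for h, m in zip(human, machine):        # joint distribution of score pairs
        observed[h, m] += 1
    observed /= observed.sum()
    expected = np.outer(np.bincount(human, minlength=n_levels),
                        np.bincount(machine, minlength=n_levels)).astype(float)
    expected /= expected.sum()              # chance agreement from the margins
    i, j = np.indices((n_levels, n_levels))
    weights = (i - j) ** 2 / (n_levels - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()
```

Because quadratic weighting penalizes large disagreements more heavily than small ones, QWK is better suited to ordinal rubric scores than raw exact agreement alone.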

What are some concerns about using AI in education for students with diverse and overlapping identities?

Existing approaches to demographic post hoc analysis of fairness assume that there are two discrete populations that can be compared: for example, students from African-American families vs. those not from African-American families, students from an English language learner family background vs. those who are not, and other known family characteristics. However, in practice, people do not experience these discrete identities. Since at least the 1980s, contemporary sociological theories have emphasized that a person’s identity is contextual, hybrid, and fluid. One current approach to identity that integrates concerns of equity and has been applied to AI is “intersectional identity” theory. This approach has begun to yield promising new methods that bring contemporary conceptions of identity into automated evaluations of AI fairness. Measuring all interactions between variables produces subgroups too small to analyze; these interactions can instead be prioritized using theory or design principles, or handled with more advanced statistical techniques (e.g., dimensional data reduction).
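The intersectional subgroup analysis described above can be sketched as follows. This is a minimal, standard-library-only illustration; the field names (`ai_score`, `human_score`) are ours rather than any standard schema, and the minimum cell size is an arbitrary placeholder for the small-sample problem noted above.

```python
from collections import defaultdict

def intersectional_gaps(records, attrs, min_n=30):
    """Mean AI-minus-human score gap for each intersectional subgroup.

    records: list of dicts, each with 'ai_score', 'human_score', and the
             demographic keys named in attrs (illustrative field names).
    attrs:   demographic attributes whose intersections define subgroups.
    min_n:   cells smaller than this are skipped; in practice such cells
             would be prioritized with theory or pooled statistically
             rather than simply dropped.
    Returns {subgroup: (cell size, mean gap)}.
    """
    cells = defaultdict(list)
    for r in records:
        key = tuple(r[a] for a in attrs)    # one cell per attribute combination
        cells[key].append(r["ai_score"] - r["human_score"])
    return {key: (len(gaps), sum(gaps) / len(gaps))
            for key, gaps in cells.items() if len(gaps) >= min_n}
```

A nonzero mean gap concentrated in one intersection (e.g., English learners from low-income families) can reveal bias that single-attribute comparisons average away.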

Elevate and Strengthen the Presidential Management Fellows Program

Founded in 1977, the Presidential Management Fellows (PMF) program is intended to be “the Federal Government’s premier leadership development program for advanced degree holders across all academic disciplines” with a mission “to recruit and develop a cadre of future government leaders from all segments of society.” The challenges facing our country require a robust pipeline of talented and representative rising leaders across federal agencies. The PMF program has historically been a leading source of such talent. 

The next Administration should leverage this storied program to reinvigorate recruitment for a small, highly-skilled management corps of upwardly-mobile public servants and ensure that the PMF program retains its role as the government’s premier pipeline for early-career talent. It should do so by committing to placing all PMF Finalists in federal jobs (rather than only half, as has been common in recent years), creating new incentives for agencies to engage, and enhancing user experience for all PMF stakeholders. 

Challenge and Opportunity

Bearing the Presidential Seal, the Presidential Management Fellows (PMF) Program is the Federal Government’s premier leadership development program for advanced degree holders across all academic disciplines. Appropriately for a program created in the President’s name, the application process for the PMF program is rigorous and competitive. Following a resume and transcript review, two assessments, and a structured interview, the Office of Personnel Management (OPM) selects and announces PMF Finalists. 

Selection as a Finalist is only the first step in a PMF applicant’s journey to a federal position. After they are announced, PMF Finalists have 12 months to find an agency posting by completing a second round of applications to specific positions that agencies have designated as eligible for PMFs. OPM reports that “over the past ten years, on average, 50% of Finalists obtain appointments as Fellows.” Most Finalists who are placed are not appointed until late in the eligibility period: halfway through the 2024 eligibility window, only 85 of 825 finalists (10%) had been appointed to positions in agencies.

For applicants and universities, this reality can be dispiriting and damage the reputation of the program, especially for those not placed. The yearlong waiting period ending without a job offer for about half of Finalists belies the magnitude of the accomplishment of rising to the top of such a competitive pool of candidates eager to serve their country. Additionally, Finalists who are not placed in a timely manner will be likelier to pursue job opportunities outside of federal service.  At a moment when the federal government is facing an extraordinary talent crisis with an aging workforce and large-scale retirements, the PMF program must better serve its purpose as a trusted source of high-level, early-career talent.

The current program design also affects the experience of agency leaders—such as hiring managers and Chief Human Capital Officers (CHCOs)—as they consider hiring PMFs. When agencies hire a PMF for a 2-year placement, they cover the candidate’s salary plus an $8,000 fee to OPM’s PMF program office to support its operations. Agencies consider hiring PMF Finalists with the knowledge that the PMF has the option to complete a 6-month rotational assignment outside of their hiring unit. These factors may create the impression that hiring a PMF is “costlier” than other staffing options.

Despite these challenges, the reasons for agencies to invest in the PMF program remain numerous.

The PMF is still correctly understood as the government’s premier onramp program for early career managerial talent. With some thoughtful realignment, it can sustain and strengthen this role and improve experience for all its core stakeholders.  

Plan of Action

The next Administration should take a direct hand in supporting the PMF Program. As the President’s appointee overseeing the program, the OPM Director should begin by publicly setting an ambitious placement percentage goal and then driving the below reforms to advance that goal. 

Recommendation 1. Increase the Finalist placement rate by reducing the Finalist pool.

The status quo reveals misalignment between the pool of PMF Finalists and demand for PMFs across government. This may be in part due to the scale of demand, but is also a consequence of PMF candidates and finalists with ever-broader skill sets, which makes placement more challenging and complex. Along with the 50% placement rates, the existing imbalance between finalists and placements is reflected in the decision to contract the finalist pool from 1100 in 2022 to 850 in 2023 and 825 in 2024. The next Administration should adjust the size of the Finalist pool further to ensure a near-100% placement rate and double down on its focus on general managerial talent to simplify disciplinary matching. Initially, this might mean shrinking the pool from the 825 advanced in 2024 to 500 or even fewer. 

The core principle is simple: PMF Finalists should be a valuable resource for which agencies compete. There should be (modestly) fewer Finalists than realistic agency demand, not more. Critically, this change would not aim to reduce the number of PMFs serving in government. Rather, it seeks to sustain the current numbers while dramatically reducing the number of Finalists not placed and creating a healthier set of incentives for all parties.

When the program can reliably boast high placement rates, then the Federal government can strategize on ways to meaningfully increase the pool of Fellows and use the program to zero in on priority hard-to-hire disciplines outside of general managerial talent.

Recommendation 2. Attach a financial incentive to hiring and retaining a PMF while improving accountability. 

To underscore the singular value of PMFs and their role in the hiring ecosystem, the next Administration should attach a financial incentive to hiring a PMF. 

Because of the $8,000 placement fee, PMFs are seen as a costlier route than other sources of talent. A financial incentive to hire PMFs would reverse this dynamic. The next Administration might implement a large incentive of $50,000 per Fellow, half of which would be granted when a Fellow is placed and the other half to be granted when the Fellow accepts a permanent full-time job offer in the Federal government. This split payment would signal an investment in Fellows as the future leaders of the federal government. 

Assuming an initial cohort of 400 placed Fellows at $50,000 each, OPM would require $20 million plus operating costs for the PMF program office. To secure funds, the Administration could seek appropriations, repurpose funds through normal budget channels, or pursue an agency pass-the-hat model like the financing of the Federal Executive Board and Hiring Experience program offices. 
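The budget arithmetic above can be checked with a back-of-the-envelope sketch. The figures come from the memo itself; the split reflects the proposed half-at-placement, half-at-conversion structure.

```python
# Back-of-the-envelope check of the proposed PMF incentive budget.
fellows = 400                   # initial cohort of placed Fellows (per the memo)
incentive_per_fellow = 50_000   # total incentive attached to each Fellow
at_placement = incentive_per_fellow // 2   # paid when a Fellow is placed
at_conversion = incentive_per_fellow // 2  # paid on permanent-hire conversion

total_incentives = fellows * incentive_per_fellow
print(f"${total_incentives:,}")  # $20,000,000, before program office operating costs
```

Any change in cohort size scales the appropriation request linearly, which makes the pass-the-hat and central-appropriation financing options straightforward to model.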

To parallel this incentive, the Administration should also implement accountability measures to ensure agencies more accurately project their PMF needs by assigning a cost to failing to place some minimum proportion–perhaps 70%–of the Finalists projected in a given cycle. This would avoid too many unplaced Finalists. Agencies that fail to meet the threshold should have reduced or delayed access to the PMF pool in subsequent years. 

Recommendation 3. Build a Stronger Support Ecosystem 

In support of these implementation changes, the next Administration should pursue a series of actions to elevate the program and strengthen the PMF ecosystem. 

Even if the Administration pursues the above recommendations, some Finalists would remain unpaired. The PMF program office should embrace the role of a talent concierge for a smaller, more manageably sized cohort of yet-unpaired Finalists, leveraging relationships across the government (including with PMF alumni and the Presidential Management Alumni Association (PMAA)) and OPM’s position as the government’s strategic talent lead to encourage agencies to consider specific PMF Finalists in a bespoke way. The Federal government should also consider ways to privilege applications from unplaced Finalists who meet the criteria for a specific posting.

To strengthen key PMF partnerships in agencies, the Administration should elevate the role of PMF Coordinators beyond “other duties as assigned” to a GS-14 “PMF Director.” With new incentives to encourage placement and consistent strategic orientation from agency partners, agencies will be in a better position to project their placement needs by volume and role and hire PMF Finalists who meet them. PMF Coordinators would have explicit performance measures that reflect ownership over the success of the program.

The Administration must commit and sustain senior-level engagement—in the White House and at the senior levels of OMB, OPM, and in senior agency roles including Deputy Secretaries, Assistant Secretaries for Management, and Chief Human Capital Officers—to drive forward these changes. It must seize key leverage points throughout the budget and strategic management cycle, including OPM’s Human Capital Operating Plan process, OMB’s Strategic Reviews process, and the Cross-Agency Priority Goal setting rhythms. And it must sustain focus, recognizing that these new design elements may not succeed in their first cycle, and should provide support for experimentation and innovation.

Current PMF Program Compared to Proposed Reform
| | Status Quo | Proposed Change |
| --- | --- | --- |
| Size of Finalist Pool | 800-1100 | 400-500 |
| Placement Rate | ~50% | Target 100%, achieve 80-90% |
| Total Placements | 400-550 | 320-450 |
| Number of Unplaced Finalists | 400-550 | <100 |
| Financial model | Agencies carry salary and benefits and pay a premium of $8,000 to OPM in cost recovery to fund the program office | Each Fellow carries a financial incentive encouraging speedy placement; program office and incentive funded centrally |
| Experience for Finalists | Frustrating waits are typical; many hundreds of potential public servants left unplaced; experience of being a Finalist does not always reflect the magnitude of the accomplishment | Finalists are a valuable, scarce commodity; they have more potential matches with agencies and experience shorter waits |
| Experience for Agencies | Large pool of Finalists is difficult to navigate; agencies harbor concerns about quality of Fellows waiting for placement; little urgency to act; PMFs seen as one talent pool among many; program coordination is often an “other duty as assigned” | Smaller pool that is easier to navigate; even higher quality finalist pool; significant urgency to act to capture financial incentive and meet talent needs; clear understanding of role PMFs play in talent strategy; coordination and needs forecasting resides in a higher-graded, strategically-oriented role |
| Experience for Program Office | Cost-recovery model creates significant uncertainty in budgeting and operations planning; difficult to make selections due to inconsistent agency need forecasting | Program office manages access to a valuable asset; with less “selling,” staff focuses on bespoke pairing for a smaller number of unpaired applicants and on shaping each year’s finalist pool to reflect improved needs forecasts |

Conclusion

For decades, the PMF program has consistently delivered top-tier talent to the federal government. However, the past few years have revealed a need for reform to improve the experience of PMF hopefuls and the agencies that stand to benefit from their skills. With a smaller Finalist pool, healthier incentives, and a more supportive ecosystem, agencies would compete for a subsidized pool of high-quality talent available to them at lower cost than alternative routes, and Fellows who clear the rigorous selection process would have far stronger assurance of a placement. If these reforms are successfully implemented, esteem for the government’s premier onramp for rising managerial talent will rise, reinforcing the Federal government’s standing as a leading and prestigious employer of our nation’s rising leaders.



Frequently Asked Questions
What is the role of the PMF rotation?

The PMF program is a 2-year placement with an optional 6-month rotation in another office within the appointing agency or another agency. The rotation is an important and longstanding design element of a program aiming to build a rising cohort of managerial talent with a broad purview. However, because agencies currently pay a Fellow’s full salary and benefits plus OPM’s placement fee, the prospect of losing the Fellow for a rotation equal to a quarter of the fellowship may act as a barrier to embracing PMF talent. Adding a significant hiring subsidy, as this memo proposes, would balance this concern.

How does shrinking the size of the Finalist pool enhance the program?

In the current program, OPM uses a rule of thumb to set the number of Finalists at approximately 80% of anticipated demand to minimize the number of unplaced Finalists. This is a prudent approach, reflected in shifting Finalist numbers in recent years: from 1100 in 2022 to 850 in 2023 and 825 in 2024. Despite these adjustments, placement rates have unfortunately remained near 50%. Agencies are failing to follow through on their projected demand for PMFs, which has unfortunate consequences for Finalists and presents management challenges for the PMF program office.


This reform proposal would take a larger step by reducing the Finalist pool to well below the stated demand (500 or fewer) and focusing on general managerial talent to make the pairing process simpler. This would be, fundamentally, a temporary reset to raise placement rates and improve user experience for candidates, agencies, and the program management team. As placement rhythms strengthen along the lines described above, there is every reason for the program to grow.

Is a subsidy for PMF Finalists going to cost the government more money?

The subsidy proposed for placing a PMF candidate would not require a net increase in federal expenditures. In the status quo, all costs of the PMF program are borne by the government: agencies pay salaries and benefits, and pay a fee to OPM at the point of appointment. This proposal would surface and centralize these costs and create an agency incentive through the subsidy to hire PMFs, either by “recouping” funds collected from agencies through a pass-the-hat revolving fund or “capitalizing” on a central investment from another source. In either case, it would ensure that PMF Finalists are a scarce asset to be competed for, as the program was envisioned, and that the PMF program office manages thoughtful access to this asset for the whole government, rather than needing to be “selling” to recover operational costs.

A Quantitative Imaging Infrastructure to Revolutionize AI-Enabled Precision Medicine

Medical imaging, a non-invasive method to detect and characterize disease, stands at a crossroads. With the explosive growth of artificial intelligence (AI), medical imaging offers extraordinary potential for precision medicine yet lacks adequate quality standards to safely and effectively fulfill the promise of AI. Now is the time to create a quantitative imaging (QI) infrastructure to drive the development of precise, data-driven solutions that enhance patient care, reduce costs, and unlock the full potential of AI in modern medicine.

Medical imaging plays a major role in healthcare delivery and is an essential tool in diagnosing numerous health issues and diseases (e.g., oncology, neurology, cardiology, hepatology, nephrology, pulmonary, and musculoskeletal). In 2023, there were more than 607 million imaging procedures in the United States and, per a 2021 study, $66 billion (8.9% of the U.S. healthcare budget) is spent on imaging.  

Despite the importance and widespread use of medical imaging modalities such as magnetic resonance imaging (MRI), X-ray, ultrasound, and computed tomography (CT), imaging is rarely standardized or quantitative. This leads to unnecessary costs from repeat scans needed to achieve adequate image quality, and to unharmonized and uncalibrated imaging datasets that are often unsuitable for AI/machine learning (ML) applications. In the nascent yet exponentially expanding world of AI in medical imaging, a well-defined standards and metrology framework is required to establish robust imaging datasets for true precision medicine, thereby improving patient outcomes and reducing spiraling healthcare costs.

Challenge and Opportunity 

The U.S. spends more on healthcare than any other high-income country yet performs worse on measures of health and healthcare. Research has demonstrated that medical imaging could help save money for the health system, with every $1 spent on inpatient imaging resulting in approximately $3 in total savings in healthcare delivered. However, to generate healthcare savings and improve outcomes, rigorous quality assurance (QA)/quality control (QC) standards are required for true QI and data integrity.

Today, medical imaging suffers two shortcomings inhibiting AI: 

Both introduce variability that affects assessments, reduces the generalizability of, and confidence in, imaging test results, and compromises the data quality required for AI applications.

The growing field of QI, however, provides accurate and precise (repeatable and reproducible) quantitative-image-based metrics that are consistent across different imaging devices and over time. This benefits patients (fewer scans, biopsies), doctors, researchers, insurers, and hospitals and enables safe, viable development and use of AI/ML tools.  

Quantitative imaging metrology and standards are required as a foundation for clinically relevant and useful QI. A change from “this might be a stage 3 tumor” to “this is a stage 3 tumor” will affect how oncologists can treat a patient. Quantitative imaging also has the potential to remove the need for an invasive biopsy and, in some cases, provide valuable and objective information before even the most expert radiologist’s qualitative assessment. This can mean the difference between taking a nonresponding patient off a toxic chemotherapeutic agent or recognizing a strong positive treatment response before a traditional assessment. 

Plan of Action 

The incoming administration should develop and fund a Quantitative Imaging Infrastructure to provide medical imaging with a foundation of rigorous QA/QC methodologies, metrology, and standards—all essential for AI applications.

Coordinated leadership is essential to achieve such standardization. Numerous medical, radiological, and standards organizations support and recognize the power of QI and the need for rigorous QA/QC and metrology standards (see FAQs). Currently, no single U.S. organization has the oversight capabilities, breadth, mandate, or funding to effectively implement and regulate QI or a standards and metrology framework.

As set forth below, earlier successful approaches to quality and standards in other realms offer inspiration and guidance for medical imaging and this proposal:

Recommendation 1. Create a Medical Metrology Center of Excellence for Quantitative Imaging. 

Establishing a QI infrastructure would transform all medical imaging modalities and clinical applications. Our recommendation is that an autonomous organization be formed, possibly appended to existing infrastructure, with the mandate and responsibility to develop and operationally support the implementation of quantitative QA/QC methodologies for medical imaging in the age of AI. Specifically, this fully integrated QI Metrology Center of Excellence would need federal funding to:

Once implemented, the Center could focus on self-sustaining approaches such as testing and services provided for a fee to users.

Similar programs and efforts have resulted in funding (public and private) ranging from $90 million (e.g., Pathogen Genomics Centers of Excellence Network) to $150 million (e.g., Biology and Machine Learning – Broad Institute). Importantly, implementing a QI Center of Excellence would augment and complement federal funding currently being awarded through ARPA-H and the Cancer Moonshot, as neither have an overarching imaging framework for intercomparability between projects.  

While this list is by no means exhaustive, any organization would need input and buy-in from:

International organizations also have relevant programs, guidance, and insight, including:

Recommendation 2. Implement legislation and/or regulation providing incentives for standardizing all medical imaging. 

The variability of current standard-of-care medical imaging (whether acquired across different sites or over a period of time) creates different “appearances.” This variability can result in different diagnoses or treatment response measurements, even though the underlying pathology for a given patient is unchanged. Real-world examples abound, such as one study that found 10 MRI studies over three weeks resulted in 10 different reports. This heterogeneity of imaging data can lead to a variable assessment by a radiologist (inter-reader variability), AI interpretation (“garbage-in-garbage-out”), or treatment recommendations from clinicians. Efforts are underway to develop “vendor-neutral sequences” for MRI and other methods (such as quantitative ground truth references, metrological standards, etc.) to improve data quality and ensure intercomparable results across vendors and over time. 

To do so, however, requires coordination by all original equipment manufacturers (OEMs) or legislation to incentivize standards. The 1992 Mammography Quality Standards Act (MQSA) provides an analogous roadmap. MQSA implemented rigorous standards for mammography; similar legislation focused on quality assurance of quantitative imaging, reducing or eliminating machine bias, and improving standards would reduce the need for repeat scans and improve datasets.

In addition, regulatory initiatives could also advance quantitative imaging. For example, in 2022, the Food and Drug Administration (FDA) issued Technical Performance Assessment of Quantitative Imaging in Radiological Device Premarket Submissions, recognizing the importance of ground truth references with respect to quantitative imaging algorithms. A mandate requiring the use of ground truth reference standards would change standard practice and be a significant step to improving quantitative imaging algorithms.

Recommendation 3. Ensure a funded QA component for federally funded research using medical imaging. 

All federal medical research grant or contract awards should contain QA funds and require rigorous QA methodologies. The quality system aspects of such grants would fit the scope of the project; for example, a multiyear, multisite project would have a different scope than single-site, short-term work.

NIH spends the majority of its $48 billion budget on medical research. Projects include multiyear, multisite studies with imaging components. While NIH does have guidelines on research and grant funding (e.g., Guidance: Rigor and Reproducibility in Grant Applications), this guidance falls short in multisite, multiyear projects where clinical scanning is a component of the study.  

To the extent NIH-funded programs fail to include ground truth references where clinical imaging is used, the resulting data cannot be accurately compared over time or across sites. Lack of standardization and failure to require rigorous and reproducible methods compromises the long-term use and applicability of the funded research. 

By contrast, implementing rigorous standards for QA/QC, standardization, and related practices improves research in terms of reproducibility, repeatability, and ultimate outcomes. Further, confidence in imaging datasets enables the use of existing, qualified research in future NIH-funded work and/or imaging dataset repositories being leveraged for AI research and development, such as the Medical Imaging and Data Resource Center (MIDRC). (See also: Open Access Medical Imaging Repositories.)

Recommendation 4. Implement a Clinical Standardization Program (CSP) for quantitative imaging. 

While not focused on medical imaging, the CDC’s CSPs have been incredibly successful and “improve the accuracy and reliability of laboratory tests for key chronic biomarkers, such as those for diabetes, cancer, and kidney, bone, heart, and thyroid disease.” By way of example, the CSP for Lipids Standardization has “resulted in an estimated benefit of $338M at a cost of $1.7M.” Given the breadth of use of medical imaging, implementing such a program for QI would have even greater benefits.  

Although many people think of the images derived from clinical imaging scans as “pictures,” the pixel and voxel numbers that make up those images contain meaningful biological information. The objective biological information that is extracted by QI is conceptually the same as the biological information that is extracted from tissue or fluids by laboratory assay techniques. Thus, quantitative imaging biomarkers can be understood to be “imaging assays.” 

The QA/QC standards that have been developed for laboratory assays can and should be adapted to quantitative imaging.  (See also regulations, history, and standards of the Clinical Laboratory Improvement Amendment (CLIA) ensuring quality laboratory testing.)

Recommendation 5. Implement an accreditation program and reimbursement code for quantitative imaging starting with qMRI.

The American College of Radiology currently provides basic accreditation for clinical imaging scanners and concomitant QA for MRI. These requirements, however, have been in place for nearly two decades and do not address many newer quantitative aspects (e.g., relaxometry and the apparent diffusion coefficient, ADC), nor do they account for the impact of image variability on effective AI use. Several new Current Procedural Terminology (CPT) codes focused on quantitative imaging have recently been adopted. An expansion of reimbursement codes for quantitative imaging could drive more widespread clinical adoption.

QI is analogous to the quantitative blood, serum and tissue assays done in clinical laboratories, subject to CLIA, one of the most impactful programs for improving the accuracy and reliability of laboratory assays. This CMS-administered mandatory accreditation program promulgates quality standards for all laboratory testing to ensure the accuracy, reliability, and timeliness of patient test results, regardless of where the test was performed. 

Conclusion

These five proposals provide a range of actionable opportunities to modernize the approach to medical imaging to fit the age of AI, data integrity, and precision patient health. A comprehensive, metrology-based quantitative imaging infrastructure will transform medical imaging through:

With robust metrological underpinnings and a funded infrastructure, the medical community will have confidence in QI data, unlocking powerful health insights that until now could only be imagined.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable, and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Is scanner variability and lack of standardization really an issue?

Yes. Using MRI as an example, numerous articles, papers, and publications acknowledge that qMRI scanner output can vary between manufacturers, over time, and after software or hardware maintenance or upgrades.

What is in-vivo imaging metrology, and why is it the future?

With in-vivo metrology, measurements are performed on the “body of living subjects (human or animal) without taking the sample out of the living subject (biopsy).” True in-vivo metrology will enable the diagnosis or understanding of tissue state before a radiologist’s visual inspection. Such measurement capabilities are objective, in contrast to the subjective, qualitative interpretation by a human observer. In-vivo metrology will enhance and support the practice of radiology in addition to reducing unnecessary procedures and associated costs.

What are the essential aspects of QI?

Current digital imaging modalities provide the ability to measure a variety of biological and physical quantities with accuracy and reliability, e.g., tissue characterization, physical dimensions, temperature, body mass components, etc. However, consensus standards and corresponding certification or accreditation programs are essential to bring the benefits of these objective QI parameters to patient care. The CSP follows this paradigm as does the earlier CLIA, both of which have been instrumental in improving the accuracy and consistency of laboratory assays. This proposal aims to bring the same rigor to immediately improve the quality, safety and effectiveness of medical imaging in clinical care and to advance the input data needed to create, as well as safely and responsibly use, robust imaging AI tools for the benefit of all patients.

What are “phantoms,” or ground truth references, and why are they important?

Phantoms are specialized test objects used as ground truth references for quantitative imaging and analysis. NIST plays a central role in measuring and testing solutions for phantoms. Phantoms are used in ultrasound, CT, MRI, and other imaging modalities for routine QA/QC and machine testing. Phantoms are key to harmonizing and standardizing data and to improving the data quality needed for AI applications.

What do you mean by “precision medicine”? Don’t we already have it?

Precision medicine is a popular term with many definitions/approaches applying to genetics, oncology, pharmacogenetics, etc. (See, e.g., NCI, FDA, NIH, National Human Genome Research Institute.) Generally, precision (or personalized) medicine focuses on the idea that treatment can be individualized (rather than generalized). While there have been exciting advances in personalized medicine (such as gene testing), the variability of medical imaging is a major limitation in realizing the full potential of precision medicine. Recognizing that medical imaging is a fundamental measurement tool from diagnosis through measurement of treatment response and toxicity assessment, this proposal aims to transition medical imaging practices to quantitative imaging to enable the realization of precision medicine and timely personalized approaches to patient care.

How does standardized imaging data and QI help radiology and support healthcare practitioners?

Radiologists need accurate and reliable data to make informed decisions. Improving standardization and advancing QI metrology will support radiologists by improving data quality. To the extent radiologists are relying on AI platforms, data quality is even more essential when it is used to drive AI applications, as the outputs of AI models rely on sound acquisition methods and accurate quantitative datasets.


Standardized data also helps patients by reducing the need for repeat scans, which saves time, money, and unnecessary radiation (for ionizing methods).

Does quantitative imaging improve accessibility to healthcare?

Yes! Using MRI as an example, qMRI can advance and support efforts to make MRI more accessible. Historically, MRI systems cost millions of dollars and are located in high-resource hospital settings. Numerous healthcare providers and policymakers are working to create “accessible” MRI systems, including portable systems at lower field strengths and systems designed to address organ-specific diseases. New low-field systems can reach patient populations historically absent from high-resource hospital settings. However, robust and reliable quantitative data are needed to ensure that data collected in rural or nonhospital settings, or in low- and middle-income countries, can be objectively compared to data from high-resource hospital settings.


Further, accessibility can be limited by a lack of local expertise. AI could help fill the gap; however, a QI infrastructure is needed for the safe and responsible use of AI tools, ensuring adequate quality of the input imaging data.

What is a specific example of the benefits of standardization?

The I-SPY 2 Clinical Breast Trials provide a prime example of the need for rigorous QA and scanner standardization. The I-SPY 2 trial is a novel approach to breast cancer treatment that closely monitors treatment response to neoadjuvant therapy. If there is no immediate/early response, the patient is switched to a different drug. MR imaging is acquired at various points during treatment to determine the initial tumor size and functional characteristics and then to measure any tumor shrinkage/response over the course of treatment. One quantitative MRI tumor characteristic that has shown promise for evaluating treatment response and is being evaluated in the trial is the apparent diffusion coefficient (ADC), a measure of tissue water mobility calculated from diffusion-weighted imaging. It is essential for the trial that MR results can be compared over time as well as across sites. To truly know whether a patient is responding, the radiologist must have confidence that any change in the MR reading or measurement is due to a physiological change and not to a scanner change such as drift, gradient failure, or software upgrade.


For the I-SPY 2 trial, breast MRI phantoms and a standardized imaging protocol are used to test and harmonize scanner performance and evaluate measurement bias over time and across sites. This approach then provides clear data/information on image quality and quantitative measurement (e.g., ADC) for both the trial (comparing data from all sites is possible) as well as for the individual imaging sites.
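For readers unfamiliar with how ADC values are obtained, the standard mono-exponential diffusion model is S(b) = S(0) * exp(-b * ADC), where S(b) is the signal at diffusion weighting b. A minimal sketch of this calculation, using illustrative values rather than any trial data, shows why comparable signal measurements across scanners and time points are a precondition for comparable ADC numbers:

```python
import math

def adc(s_b0, s_bval, b):
    """Mono-exponential diffusion model: S(b) = S(0) * exp(-b * ADC).
    Solving for ADC gives ln(S(0)/S(b)) / b, in mm^2/s when b is in s/mm^2."""
    return math.log(s_b0 / s_bval) / b

# Simulated single voxel with a tissue-like ADC of 1.0e-3 mm^2/s (illustrative)
s0 = 1000.0          # signal with no diffusion weighting
b = 800.0            # diffusion weighting in s/mm^2
true_adc = 1.0e-3
signal = s0 * math.exp(-b * true_adc)  # what the scanner would measure
estimate = adc(s0, signal, b)          # recovers ~1.0e-3 mm^2/s
```

Any scanner drift that biases S(0) or S(b) differently propagates directly into the ADC estimate, which is why phantom-based calibration across sites and over time matters for the trial.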

What are the benefits of a metrological and standards-based framework for medical imaging in the age of AI?

Nonstandardized imaging results in variation that requires orders of magnitude more data to train an algorithm. More importantly, without reliable and standardized datasets, AI algorithms drift, resulting in degradation of both protocols and performance. Creating and supporting a standards-based framework for medical imaging will mitigate these issues as well as lead to:



  • Integrated and coordinated system for establishing QIBs, screening, and treatment planning.

  • Cost savings: Standardizing data and implementing quantitative imaging yield superior datasets for clinical use or as components of large datasets for AI applications. Clinical Standardization Programs have focused on standardizing tests and have been shown to save “millions in health care costs.”

  • Better health outcomes: Standardization reduces reader error and enables new AI applications to support current radiology practices.

  • Support for radiologists’ diagnoses.

  • Fewer incorrect diagnoses (false positives and false negatives).

  • Elimination of millions of unnecessary invasive biopsies.

  • Fewer repeat scans.

  • Robust and reliable datasets for AI applications (e.g., preventing model collapse).


Such a framework benefits federal organizations such as the National Institutes of Health, the Centers for Medicare and Medicaid Services, and the Department of Veterans Affairs, as well as the private and nonprofit sectors (insurers, hospital systems, pharmaceutical, imaging software, and AI companies). The ultimate beneficiary, however, is the patient, who will receive an objective, reliable quantitative measure of their health—relevant for a point-in-time assessment as well as longitudinal follow-up.

Who is likely to push back on this proposal, and how can that hurdle be overcome?

Possible pushback from such a program may come from: (1) radiologists who are unfamiliar with the power of quantitative imaging for precision health and/or the importance and incredible benefits of clean datasets for AI applications; or (2) manufacturers (OEMs) who aim to improve output through differentiation and are focused on customers who are more interested in their qualitative practice.


Radiology practices: Radiology practices’ main objective is to provide the most accurate diagnosis possible in the least amount of time, as cost-effectively as possible. Standardization and calibration are generally perceived as adding time and cost; in practice, however, these perceptions are often mistaken, and it is the variability in imaging that consumes more time and creates challenges. The existing standard of care relies on qualitative assessments of medical images.


While qualitative assessment can be excellent for understanding a patient’s health at a single point in time (though even in these cases subtle abnormalities can be missed), longitudinal monitoring is impossible without robust metrological standards for reproducibility and quantitative assessment of tissue health. While a move from qualitative to quantitative imaging may require additional education, understanding, and time, such an infrastructure will provide radiologists with improved capabilities and an opportunity to supplement and augment the existing standard of care.


Further, AI is undeniably being incorporated into numerous radiology applications, which will require accurate and reliable datasets. As such, it will be important to work with radiology practices to demonstrate that a move to standardization will, ultimately, reduce time and increase the ability to accurately diagnose patients.


OEMs: Imaging device manufacturers work diligently to improve their outputs. To the extent differentiation is seen as a business advantage, a move toward vendor-neutral and scanner-agnostic metrics may initially be met with resistance. However, all OEMs are investing resources to improve AI applications and patient health. All benefit from input data that is standard and robust and provides enough transparency to ensure FAIR data principles (findability, accessibility, interoperability, and reusability).


OEMs have plenty of areas for differentiation, including improving the patient experience and shortening scan times. We believe OEMs, as part of their move to embrace AI, will find a clear metrology- and standards-based framework a positive for their own business and for the field as a whole.

What is the first step to get this proposal off the ground? Could there be a pilot project?

The first step is to convene a meeting of leaders in the field within three months to establish priorities and timelines for successful implementation and adoption of a Center of Excellence. Any Center must be well-funded with experienced leadership and will need the support and collaboration across the relevant agencies and organizations.


There are numerous potential pilots. The key is to identify an actionable study where results could be achieved within a reasonable time. For example, a pilot study demonstrating the importance of quantitative MRI and sound datasets for AI could be implemented at the Department of Veterans Affairs hospital system. This study could focus on quantifying the benefits of standardization and implementation of quantitative diffusion MRI, an “imaging biopsy” modality, as well as mirror advances and knowledge identified in the existing I-SPY 2 clinical breast trials.

Why have similar efforts failed in the past? How will your proposal avoid those pitfalls?

The timing is right for three reasons: (1) quantitative imaging is doable; (2) AI is upon us; and (3) there is a desire and need to reduce healthcare costs and improve patient outcomes.


There is widespread agreement that QI methodologies have enormous potential benefits, and many government agencies and industry organizations have acknowledged this. Unfortunately, there has been no unifying entity with sufficient resources and professional leadership to coordinate and focus these efforts; many of the relevant organizations have been organized and run by volunteers. Finally, some previously funded efforts to support quantitative imaging (e.g., QIN and QIBA) have recently lost dedicated funding.


With rapid advances in technology, including the promise of AI, there is new and shared motivation across communities to revise our approach to data generation and collection at-large—focused on standardization, precision, and transparency. By leveraging the existing widespread support, along with dedicated resources for implementation and enforcement, this proposal will drive the necessary change.

Is there an effort or need for an international component?

Yes. Human health has no geographical boundaries, so a global approach to quantitative imaging would benefit all. QI is being studied, implemented, and adopted globally.


However, as is the case in the U.S., while standards have been proposed, there is no international body to govern the implementation, coordination, and maturation of this process. The initiatives put forth here could provide a roadmap for global collaboration (ever-more important with AI) and standards that would speed up development and implementation both in the U.S. and abroad.

Work-based Learning for All: Aligning K-12 Education and the Workplace for both Students and Teachers

The incoming presidential administration of 2025 should champion a policy position calling for strengthening the connection between K-12 schools and community workplaces. Such connections yield many benefits, including modernized curricula, more meaningful lessons, more motivated students, greater college and career readiness, more qualified applicants for local jobs, more vibrant communities, and a stronger nation. The gains associated with education-workplace partnerships are certainly not exclusive to STEM disciplines, but given the high demand for talent in STEM business and industry, the imperative may be greatest in science and mathematics and in the applied domains of engineering and technology.

The rationale for a policy priority around K-12 and workplace partnerships centers around waning public confidence in the ability of schools to prepare tomorrow’s workforce. A perceived disconnect between what gets taught and what learners need in order to thrive on the job threatens individual livelihoods, family and community stability, and national competitiveness in an ever-more rapidly evolving global economy. Bridges are needed that unite education and workplaces, putting students and their teachers to work beyond the classroom. A new administration should:

  1. Expand externships for teachers in community workplaces. The best way to help every student to explore and to be inspired about career horizons is to prepare and inspire their teachers to represent to them the opportunities that await. Externships in community workplaces sharpen teachers’ content knowledge and skills and equip them to portray the exciting careers that await students. The existing Research Experiences for Teachers (RET) federal infrastructure can be adapted for supporting externships. 
  2. Deploy Competency-Based Education (CBE) at scale. America’s prevailing school model inhibits the expansion of experiential, or Work-Based Learning (WBL), in workplaces. The school day is a regimented sequence of seat-time tallies toward a seven-period stack of classes, yielding little if any time to immerse learners in relevant experiences at workplaces. Or as one advocacy organization phrased it, “Today’s high school transcript is a record of time and activity, but not a very good measure of knowledge, skills, and dispositions. It doesn’t capture experiences or work products that provide evidence of growth and accomplishment.” An internet search of Work-based Learning nets over 3 billion hits. It’s one of the hottest topics in education. But those hits reveal a weakness of the WBL “movement”: it is almost entirely focused on career and technical education, a branch of general education serving about one-fourth of all students. Going forward, core area teachers and classes must take part. To do so, mathematics, science, and other required and college preparatory courses need flexibility in seat time and content delivery. When teachers, schools, and districts adopt Competency-Based Education, the other 75% of learners gain time to earn credits by acquiring the knowledge and skills of a subject area while doing, making, and working. Models exist for doing so.

Concerted federal policy promoting the connection between K-12 schools and community workplaces sends a strong, bipartisan message to both the education and employer sectors of the nation that the myriad advantages of cross-sector collaboration for learners, employers, and communities will now be the norm, not the exception. Moreover, it requires no novel, untested programmatic priorities: they are already at play in forward-thinking communities. Teacher externships dot the American landscape and will fit neatly into a new RET mold (coupling Research Experiences for Teachers with Regional Externships for Teachers as menu options). Competency-Based Education, with guidelines for Work-Based Learning, is already on paper in most U.S. states. Now is prime time to expand these life-changing educational reforms for all young Americans.

Such expansions would fit neatly into existing federal structures; federal agencies have long supported competency-based education (U.S. Department of Education), Work-Based Learning (U.S. Department of Labor), and teacher externships (U.S. Department of Energy and National Science Foundation). The current national landscape of teacher externships, while promising, is fraught with inconsistency and low participation: presently there are thousands of local teacher-externship models of wide variation in duration and rigor, operated by school districts, local business organizations, higher education institutions, and regional education groups. Federal research-based guidelines and example-setting are desperately needed to standardize high-quality experiences. Federal guidance and promotion could also help expand these experiences from the present low capacity (estimating 10 teachers per year in 5,000 local programs equates to 50,000 teacher-externs annually, while there are over 3 million K-12 educators nationwide, meaning 60 years to reach all practitioners) to greater volume through more workplace and educator involvement.
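The capacity estimate in the parenthetical above can be checked with the memo's own figures (all inputs below are those cited figures, not independent data):

```python
# Back-of-envelope check of the teacher-externship capacity estimate,
# using only the figures cited in the text above.
programs = 5_000           # local teacher-externship programs (estimate)
teachers_per_program = 10  # teachers served per program per year (estimate)
educators = 3_000_000      # K-12 educators nationwide (stated lower bound)

externs_per_year = programs * teachers_per_program   # 50,000 per year
years_to_reach_all = educators / externs_per_year    # 60 years at current pace
```

At 50,000 externs per year, reaching 3 million educators would indeed take 60 years, which is the point of the call for greater volume.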

Similarly, the national portrait for competency-based education leading to work-based learning presents a golden opportunity to usher educational transformation. At present, many schools and districts implement CBE to limited degrees in specific courses (typically Career and Technology Education, or CTE) for certain students (non-college bound). The potential for far greater impact across courses and the entire student spectrum awaits federal guidance and support.   

Challenge and Opportunity  

Urgency for Action

Thousands of businesses in towns and cities across the United States use science, mathematics, and technology to engineer global goods while struggling to find and employ local talent. Thousands of schools across the U.S. teach science, mathematics, engineering, and technology yet struggle to inspire their students toward local career opportunities. These two seemingly parallel universes overlap like the acetate pages of an anatomy textbook—muscle over bone—while largely failing to unite for mutual benefit. Iowa, for example, is home to 4,273 global manufacturers depending on 263,870 employees to move product out the door. Pella Window, John Deere, Vermeer, Diamond-Vogel, Collins Aerospace, Winnebago, Tyson, and others scramble to fill roughly 15,000 STEM job openings (p. 61) at any given time. The good news is that 75% of the state’s high school graduates profess interest (p. 29) in STEM careers. The bad news is that just 37% of graduates (p. 30) intend to live and work in Iowa. That is, unless they’ve enjoyed a work-based learning experience and/or had a teacher who spent a summer in industry. The Iowa experience parallels that of many rural and urban regions across the country: students whose teachers externed find more relevance in STEM classes applied to local jobs, and students who enjoy work-based learning are more likely to pursue careers locally after graduation. In combination, these two programs create a culture of connectedness between the world of work and the world of education, generating a win-win outcome for educators, employers, families, communities, and, most importantly, students.

Opportunity for Impact

Immersing students and their teachers in workplace experiences is not a new idea. Career and technical education (CTE) has been a driving force for WBL for over 100 years. More recently, federal policy during the Obama administration re-shaped the blueprint for Perkins reauthorization by encouraging models that “better focus on real world experiences” (p. 3). And under the Trump administration the federal STEM education strategic plan called for a new and renewed emphasis on “…education-employer partnerships leading to work-based learning…” (p. 4). The key word here is “new,” and it is not being emphasized enough: the status quo remains centered on CTE when it comes to teachers and students connecting with the work world, leaving out nearly three-quarters of all students. High school internships, for example, are completed by only about two percent of U.S. students, and CTE programs are completed by approximately 22 percent of white students but 18 percent of Black and 16 percent of Hispanic students. The national standards upon which states and districts base their mathematics and science curricula, including the Common Core and the Next Generation Science Standards, are not much help: they urge applied classroom problem-solving but fail to promote WBL for students or teachers. Today, the vast majority of K-12 student WBL opportunities—internships, apprenticeships, job shadows, collaborative projects, etc.—take place through the CTE wing of schools. Likewise, most teacher-externship programs engage CTE educators almost exclusively.

The potent WBL tools of career and technical education, transposed to core subject areas, can invigorate mathematics, science, and computing classes, too.

Impact Opportunity for Externships

As one former extern put it, “If you send one kid on an internship, it affects that one kid. If you send a teacher, the impact reaches their 200 students!” Especially for today’s rapidly growing and economically vital career sectors, including Health Science, Information Technology, Biotech, Manufacturing, Agriculture, Data Analytics, Food, and Natural Resources, teacher externships can fuel the talent pipeline. Iowa has been conducting just such an experiment for a decade, making this type of professional development available to core-discipline teachers. “Surveyed teacher-externs agreed or strongly agreed that it affected the way they taught, their understanding of 21st century [transportable] skills through math and science, and they agreed or strongly agreed that more students expressed an interest in STEM careers as a result of their having participated in the externship” (p. 12). Nearly all participating teachers (93%) described the externship as “more valuable than any other PD in which they had ever taken part” (p. 13).

Specific impacts on teachers included the following: 

Specific impacts on their students include the following: 

Beyond the direct effects upon students and their teachers, externships in local workplaces leave lasting relationships that manifest year after year in tours, projects, mentorships, equipment support, summer jobs, etc. Teacher testimonials speak to the lasting effects. 

Impact Opportunity for CBE and WBL

Although rarely implemented, every U.S. state now allows Competency-Based Education (CBE). Broadly defined, CBE is an education model in which students advance and graduate by demonstrating mastery of a subject’s concepts and skills, rather than by logging a set number of seat-time hours and passing tests. Students move at an individualized pace, concepts are accrued at variable rates and in variable sequences, teachers operate as facilitators, and the work is more often project-based—much of it occurring outside classroom walls. CBE solves the top inhibitor to Work-Based Learning for non-CTE, core content areas of study including science, mathematics, and computing: it frees up time.

Utah, Washington, and Wyoming are considered leaders in the CBE arena for crafting policy guidelines sufficient for a few schools to pilot the model. In Washington, 28 school districts have formed the Mastery-Based Learning Collaborative (MBLC), through which each district will establish at least one CBE school.

Another trailblazer in CBE, North Dakota, was recently recognized by the Education Commission of the States for legislating a series of changes to school rules that remove barriers to CBE and WBL: (a) a competency-based student graduation pathway and allowance for outside work to count for course credit; (b) level state support per student whether credits are earned inside or outside the classroom; and (c) scholarships that honor demonstrated competency equally with the standard credits-and-grades criterion.

Finally, a school that typifies the power of CBE across subject areas, supported by the influential XQ Institute, is a metropolitan magnet model called Iowa BIG in Cedar Rapids. Enrollees choose local projects carried out with an industry partner. Projects, like real life, are necessarily transdisciplinary. And project outcomes (i.e., mastery) determine grades. Outcomes include:

Yet, for all its impact and promise, Iowa BIG, like many CBE pilots, struggles to broaden its offerings (currently limited to English, social studies, and business credits) and its enrollment (roughly 100 students out of a grade 11-12 regional population more than ten times that size). As discussed in the next section, CBE programs can be significantly constrained by local, state, and federal policies (or the lack thereof).

Challenges Limiting Impact

The limited exposure of American K-12 students to teachers who have enjoyed an externship, or to Competency-Based Education leading to Work-Based Learning, testifies to the multiple layers of challenge to be navigated. At the local district level, school schedules and the lack of communication across school-business boundaries are the chief inhibitors to WBL, while educator professional development and crediting/graduation rules suppress CBE. At the state level, the inhibitors reveal themselves to be systemic: the funding of and competing priorities for educator professional development, the lack of a coherent and unifying profile of a graduate, standardized assessments, and graduation requirements all slow forward movement on experiential partnerships. Logically, federal challenges have enormous influence on state and local conditions: the paucity of research and development on innovative instructional and assessment practices, inadequate communication of existing resources to drive WBL and other national education imperatives, insufficient support for the establishment of state and regional intermediary structures to drive local innovation, and non-complementary funding programs that, if coordinated, could significantly advance K-12–workplace alignment.

The pace of progress at the local school level is ultimately most strongly influenced by federal policy priorities. The policy is well established in the federal STEM education strategic plan Charting a Course for Success: America’s Strategy for STEM Education, a report by the Committee on STEM Education of the National Science and Technology Council, under Pathway 1: Develop and Enrich Strategic Partnerships (p. 9). The plan was developed through, and embraced for, its bipartisan approach. Refocusing on its fulfillment will make the United States a stronger and more prosperous nation.

Plan of Action

The federal government’s leadership is paramount in driving policy toward education-workplace alignment. Its roles range from investment to asset allocation to communication, applying to both teacher externships and CBE leading to WBL.

(1) Congress should legislate that all federal agencies involved in STEM education outreach (those represented on the Committee on STEM Education [Co-STEM] and on the Subcommittee on Federal Coordination in STEM Education [FC-STEM]) establish teacher-externship programs at their facilities as capacity and security permit. The FC-STEM should designate an Inter-agency Working Group on Teacher-Externships [IWG-TE] charged with developing a standard protocol consistent with evidence-based practice (e.g., a minimum four-week, maximum eight-week summer immersion; an authentic work experience applying the knowledge and skills of the teaching discipline; close mentorship and supervision; the production of a translational teaching product such as a lesson, unit, or career exploratory component; compensation commensurate with qualifications; awareness and promotion activities; etc.). The IWG-TE will provide an annual report of externship activity across agencies to the FC-STEM and Co-STEM.

(2) Within two years of enactment, all agencies participating in teacher externships shall develop and implement an expansion of the externship model to localities nationwide through a grant program by which eligible local, area, and state education agencies (LEAs, AEAs, and SEAs) may compete for funding to administer local teacher-externship programs in partnership with local employers (industry, nonprofit, public sector, etc.) pertinent to the mission and scope of the respective agency. For example, EPA may fund externs in state natural resource offices, and NASA may fund externs in aerospace industry facilities. The IWG-TE will include progress and participation in the grant program as part of its annual report.

(3) The IWG-TE shall design and administer an assessment instrument for components (1) and (2) that details participation rates by agency, demographics of participants, impact on participants’ teaching, and evidence of impact on participants’ students related to interest in and capability for high-demand career pursuit. An external expert in teacher-externship administration may be contracted for guidance in establishing the externship program and its assessment.

As to funding, the agencies charged with implementation are those already conducting outreach, so initially no new dollars may need to accompany the mandate. However, the second component (grants) would require new funding. We propose a budget line request in 2027 of $10 million, to be distributed proportionally to agencies based on the number of externs – determined by the Office of Science and Technology Policy in close consultation with FC-STEM – such that a goal of 1,500 total externs can be supported nationwide at an estimated cost of $6,000 each, plus administrative costs. In summary:

Teacher Externships

Competency-based Education leading to Work-Based Learning

Recommendations supporting both innovations

Conclusion 

Teachers prepared to connect what happens between 8:00 am and 3:00 pm to real life beyond school walls reflect the future of education. Learners whose classrooms expand to workplaces hold our best hopes as tomorrow’s innovators. Studying forces and vectors at the amusement park makes physics come alive. Embryo care at the local hatchery enlivens biology lessons. Pricing insurance against actuarial tables adds up in algebra. Crime lab forensics gives chemistry a courtroom. Designing video games that use AI to up the action puts a byte in computer study. And all such experiences fuel passions and ignite dreams for STEM study and careers. Let America put learners and their teachers to work beyond classrooms to bridge the chasm between classrooms and careers. This federal policy priority will be a win-win for learners, their families and communities, employers, and the nation.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Pursuing A Missile Pre-Launch Notification Agreement with China as a Risk Reduction Measure

While attempts at dialogue and military-to-military communication with China regarding its growing nuclear arsenal have increased, the United States has so far been unable to establish permanent lines of communication on nuclear weapons issues, let alone reach a substantive bilateral arms control agreement with China. Given the simmering tensions between the United States and China, lack of communication can be dangerous. Miscommunication or miscalculation between the two nuclear powers – especially during a crisis – could lead to escalation and increased risk of nuclear weapons use. 

In an effort to prevent this, the next U.S. presidential administration should pursue a Missile Pre-Launch Notification Agreement with China. The agreement should include a commitment by each party to notify the other ahead of all strategic ballistic missile launches. Similar agreements currently exist between the United States and Russia and between China and Russia. One between the United States and China would be a significant confidence-building measure for reducing the risk of nuclear weapons use and establishing a foundation for future arms control negotiations.

Challenge and Opportunity

Between states with fragile relations, missile launches may be seen as provocative. In the absence of proper communication, a surprise missile test launch in the heat of a tense crisis could trigger overreaction and escalate tensions. Early warning systems are designed to detect incoming missiles, but experts estimate that the US early-warning system would have just two minutes to determine whether an attack is real or serious enough to advise the president on a possible nuclear counterattack. For example, when the Soviet Union test-launched four submarine-launched ballistic missiles (SLBMs) in 1980, the US early warning system projected that one of the missiles appeared to be headed toward the United States, resulting in an emergency threat assessment conference of US officials.

Establishing regular communications is increasingly important as China grows its nuclear arsenal of quick-launching ballistic missiles, with the Pentagon estimating that China’s arsenal may reach 1,000 warheads by 2030. This is creating increasing concern about China’s intentions for how it might use nuclear weapons. In reaction, some US officials are signaling that it may be necessary for the United States to field new nuclear weapons systems or increase the number of deployed warheads. Defense hawks even advocate curtailing diplomatic communication with China, arguing that talks would allow China leverage and insight into US nuclear thinking.

With tensions and aggressive rhetoric on the rise, the next administration needs to prioritize and reaffirm the necessity of regular communication with China on military and nuclear weapons issues to reduce the risk of misunderstandings and conflict and mitigate the chance of accidental escalation and miscalculation.

The opportunity for negotiating an agreement with China exists despite heightened tensions. Although still inadequate, military-to-military communications between China and the United States have improved since a breakdown in 2022 following Speaker Nancy Pelosi’s visit to Taiwan, to which China responded with military exercises, missile tests, and sanctions on the island.

On November 6, 2023, Chinese Director-General of the Department of Arms Control Sun Xiaobo and US Assistant Secretary of State for Arms Control, Deterrence, and Stability Mallory Stewart discussed nonproliferation and nuclear transparency during the first US-China arms control talks in five years. Days later, Presidents Biden and Xi decided to resume military-to-military conversations and encouraged a follow-up arms control talk. A high-level China-US defense policy talk at the Pentagon in early January 2024 followed this summit. Most recently, Presidents Biden and Xi agreed in Lima, Peru that humans, not artificial intelligence, should have control over the decision to launch nuclear weapons. These meetings show promising signs of improved dialogue, but the United States’ continual emphasis on China as a competitor and China’s recent cancellation of arms control talks with the United States over Taiwan continue to undermine progress.

Policy Models

A Missile Pre-Launch Notification Agreement between China and the United States should include a commitment to provide at least 24 hours of advance notice of all strategic ballistic missile tests, including the planned launch and impact locations. The agreement would build on historical models of risk reduction measures between other states. For example, at the 1988 Moscow Summit, the United States and the Soviet Union signed the Agreement on Notifications of Launches of Ballistic Missiles to notify each other of the planned date, launch area, and area of impact no less than 24 hours in advance of any intercontinental ballistic missile (ICBM) or submarine-launched ballistic missile (SLBM) launches. These notifications were communicated through established Nuclear Risk Reduction Centers. The Strategic Arms Reduction Talks (START), signed in 1991, followed up on the notification agreement by including an agreement to provide more information, such as telemetry broadcast frequencies, in addition to the planned launch date and the launch and reentry area.

The two countries expanded on this agreement through the Memorandum of Agreement on the Establishment of a Joint Center for the Exchange of Data from Early Warning Systems and Notifications of Missile Launches (also known as JDEC MOA) and the Memorandum of Understanding on Notifications of Missile Launches (PLNS MOU). The purpose of these agreements, signed in 2000, was to prevent a nuclear attack based on a false early warning system notification, and the agreements were carried forward into the New START treaty that entered into force in 2011.

While Russia has suspended its participation in the New START treaty and increased its threatening rhetoric around the potential use of nuclear weapons in its war in Ukraine, the Russian Foreign Ministry said that Russia would continue to provide notification of ballistic missile launches to the United States. This demonstrates the value of communication amid tensions and conventional conflict to prevent misunderstanding. 

In 2009, Russia and China signed a pre-launch notification agreement, marking China’s first bilateral arms control agreement. This agreement was extended in 2020 for another 10 years and covers launches of ballistic missiles with ranges over 2,000 km that are in the direction of the other country. The United States and China have no such arrangement. However, China did notify the United States, Australia, New Zealand, and the Japanese Coast Guard 24 hours before an ICBM launch into the Pacific Ocean on September 25, 2024. This launch appeared to be the first test into the Pacific China has conducted in over thirty years, and the gesture of notifying the United States beforehand was, according to a Pentagon spokesperson, “a step in the right direction to reducing the risks of misperception and miscalculation.” With this notification, the groundwork and precedent for dialogue on a missile pre-launch notification agreement has been laid.

Plan of Action

Create and present a draft agreement

The next administration should direct the State Department Bureau of Arms Control, Deterrence, and Stability to draft a proposal for a missile pre-launch notification agreement requiring mutual pre-launch notifications for missile launches with ranges of 2,000 km or more, as well as the sharing of launch and impact locations.

The US Assistant Secretary of State for Arms Control, Deterrence, and Stability should present the draft proposal to China’s Director-General of the Department of Arms Control of the Foreign Ministry.

Invite President Xi Jinping to participate in talks

The administration should propose a neutral site in the Asian-Pacific region, possibly in Hanoi, Vietnam, for a meeting between the US president and President Xi Jinping to emphasize the shared goal of trade security and discuss a missile test launch agreement. The meeting should include other high-level military commanders, including the Chairman of the Joint Chiefs of Staff and Secretary of Defense, as well as their relevant Chinese counterparts. 

Continue notifying China of all US missile test launches

The next administration should continue the precedent set by China in September 2024 by voluntarily providing advance notification of all ballistic missile test launches even in the absence of a negotiated agreement, as was done for the November 2024 Minuteman launch, and even if the United States must do so unilaterally going forward. Such action would improve the prospects for reaching a negotiated agreement by demonstrating good faith and a commitment to conflict mitigation.

Raise the topic of missile launch notifications in P5 meetings

China has since assumed the rotating position of Chair of the P5, which could be a useful forum for considering new proposals for risk reduction measures among all nuclear states. After direct engagement with China on an agreement, China may have an interest in working with the United States to lead a multilateral agreement, as China would have more control over the language, international recognition for nuclear risk reduction, and improved security amid global nuclear modernization.

The next administration should direct the Special Representative of the President for Nuclear Nonproliferation under the Bureau of International Security and Nonproliferation to raise the topic of missile launch notifications and a potential launch notification agreement during the P5 process meeting ahead of the 2025 Nonproliferation Treaty (NPT) preparatory conference.

In order to work constructively with China on reducing the risk of nuclear use, a pre-launch notification agreement should, for now, be decoupled from any other arms control measures that would propose limiting China’s nuclear weapons stockpile or any launch capabilities. While comprehensive arms control may be an ultimate goal, linking the two at the outset would complicate talks significantly and likely prevent an agreement from coming to fruition; the United States should start with small steps to foster trust between the two nations and deepen regular military-to-military communication. 

Pursuing and negotiating a Missile Pre-Launch Notification Agreement with China will emphasize common objectives and help prevent escalation by miscommunication.


Unlocking The Future Of Work by Updating Federal Job Classifications

The Standard Occupational Classification (SOC) system contains critical statistical information about occupations, employment levels, trends, pay and benefits, demographic characteristics, and more. The system allows users – including leaders at federal agencies – to collect, analyze, and disseminate data on employment trends, wages, and workforce demographics, and it enables a consistent analysis of the labor market. However, the rapid evolution of the job market, particularly in the tech sector, is outpacing updates to the SOC system. This misalignment poses challenges for economic measurement and development. The Office of Personnel Management (OPM) and the White House Office of Management and Budget (OMB) should lead a comprehensive effort to update SOC codes through research, collaboration with industry experts, pilot programs, and regulatory adjustments. By acting now, the Administration can create clear career pathways for workers and better equip federal agencies with critical workforce insights to optimize national investments.

Challenge and Opportunity

Outdated SOC classifications hinder efficient workforce planning, as traditional classifications do not reflect emerging tech roles or the energy innovation sector. Accurate SOC codes are necessary to enhance job growth analysis and create an efficient hiring pipeline that meets the demands of a fast-evolving job market. OMB is currently updating the SOC manual and aims to complete the update by 2028. This is an opportunity to modernize classifications and include new roles that drive economic growth and support workforce development. Newer and emerging roles such as Renewable Energy Technicians, Large Language Model Engineers, Blockchain Developers, and Sustainability Engineers are either absent from or not sufficiently detailed within the current SOC system. These emerging positions involve specialized skills like developing AI algorithms, creating decentralized applications, or designing immersive virtual environments, which go beyond the scope of traditional software development or IT security.

Clear job classifications will allow for the efficient tracking of new, in-demand roles in emerging tech sectors, aligning with recent large federal investments, such as the CHIPS Act and IIJA, which aim to strengthen American industries. Updates to the SOC system will boost local economies by helping communities develop effective workforce training programs tailored to new job trends. They will provide clarity on required skills and competencies, making it easier for employers to develop accurate job descriptions and hire efficiently. Updates will provide workers with access to clear job descriptions and career pathways, allowing them to pursue opportunities and training in emerging fields like renewable energy and AI. SOC updates ensure national workforce strategies are data-driven and align with economic and industrial goals. The updates will ensure policymakers and researchers have accurate measurements of economic impacts and employment trends. 

Plan of Action

To modernize the SOC system and better reflect emerging tech roles, a dual-track plan involving comprehensive research, collaboration with key stakeholders, pilot programs, interagency awareness efforts, and regulatory updates is needed. The Bureau of Labor Statistics (BLS), specifically the SOC policy committee, should lead this work in partnership with the Office of Personnel Management (OPM) and the Office of Management and Budget (OMB). Key partners will include the Department of Energy (DOE) and the Department of Labor (DOL), as well as industry experts, academic institutions, and nonprofit organizations focused on workforce development.

Recommendation 1. Update the SOC System. 

The BLS, along with OPM and OMB, should begin a comprehensive update process, with a focus on defining new roles in the market. They should collaborate with industry experts, run pilot programs with federal and state agencies, and conduct research with academic institutions to ensure classifications accurately reflect the responsibilities and qualifications of modern roles.

Recommendation 2. Reinstate Green Job Programs/Develop Frameworks.

OPM and OMB should work to immediately establish classifications for emerging tech occupations and establish guidelines that facilitate the inclusion of emerging job categories in federal and state employment databases. Concurrently, they should advocate for the reinstatement and sustainable funding of job programs impacted by sequestration. These actions align with broader federal priorities on technological innovation and will require ongoing collaboration with Congress for budget approval. For example, before the work was stopped, BLS had $8 million per year for its initiative measuring “green collar” jobs.

Recommendation 3. Pilot Programs and Interagency Awareness Efforts.

To validate the proposed changes, the BLS can implement pilot programs in collaboration with the broader DOL and selected state workforce agencies. These pilots will test the practical application of updated SOC codes, gather data on their effectiveness, and increase awareness of the SOC system’s role. The total estimated budget for implementing these actions is similar to that of a rulemaking process, which can vary from $500,000 to upwards of $10 million over two years. The costs of the updates could be offset by reallocating unspent funds from a previous year’s budget allocation for workforce training and readiness programs or through an appropriation from Congress that restores program measurement funding.

Conclusion

Modernizing the SOC system to reflect new and emerging occupations is essential for efficient workforce planning, economic growth, and national policy implementation. This update will provide local communities, employers, workers, and federal agencies with accurate data, ensuring efficient use of federal resources and alignment with the Administration’s economic priorities. By prioritizing these updates, the Administration can enhance job tracking, workforce strategies, and data accuracy, supporting investments that drive economic competitiveness.


Frequently Asked Questions
How will updating SOC codes help American workers compete globally?

Modernized SOC codes will ensure that American workers are trained and prepared for cutting-edge roles in technology and green sectors, helping the U.S. maintain its competitive edge in the global economy.

Why update SOCs for emerging roles when federal hiring doesn’t mandate SOC use?

While SOC codes are not required for federal hiring, they play a crucial role in tracking labor trends, planning workforce programs, and informing grant requirements. Accurate job data from updated SOCs will enhance federal and private sector collaboration, helping to shape initiatives that drive economic growth and efficiency.

What steps will be taken to ensure the updated SOC system supports sustainable job creation?

The proposed updates include advocating for the reinstatement and sustainable funding of job programs impacted by sequestration. Additionally, the updates will encourage the development of certification and training programs aligned with the new SOC classifications, supporting workforce readiness and career advancement in emerging sectors. These steps will contribute to sustainable job creation and economic growth.

Polar Infrastructure and Science for National Security: A Federal Agenda to Promote Glacier Resilience and Strengthen American Competitiveness

Polar regions – both the Arctic and Antarctic – are an important venue for strategic competition and loom as emerging national security challenges. As recognized during the first Trump Administration, ceding U.S. leadership at the poles threatens our future and emboldens our adversaries. The recent actions that the People’s Republic of China (PRC) and Russia have taken in the Arctic undermine regional stability, as both nations aim to take economic advantage of newly accessible resources such as oil, invest in research with dual military-civil applications, and take on an increasingly dominant role in regional governance.

The Antarctic is the next security frontier. U.S. leadership in the Antarctic is eroding as U.S. investments dwindle and nations, including the PRC, establish new outposts and operations there. Simultaneously, polar change threatens to upend U.S. coastal communities and global security, as ice melt and glacier collapse could lead to catastrophic sea level rise, fueling extreme property loss, conflict, and mass migration. Glacier resilience, defined as the capacity of glacier systems to withstand and adapt to climate-driven stressors while maintaining their critical functions, is essential to mitigating these risks. Despite a longstanding treaty, the United States and our strategic partners have woefully underinvested in the development of tools, technologies, models, and monitoring infrastructure to inform glacial management, enable solutions to mitigate risks, and shape U.S. security and foreign policy.

Building on the prior Trump Administration’s plans for additional polar security icebreakers to protect national interests in the Arctic and Antarctic regions, Congress and the incoming Trump Administration should work together to reinforce the U.S. position in the regions, recognizing the role Antarctica in particular may have in a changing global order and its significance for sea-level rise.

We propose a Polar/Antarctic strategy for the incoming Trump Administration to enhance U.S. national security, promote American leadership, deter our adversaries, and prevent disastrous ice sheet collapse. This strategy involves research and development of engineering methods to slow the loss of glaciers and the rate of sea-level rise by reducing the forces that drive glacier change and the sensitivity of glaciers to those forces. Consistent with and reinforcing the Antarctic Treaty System, this plan would focus investment across four areas:

Challenge and Opportunity 

The threat of sea-level rise is often seen as manageable, with increases of centimeters or inches. However, projections indicate that the collapse of the Thwaites Glacier and West Antarctic Ice Sheet could result in “doomsday scenarios” characterized by sea-level rises of as much as 10 feet worldwide. The probabilities of these occurrences have increased recently. If these possibilities became reality, rising seas would inundate major U.S. coastal regions and cities that are home to 12 million people and trillions of dollars of property and infrastructure. Globally, hundreds of millions of people would be at risk, fueling mass migration, refugee crises, and security challenges that threaten U.S. interests. Protecting Thwaites and the Antarctic Ice Sheet from collapse is crucial for a manageable future, making glacial resilience essential in any domestic and international national security strategy.  

There are many ideas about how to slow glacial collapse and protect the ice to hold back sea level rise; however, this research and technology development receives almost no federal funding. We must take this threat seriously and dramatically ramp up our infrastructure at the poles to monitor glaciers and demonstrate new technologies to protect the West Antarctic Ice Sheet.

While the Antarctic Treaty prohibits military activity in the region, it allows scientific research and other activities that could have military applications. At the same time, U.S. polar research infrastructure and funding are woefully insufficient to support the necessary innovation and operations required to address the sea-level rise challenge and maintain American leadership. Federal science funding agencies including the National Science Foundation (NSF), National Oceanic and Atmospheric Administration (NOAA), and National Aeronautics and Space Administration (NASA) play a critical role in supporting research in the Antarctic and on glacial processes. While these efforts have yielded some tools and understanding of glacial dynamics, there is no comprehensive, sustained approach to learn about and monitor changes to the ice sheets over time or to develop and test new strategies for glacial resilience. As a result, U.S. scientific infrastructure in the Antarctic has been largely neglected. The data produced by prior funded Antarctic studies have been insufficient to build an authoritative projection model of sea-level rise, a necessity to inform Antarctic management and the adaptation measures required by decision makers, coastal communities, and other stakeholders. 

A glacial resilience initiative that leverages space-based commercial and governmental satellite systems, long-duration unmanned aerial radar capabilities, and other observational capabilities would revitalize American leadership in polar regions at a critical time, as the PRC and other adversaries increase their polar presence – particularly in the Antarctic.

Plan of Action 

To strengthen glacial resilience and U.S. polar security, the next Administration should launch a comprehensive initiative to build critical world-leading infrastructure, promote innovation in glacial resilience technologies, enhance research on glacial dynamics and monitoring, and pursue policies that preserve U.S. national security interests. The recommendations below address each of these areas.

Develop and maintain world-leading critical infrastructure for glacial monitoring and resilience research and innovation.

NSF and the Air Force currently maintain operations for the U.S. in the Antarctic, but these facilities are in such a deplorable state that NSF has recently canceled all new field research and indefinitely delayed high priority experiments slated to be built at the South Pole. As the primary physical presence for the U.S. government, this infrastructure must be upgraded so that NSF can support scaled research and monitoring efforts. 

Expand glacial monitoring capabilities, utilizing space, air, and on-ice methods through NASA, NOAA, DOD, and NSF.

This effort should maximally leverage existing commercial and governmental space-based assets and deploy other air-based, long-duration unmanned aerial capabilities. The next Administration should also create national glacier models to provide detailed and timely information about glacier dynamics and sea-level rise to inform coastal planning and glacial resilience field efforts.

Pilot development and demonstration of glacier resilience technologies.

There is currently extremely limited investment in technology development to enhance glacier resilience. Agencies such as NSF and the Defense Advanced Research Projects Agency (DARPA) should support innovation and grand challenges to spur development of new ideas and technologies. The PRC is already investing in this kind of research, and the United States and our strategic partners are far behind in ensuring we are the ones to develop the technology and set the standards for its use. 

Support a robust research program to improve understanding of glacier dynamics.

To address critical gaps and develop a coordinated, sustained approach to glacier research, the U.S. must invest in basic science to better understand ice sheet dynamics and destabilization. Investments should include field research as well as artificial intelligence (AI), modeling, and forecasting capabilities through NSF, NASA, DOD, and NOAA. These efforts rely on the infrastructure discussed above and will be used to better develop future infrastructure, creating a cycle of innovation that supports the U.S. operational presence and leadership and giving us a comparative advantage over our adversaries.  

Protect national security interests and maintain American leadership by promoting glacial resilience in international contexts.

There is a major void in international polar discussions about the importance of glacial resilience and extremely limited attention to developing technologies that would prevent ice sheet collapse and catastrophic sea level rise. The next Administration should play a leadership role in advancing global investment, ensuring that our allies contribute to this effort and the U.S. alone does not bear its costs. International research collaboration with our strategic allies will prevent the PRC and other competitors from expanding their influence and from surpassing the United States as the leader in Antarctic and polar research and innovation.

Support a new legislative package focused on advancing critical Antarctic research.

The Arctic Research and Policy Act of 1982 provides “for a comprehensive national policy dealing with national research needs and objectives in the Arctic.” Modeled on the Arctic Research and Policy Act, a new legislative package could provide a comparable national framework for Antarctic research.

This legislation would elevate Antarctic research as a crucial part of a national security strategy and ensure the United States is prepared to confront the risks and consequences of Antarctic ice sheet collapse.

Conclusion 

The U.S. faces an important moment to address polar challenges that threaten both national security and global stability. As adversaries like the PRC and Russia expand their presence and influence in the Arctic and Antarctic, the U.S. must reclaim leadership. Glacial resilience is a strategic imperative, given the catastrophic risks of sea-level rise and its impacts on coastal communities, migration, and security. By prioritizing investment in polar infrastructure, advancing cutting-edge technologies to mitigate glacial collapse, and strengthening international collaboration, the U.S. can lead a global effort to safeguard polar regions. A robust, coordinated strategy will bolster American interests, deter adversaries, and build resilience against one of the most pressing challenges we face today.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
How much will this proposal cost? Why is it worth the investment?

We estimate a budget of $100 million annually for the full approach, including investments in observational technologies, modeling efforts, and infrastructure improvements. This number includes funding for critical satellite programs, field research campaigns, and enhanced data modeling. The investment supports national security by addressing one of the most pressing threats to U.S. stability – sea-level rise. Accelerating glacier melt and the resulting sea-level rise could displace millions of people, destabilize coastal economies, and threaten critical infrastructure, including military bases and ports. This work enhances our nation’s ability to forecast and prepare for these threats, as well as our ability to mitigate glacial melt in ways that safeguard lives, property, and national interests.

What justifies forecasting and mitigating the risk of catastrophic sea-level rise vs. other possible options?

This course of action prioritizes early investment in observational technology, predictive modeling, and infrastructure development because these elements form the foundation of any meaningful response to the threat of catastrophic sea-level rise. The policy aligns with national security priorities by focusing on capabilities that enable accurate forecasting and risk assessment. Waiting to act risks missing critical warning signs of glacial destabilization and undermines the nation’s preparedness. The recommended approach emphasizes proactive investment, which is far less expensive than responding to catastrophic sea-level rise.

How does this proposal enhance U.S. national security?

This proposal addresses the risks posed by catastrophic sea-level rise, which threaten critical infrastructure, economic stability, and global geopolitical stability. Specifically:

  • Many U.S. military installations, including naval bases and strategic ports, are located in coastal areas or on low-lying islands vulnerable to sea-level rise. Improved forecasting will allow DOD to proactively adapt to sea-level rise.

  • Sudden and severe sea-level rise could force millions of people to migrate, creating humanitarian crises and destabilizing regions critical to U.S. interests. Early warning and mitigation strategies could reduce the likelihood of mass displacement and conflict.

  • The Arctic and Antarctic are becoming areas of increased geopolitical competition. This proposal is an opportunity for the U.S. to strengthen global influence while maintaining strategic advantages in these regions.

Why focus on glaciers specifically, rather than other climate-related risks?

Glaciers, particularly the Thwaites Glacier in West Antarctica, represent one of the most immediate and uncontrollable contributors to sea-level rise. Destabilized marine ice sheets are capable of causing rapid sea-level rise, threatening millions of coastal residents and vital infrastructure. Unlike other areas of climate science, the dynamics of glacial flow and melt are poorly understood and underfunded. With targeted investments, we can significantly improve our ability to monitor, model, and mitigate glacial contributions to sea-level rise.

What lessons can we learn from past initiatives addressing climate threats?

  • Initiatives like hurricane forecasting and flood mitigation have demonstrated that early investments in forecasting technologies save billions in recovery costs and reduce loss of life.

  • Programs such as NASA’s Earth Observing System and NOAA’s disaster resilience initiatives show that partnerships between federal agencies, academia, and the private sector drive innovation and amplify impact.

  • Delays in addressing risks like wildfires and droughts have highlighted the high cost of inaction, underscoring the need to move quickly and decisively in tackling sea-level rise threats.

Micro-ARPAs: Enhancing Scientific Innovation Through Small Grant Programs

The National Science Foundation (NSF) has long supported innovative scientific research through grant programs. Among these, the EAGER (Early-concept Grants for Exploratory Research) and RAPID (Rapid Response Research) grants are crucial in fostering early-stage questions and ideas. This memo proposes expanding and improving these programs by addressing their current limitations and leveraging the successful aspects of their predecessor program, the Small Grants for Exploratory Research (SGER) program, and other innovative funding models like the Defense Advanced Research Projects Agency (DARPA).

Current Challenges and Opportunities

The landscape of scientific funding has always been a balancing act between supporting established research and nurturing new ideas. Over the years, the NSF has played a pivotal role in maintaining this balance through various grant programs. One way they support new ideas is through small, fast grants. The SGER program, active from 1990 to 2006, provided nearly 5,000 grants, with an average size of about $54,000. This program laid the groundwork for the current EAGER and RAPID grants, which took SGER’s place and were designed to support exploratory and urgent research, respectively. Using the historical data, researchers analyzed the effectiveness of the SGER program and found it wildly effective, with “transformative research results tied to more than 10% of projects.” The paper also found that the program was underutilized by NSF program officers, leaving open questions about how such an effective and relatively inexpensive mechanism was being overlooked.

Did the NSF learn anything from the paper? Probably not enough, according to the data.

In 2013, the year the SGER paper was published, roughly 2% of total NSF grant funding went towards EAGER and RAPID grants (which translated to more than 4% of the total NSF-funded projects that year). Except for a spike in RAPID grants in 2020 in response to the COVID-19 pandemic, there has been a steady decline in the number, dollar amount, and budget share of EAGER and RAPID grants over the ensuing decade. Over the past few years, EAGER and RAPID have barely exceeded 1% of the award budget. Despite the proven effectiveness of these funding mechanisms and their relative affordability, the rate of small, fast grantmaking has stagnated over the past decade.
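The gap between the dollar share and the project share follows from average award size: because EAGER and RAPID awards are much smaller than typical NSF grants, a modest share of dollars buys an outsized share of projects. A minimal sketch of this arithmetic, using hypothetical round numbers rather than actual NSF budget figures:

```python
# Illustrative only: how a ~2% share of grant dollars can translate into a
# roughly twice-as-large share of funded projects when small, fast awards
# average far less than regular awards. All figures are hypothetical.

total_budget = 7_000_000_000      # hypothetical total annual grant dollars
avg_regular_award = 500_000       # hypothetical average regular award size
avg_small_award = 250_000         # hypothetical average EAGER/RAPID award size

small_dollar_share = 0.02         # 2% of dollars to small, fast grants
small_dollars = total_budget * small_dollar_share
regular_dollars = total_budget - small_dollars

small_projects = small_dollars / avg_small_award
regular_projects = regular_dollars / avg_regular_award
project_share = small_projects / (small_projects + regular_projects)

print(f"{small_dollar_share:.0%} of dollars -> {project_share:.1%} of projects")
# prints: 2% of dollars -> 3.9% of projects
```

With these assumed numbers, halving the average award size roughly doubles the project share per dollar, which is why small grant programs punch above their budgetary weight.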

There is a pressing need to support more high-risk, high-reward research through more flexible and efficient funding mechanisms. Increasing the small, fast grant capacity of the national research programs is an obvious place to improve, given the results of the SGER study and the fact that small grants are easier on the budget.

The current EAGER and RAPID grant programs, while effective, face administrative and cultural challenges that limit their scalability and impact. The reasons for their underuse remain poorly understood, but anecdotal insights from NSF program officers offer clues. The most plausible explanation is also the simplest: It’s difficult to prioritize small grants while juggling larger ones that carry higher stakes and greater visibility. While deeper, formal studies could further pinpoint the barriers, the lack of such research should not hinder the pursuit of bold, alternative strategies—especially when small grant programs offer a rare blend of impact and affordability.

Drawing inspiration from the ARPA model, which empowers program managers with funding discretion and contracting authority, there is an opportunity to revolutionize how small grants are administered. The ARPA approach, characterized by high degrees of autonomy and focus on high-risk, high-reward projects, has already inspired successful initiatives beyond its initial form in the Department of Defense (DARPA), like ARPA-E for energy and ARPA-H for health. A similar “Micro-ARPA” approach — in which dedicated, empowered personnel manage these funds — could be transformative for ensuring that small grant programs within NSF reach their full potential. 

Plan of Action

To enhance the volume, impact, and efficiency of small, fast grant programs, we propose the following:

  1. Establish a Micro-ARPA program with dedicated funding for small, flexible grants: The NSF should allocate 50% of the typical yearly funding for EAGER/RAPID grants — roughly $50–100 million per year — to a separate dedicated fund. This fund would use the existing EAGER/RAPID mechanisms for disbursing awards but be implemented through a programmatically distinct Micro-ARPA model that empowers dedicated project managers with more discretion and reduces the inherent tension between use of these streamlined mechanisms and traditional applications.
    1. By allocating approximately 50% of the current spend to this fund and using the existing EAGER/RAPID mechanisms within it, this fund would be unlikely to pull resources from other programs. It would instead set a floor for the use of these flexible frameworks while continuing to allow for their use in the traditional program-level manner when desired.
  2. Establish a Micro-ARPA program manager (PM) role: As compared to the current model, in which the allocation of EAGER/RAPID grants is a small subset of broader NSF program director responsibilities, Micro-ARPA PMs (who could be lovingly nicknamed “Micro-Managers”) should be hired or assigned within each directorate to manage the dedicated Micro-ARPA budgets. Allocating these small, fast grants should be their only job in the directorate, though it can and should be a part-time position per the needs of the directorate.
    1. Given the diversity of awards and domains that this officer may consider, they should be empowered to seek the advice of program-specific staff within their directorate as well as external reviewers when they see fit, but should not be required to make funding decisions in alignment with programmatic feedback. 
    2. Applications to the Micro-ARPA PM role should be competitive and open to scientists and researchers at all career levels. Based on our experience managing these programs at the Experiment Foundation, there is every reason to suspect that early-career researchers, community-based researchers, or other innovators from nontraditional backgrounds could be as good as or better than experienced program officers. Given the relatively low cost of the program, the NSF should open this role to a wide variety of participants to learn and study the outcomes.
  3. Evaluate: The agency should work with academic partners to design and implement clear metrics—similar to those used in the paper that evaluated the SGER program—to assess the programs’ decision-making and impacts. Findings should be regularly compiled and circulated to PMs to facilitate rapid learning and improvement. Based on evaluation of this program, and comparison to the existing approach to allocating EAGER/RAPID grants, relative funding quantities between the two can be reallocated to maximize scientific and social impact. 
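The scale of the proposed carve-out can be sanity-checked with back-of-the-envelope arithmetic; the $50–100 million range above implies roughly $100–200 million in typical yearly EAGER/RAPID funding, and the average award size used below is an illustrative assumption:

```python
# Back-of-the-envelope check of the proposed Micro-ARPA carve-out:
# 50% of typical yearly EAGER/RAPID funding, stated as roughly $50-100M.
# The average award size is a hypothetical assumption for illustration.

eager_rapid_low, eager_rapid_high = 100e6, 200e6  # implied annual EAGER/RAPID spend
carve_out = 0.5

fund_low = eager_rapid_low * carve_out    # $50M
fund_high = eager_rapid_high * carve_out  # $100M

avg_award = 150_000  # hypothetical average small, fast award
awards_low = int(fund_low // avg_award)
awards_high = int(fund_high // avg_award)

print(f"Dedicated fund: ${fund_low/1e6:.0f}M-${fund_high/1e6:.0f}M, "
      f"roughly {awards_low}-{awards_high} awards per year")
# prints: Dedicated fund: $50M-$100M, roughly 333-666 awards per year
```

Even under these assumptions, the dedicated fund supports hundreds of awards per year from a budget that is a small fraction of total NSF grant spending.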

Benefits

The proposed enhancements to the small grant programs will yield several key benefits:

  1. Increased innovation: By funding more early-stage, high-risk projects, we can accelerate scientific breakthroughs and technological advancements, addressing global challenges more effectively.
  2. Support for early-career scientists: Expanded grant opportunities will empower more early-career researchers to pursue innovative ideas, fostering a new generation of scientific leaders.
  3. Experience opportunity for program managers: Running Micro-ARPAs will provide an opportunity for new and emerging program manager talent to train and develop their skills with relatively smaller amounts of money.
  4. Platform for metascience research: The high volume of new Micro-ARPA PMs will create an opportunity to study the effective characteristics of program managers and translate them into insights for larger ARPA programs.
  5. Administrative efficiency: A streamlined, decentralized approach will reduce the administrative burden on both applicants and program officers, making the grant process more agile and responsive. Speedier grants could also help the NSF achieve its stated dwell-time goal of responding to 75% of proposals within six months, which it has failed to do consistently in recent years.

Conclusion

Small, fast grant programs are vital to supporting transformative research. By adopting a more flexible, decentralized model, we can significantly enhance their impact. The proposed changes will foster a more dynamic and innovative scientific ecosystem, ultimately driving progress and addressing urgent global challenges.



Frequently Asked Questions
Do small grants really matter?

Absolutely. The research supports it, but the stories bring it to life. Ask any scientist about the first grant they received for their own work, and you’ll often hear about a small, pivotal award that changed everything. These grants may not make headlines, but they ignite careers, foster innovation, and open doors to discovery.

Can this be done by reallocating the existing budget and under existing authority?

Almost certainly within the existing budget. As for authority, it’s theoretically possible but politically fraught. NSF program officers already have the discretion to use RAPID and EAGER grants as they see fit, so in principle, a program officer could be directed to use only those mechanisms. That mandate would essentially transform their role into a Micro-ARPA program manager. The real challenge lies in the culture and practice of grant-making. There’s a reason that DARPA operates independently from the rest of the military branches’ research and development infrastructure.

Why would dedicated staffing and a Micro-ARPA program structure overcome administrative challenges?

In a word: focus. Program officers juggle large, complex grants that demand significant time and resources. Small grants, though impactful, can get lost in the shuffle. By dedicating staff to exclusively manage these smaller, fast grants, we create the conditions to test an important hypothesis: that administrative burden and competing priorities, not lack of interest, are the primary barriers to scaling small grant programs. It’s about clearing the runway so these grants can truly take off.

Why not just set goals for greater usage of EAGER and RAPID?

Encouraging greater use of EAGER and RAPID is a good start, but it’s not enough. We need to think bigger, trying alternative structures and dedicated programs that push the boundaries of what’s possible. Incremental change can help, but bold experiments are what transform systems.