Saving Billions on the US Nuclear Deterrent
The United States Air Force is replacing its current arsenal of Minuteman III intercontinental ballistic missiles (ICBMs) with an entirely new type of ICBM, known as Sentinel (previously known as the Ground-Based Strategic Deterrent or GBSD). Sentinel’s price tag continues to grow beyond initial expectations, with the program on track to become one of the country’s most expensive nuclear modernization projects ever.
As it stands, the Sentinel program is risky, draws funding away from more urgent priorities, and will exacerbate the Pentagon’s budget crisis. A better approach would be to life-extend a portion of the current ICBM force (the Minuteman III) in the near term in order to spread the costs of nuclear modernization out over the longer term. This approach will ensure that the United States can field a capable ICBM force on a continuous basis without compromising other critical security priorities.
Challenge and Opportunity
The Sentinel ICBM program involves (1) a like-for-like replacement of the 400 Minuteman III ICBMs currently deployed across Colorado, Montana, Nebraska, North Dakota, and Wyoming, (2) the creation of a full set of test-launch missiles, and (3) upgrades to launch facilities, launch control centers, and other supporting infrastructure. Sentinel would keep ICBMs in the United States’ nuclear arsenal until at least 2075.
Unfortunately, the Sentinel program is riddled with challenges and flawed assumptions that have significantly increased both its cost and risk, and that will continue to do so over the coming years, as described below.
Sentinel’s price tag continues to grow beyond initial expectations
The Sentinel program’s ever-increasing price tag indicates that the program is not nearly as cost-effective as initially projected. In 2015, the Air Force issued a preliminary estimate that Sentinel (then “GBSD”) would cost $62.3 billion to acquire. One year later, the Pentagon’s Cost Assessment and Program Evaluation (CAPE) office projected that Sentinel could more realistically cost $85 billion, an increase of more than a third over the Air Force’s estimate. In August 2020, CAPE’s projected Sentinel acquisition cost jumped again to $95.8 billion, with total life-cycle costs reaching as high as $263.9 billion. In October 2020, the Pentagon reported that CAPE’s latest life-cycle estimate was $1.9 billion greater than its 2016 estimate, but did not explain why the estimate had grown. In January 2024, the Air Force notified Congress that the Sentinel program would cost 37 percent more than projected and take at least two years longer than estimated, an overrun constituting a “critical” breach under the Nunn-McCurdy Act. The overrun put Sentinel’s anticipated cost at approximately $130 billion. In July 2024, upon certifying the Sentinel program to continue after its Nunn-McCurdy breach, the Pentagon announced a new CAPE estimate of $140.9 billion, an 81% increase over the program’s 2020 baseline.
As Sentinel matures over the coming years and schedule delays compound these cost issues, it will likely incur further cost increases. Sentinel is on track to become one of the country’s most expensive nuclear-related line items over the next decade.
Sentinel draws funding away from more urgent priorities
By its own admission, the Pentagon cannot afford all the weapons it wants to buy. In July 2020, the then-Air Force Chief of Staff, General Dave Goldfein, remarked that the Sentinel program represents “the first time that the nation has tried to simultaneously modernize the nuclear enterprise while it’s trying to modernize an aging conventional enterprise,” and added that “[t]he current budget does not allow you to do both.”
Funding tradeoffs at the Pentagon have already become apparent. In early 2020, for example, a decision to dramatically increase the budget of the National Nuclear Security Administration directly led to a Virginia-class submarine being cut from the Navy’s budget plan. Compounding the problem is the fact that the Pentagon is currently facing a “bow wave” of major expenditures. The bills for several big-ticket procurement projects—including Sentinel, the Long-Range Standoff Weapon, the F-35 fighter, the B-21 bomber, the Columbia-class ballistic missile submarine, and the KC-46A tanker—will all come due over the next decade. With growing recognition that the Pentagon simply cannot afford to foot so many major bills simultaneously, these large procurement projects have been characterized as “fiscal time bombs.”
The Sentinel program is already impacting funding of other defense programs with its latest batch of cost overruns. The Pentagon admitted in a July 2024 press release that it certified the Sentinel program to continue despite its critical cost and schedule overruns, partly because Sentinel “is a higher priority than programs whose funding must be reduced to accommodate the growth in cost of the program.” In reality, however, the Air Force does not yet know which programs will face funding reductions to offset Sentinel’s increase. General James Slife, Vice Chief of Staff of the Air Force, stated in July 2024 that because Sentinel’s cost growth will be realized several years from now, “it is a decision for down the road to decide what trade-offs we’re going to need to make in order to be able to continue to pursue the Sentinel program.”
With these funding issues in mind, it is imperative to think carefully about whether spending $141 billion to acquire Sentinel right now makes sense. It may well be a better use of funds to focus on pressing security objectives, such as hardening U.S. command-and-control systems against cyber threats.
Life-extending part or all of the Minuteman III ICBM force—instead of moving to acquire Sentinel as quickly as possible—would constitute a cheaper and less risky option for the United States to field a viable ICBM force at New START levels for at least the next two decades. The Pentagon’s primary justification for pursuing the Sentinel program was the assumption that building an entirely new missile force from scratch would be cheaper than life-extending the Minuteman III force. This assumption stands in stark contrast to an Air Force-sponsored analysis concluding that “[a]ny new ICBM alternative will very likely cost almost two times—and perhaps even three times—more than incremental modernization of the current Minuteman III system.”
The Pentagon’s assumption also does not match historical precedent. In 2012, after the completion of a comprehensive round of Minuteman III life-extension programs, the Air Force admitted that it cost only $7 billion to turn the Minuteman III ICBMs into “basically new missiles except for the shell.” There is little public evidence to suggest that a similar round of life-extension programs would cost significantly more. Even if a new round were more expensive, the added expense would be unlikely to come anywhere close to Sentinel’s projected $141 billion acquisition cost; tripling the previous $7 billion price tag for Minuteman III upgrades would still amount to less than one-sixth of the acquisition price of Sentinel.
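As a rough back-of-the-envelope check on that comparison, here is a minimal sketch using only the figures cited above (dollar amounts are as reported and not adjusted for inflation or differences in scope):

```python
# Figures as cited in this memo; illustrative arithmetic only.
minuteman_life_extension_2012 = 7.0   # $B, reported cost of the completed life-extension round
sentinel_acquisition_2024 = 140.9     # $B, July 2024 CAPE acquisition estimate

tripled = 3 * minuteman_life_extension_2012
print(tripled)                                  # 21.0 ($B)
print(sentinel_acquisition_2024 / 6)            # ~23.5 ($B), one-sixth of Sentinel acquisition
print(tripled < sentinel_acquisition_2024 / 6)  # True: even a tripled bill stays under one-sixth
```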
If a life-extension option were pursued in lieu of Sentinel, it is likely that the Minuteman III’s critical subsystems would eventually need to be replaced. Replacement appears to be technologically feasible. Lieutenant General Richard Clark, the Air Force’s deputy chief of staff for strategic deterrence and nuclear integration, testified to the House Armed Services Committee in March 2019 that it would be possible to extend the lives of the Minuteman III’s propulsion and guidance systems one more time, despite his stated preference for proceeding with the GBSD. Furthermore, a 2014 RAND report commissioned by Air Force Global Strike Command found “no evidence that would necessarily preclude the possibility of long-term sustainment.” In fact, the report noted, “we found many who believed the default approach for the future is incremental modernization, that is, updating the sustainability and capability of the Minuteman III system as needed and in perpetuity.”
Plan of Action
The next administration should revise its nuclear employment guidance to accept a slightly higher threshold for risk with regard to its ICBM force. This action is critical for enabling a life-extended Minuteman III force because the Pentagon’s interest in pursuing Sentinel is largely driven by its own interpretation of presidential nuclear-employment guidance. If the Air Force believes that the Minuteman III might dip below a preset reliability threshold, then the service will push for Sentinel in order to meet the current nuclear-employment guidance.
Revising the guidance to accept a slightly higher threshold for risk would reduce the need to pursue Sentinel immediately. This revision would first be publicly reflected in the next administration’s Nuclear Posture Review, and would then be translated into policy by the Pentagon.
It is important to emphasize that (1) presidential revisions to the nuclear-employment guidance are not unusual, and (2) revising the nuclear-employment guidance would have little bearing on strategic stability. In a nuclear first strike, an adversary would still be forced to target every silo. This means that a life-extended Minuteman III force would theoretically produce the same deterrence effect as a brand-new Sentinel force.
To provide additional support for the guidance revision, the next administration could launch a National Security Council-led review of the role of ICBMs in U.S. nuclear strategy. In particular, this review would assess the feasibility and cost of a Minuteman III life-extension program. The review would also consider whether such a program could be further enabled by reducing the number of deployed ICBMs or the number of annual flight tests, or by pursuing new forms of nondestructive booster reliability testing (see FAQ for more details).
Conclusion
Life-extending the nation’s existing arsenal of Minuteman III missiles instead of immediately pursuing the Sentinel program is the best way to ensure that the United States can continue to field a capable ICBM force without sacrificing funding for other critical national-security priorities.
This course of action could buy the United States as much as twenty years of additional time to decide whether to pursue or cancel a follow-on Sentinel program, thus allowing the United States to further spread out costs and reconsider the future role of ICBMs in U.S. nuclear posture.
While accepting a higher threshold for risk with the ICBM force may sound politically difficult, in reality it has little bearing on strategic stability. The Air Force projects that a 30-year-old missile core has an estimated failure probability of 1.3%, which increases exponentially each year. As long as the expected failure rate did not climb too high, though, an adversary conducting a nuclear first strike would still have to target every silo because there would be no way of knowing which missiles were functional and which were duds. This means that a life-extended Minuteman III force would theoretically produce the same deterrence effect as a brand-new Sentinel force. Additionally, it is extremely unlikely that the United States would ever elect to launch only a small number of ICBMs in a crisis. As a result, even a 10% failure rate across all 400 launched ICBMs would still enable approximately 360 fully functional missiles to reach their targets.
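The arithmetic behind that claim is straightforward. A minimal sketch follows, assuming each missile fails independently with the same probability (a simplification; the cited 1.3% figure applies to a 30-year-old core):

```python
deployed = 400  # New START-level deployed ICBM force

for failure_rate in (0.013, 0.05, 0.10):
    expected_functional = deployed * (1 - failure_rate)
    print(f"{failure_rate:.1%} failure rate -> ~{expected_functional:.0f} functional missiles")
# 1.3% -> ~395, 5.0% -> ~380, 10.0% -> ~360
```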
Testing is critical to ensure that the Minuteman III missiles continue to function as designed if they are life-extended. However, there is a limited quantity of Minuteman III boosters that can be used as test assets. This problem was identified early in the Sentinel acquisition process by both internal and external analysts, who noted that increasing the average ICBM test rate from three to four and a half test firings per year – as was done in 2017 – would inevitably exhaust the surplus boosters and lead to a depletion of the currently-deployed ICBM force around 2040. There are several ways to overcome this obstacle without building a brand-new missile force.
One option would be to lower the average test rate from four and a half tests per year back down to three. If the Air Force were prepared to accept a slight additional risk of booster failure (given that, as discussed above, doing so would have no discernible effect on strategic stability), then the number of tests per year could realistically be decreased. To that end, a 2017 Center for Strategic and International Studies report estimated that if the United States chose to re-core its ICBMs and move the firing rate back to three tests per year, then it would be possible to maintain the Minuteman III force at New START levels (400 deployed ICBMs) until 2050.
Another option would be to reduce the number of deployed ICBMs. Again, doing so would not meaningfully affect deterrence but would make a significant quantity of additional missiles available for testing purposes. For example, if the Pentagon reduced its deployed ICBM force from 400 to 300 missiles, it could maintain the current testing rate of four and a half tests per year without the missile inventory dropping below 300 until approximately 2060. A portion of the missiles used for testing could also be converted into commercial or governmental space launch vehicles, thus eliminating the requirement to eventually “re-core” them to ICBM standards.
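The attrition arithmetic behind these first two options can be illustrated with a minimal sketch. The starting inventory below is a hypothetical round number chosen only so that the output roughly matches the depletion dates cited above (around 2040, 2050, and 2060); it is not an official booster count, and the simple linear model ignores re-coring, aging, and other real-world factors:

```python
def depletion_year(deployed_floor, tests_per_year, start_year=2017, usable_boosters=500):
    """Year the booster inventory falls to the deployed requirement (hypothetical inputs)."""
    return start_year + (usable_boosters - deployed_floor) / tests_per_year

print(round(depletion_year(400, 4.5)))  # ~2039: keep 400 deployed at 4.5 tests/year
print(round(depletion_year(400, 3.0)))  # ~2050: keep 400 deployed at 3 tests/year
print(round(depletion_year(300, 4.5)))  # ~2061: keep 300 deployed at 4.5 tests/year
```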
A third option would be for the Air Force to explore nondestructive methods for testing the reliability of its solid rocket motors. George Perkovich and Pranay Vaddi suggest in their 2021 “Model Nuclear Posture Review” that this could be achieved through technological advances in ultrasound and computed tomography. The Air Force could also consider adapting the Navy’s nondestructive-testing techniques – which involve sending a probe into the bore to measure the elasticity of the propellant – to evaluate the reliability of the Minuteman III force. As Steve Fetter and Kingston Reif noted in 2019, these types of nondestructive testing methodologies “would permit the lifetime of each motor to be estimated on an individual basis. Rather than retire all motors at an age when a small percentage are believed to be no longer reliable, only those particular motors with measurements indicating unacceptable aging could be retired.” Nondestructive testing may be the most effective option, because if successful it would eliminate the attrition problem altogether.
Despite the Pentagon’s repeated claims that the Minuteman III ICBM will become “unviable” after 2030, the Minuteman III’s critical subsystems remain highly reliable with age. There is little evidence to suggest that this will change within the next decade. The Minuteman III’s guidance and propulsion modules were modernized during the 2000s and continue to perform successfully during tests.
A March 2020 Air Force Nuclear Weapons Center briefing to industry partners also acknowledged that the useful life of the Minuteman III force could be extended with “better NS-50 [guidance module] failure data,” because “current age-out on guidance is an engineering ‘best guess’ with no current data.” This suggests that the Air Force’s prediction about the post-2030 “unviability” of these subsystems is based on little actual evidence.
Importantly, the 2030 benchmark for the Minuteman III’s “unviability” appears to have been selected by Congress, not by the Air Force. A consequential amendment inserted into the FY 2007 National Defense Authorization Act directed the Secretary of the Air Force to “modernize Minuteman III intercontinental ballistic missiles in the United States inventory as required to maintain a sufficient supply of launch test assets and spares to sustain the deployed force of such missiles through 2030.” This amendment ultimately had a significant impact on the timeline of Sentinel because, as Air Force historian David N. Spires describes, “Although Air Force leaders had asserted that incremental upgrades, as prescribed in the analysis of land-based strategic deterrent alternatives, could extend the Minuteman’s life span to 2040, the congressionally mandated target year of 2030 became the new standard.”
It is telling that the Navy is not currently contemplating the purchase of a brand-new missile to replace its current arsenal of Trident submarine-launched ballistic missiles, and instead plans to conduct a second life-extension to keep them in service until 2084. This life-extension is enabled in large part by the Navy’s unique nondestructive method of testing its boosters, described above. In January 2021, Vice Admiral Johnny Wolfe Jr., the Navy’s Director for Strategic Systems Programs, remarked that “solid rocket motors, the age of those we can extend quite a while, we understand that very well.”
To demonstrate this fact, in 2015 the Navy conducted a successful Trident SLBM flight test using the oldest 1st-stage solid rocket motor flown to date (over 26 years old), as well as 2nd- and 3rd-stage motors that were 22 years old. Rather than replace these missiles as they exceed the planned design life of 25 years, the Navy stated in 2015 that they “are carefully monitoring the effects of age on our strategic weapons system and continue to perform life extension and maintenance efforts to ensure reliability.”
Rather than conduct similar life-extension operations, the Air Force has elected to completely replace its Minuteman III force with the brand-new, highly expensive Sentinel.
A 2016 report to Congress reveals that the Air Force baked multiple flawed assumptions into its cost-assessment process, the most influential of which was the presumption that the United States would continue deploying 400 ICBMs until 2075. However, as researchers from the Carnegie Endowment for International Peace explained in a January 2021 report, “Basing analysis on a straight-line requirement projected all the way to 2075 practically predetermines the outcome.” Rather than prematurely selecting these benchmarks, the Pentagon’s analysis could have considered which options were most cost-effective under a variety of circumstances.
In reality, ICBM force posture is neither sacred nor immutable, and there is little security rationale behind the Pentagon’s selection of the number 400 and the year 2075. The year 2075 is a relatively arbitrary timeframe that is not codified in either the Nuclear Posture Review or in other key strategic documents. Moreover, a 2013 inter-agency review—featuring the participation of the Department of State, the Department of Defense, the National Security Council, the intelligence community, the Joint Chiefs of Staff, U.S. Strategic Command, and then-Vice President Joe Biden’s office—ultimately found that U.S. deterrence requirements could be met by reducing U.S. nuclear forces by up to one-third.
Yet despite their lack of strategic rationale, these pre-selected force requirements and exaggerated timelines heavily bias the Pentagon’s cost-assessment process in favor of Sentinel. In particular, if the Pentagon had selected a different ICBM retention timeline – 2050, for example, or even 2100 – then a revised cost assessment would have suggested that life-extending the Minuteman III force would be significantly more cost-effective than building an entirely new Sentinel missile force from scratch.
How to Prompt New Cross-Agency and Cross-Sector Collaboration to Advance Learning Agendas
The 2018 Foundations for Evidence-Based Policymaking Act (Evidence Act) promotes a culture of evidence within federal agencies. A central part of that culture is new collaboration between decision-makers and those with diverse forms of expertise inside and outside the federal government. The challenge, however, is that new cross-agency and cross-sector collaborative relationships don’t always arise on their own. To overcome this challenge, federal evaluation staff can use “unmet desire surveys,” an outreach tool that prompts agency staff to reflect on how the success of their programs relates to what is happening in other agencies and outside government, and on how engaging with those programs and organizations could make their work more effective. It also prompts them to consider the situation from the perspective of potential collaborators—why should they want to engage?
The unmet desire survey is an important data-gathering mechanism that provides actionable information to create new connections between agency staff and people—such as those in other federal agencies, along with researchers, community stakeholders, and others outside the federal government—who have the information they desire. Then, armed with that information, evaluation staff can use the new Evidence Project Portal on Evaluation.gov (to connect with outside researchers) and/or other mechanisms (to connect with other potential collaborators) to conduct matchmaking that will foster new collaborative relationships. Using existing authorities and resources, agencies can pilot unmet desire surveys as a concrete mechanism for advancing federal learning agendas in a way that builds buy-in by directly meeting the needs of agency staff.
Challenge and Opportunity
A core mission of the Evidence Act is to foster a culture of evidence-based decision-making within federal agencies. Since the problems agencies tackle are often complex and multidimensional, new collaborative relationships between decision-makers in the federal government and those in other agencies and in organizations outside the federal government are essential to realizing the Evidence Act’s vision. Along these lines, Office of Management and Budget (OMB) implementation guidance stresses that learning agendas are “an opportunity to align efforts and promote interagency collaboration in areas of joint focus or shared populations or goals” (OMB M-19-23), and that more generally a culture of evidence “cannot happen solely at the top or in isolated analytical offices, but rather must be embedded throughout each agency…and adopted by the hardworking civil servants who serve on behalf of the American people” (OMB M-21-27).
New cross-agency and cross-sector collaborative relationships rarely arise on their own. They are voluntary, and between people who often start off as strangers to one another. Limited resources, lack of explicit permission, poor prior experiences, differing incentives, and stereotypes all make it difficult to persuade strangers to engage with each other. In addition, agency staff may not previously have spent much time thinking about how new collaborative relationships could help answer questions posed by their learning agenda, or may not be aware that accessible mechanisms exist to form such relationships. This presents an opportunity for new outreach by evaluation staff to expand staff’s sense of what kinds of collaborative relationships would be both valuable and possible.
For instance, the Department of the Interior (DOI)’s 2024 Learning Agenda asks: What are the primary challenges to training a diverse, highly skilled workforce capable of delivering the department’s mission? The DOI itself has vital historical and other contextual information for answering this question. Yet officials from other departments likely have faced (or currently face) a similar challenge, and are in a position to share what they’ve tried so far, what has worked well, and what has fallen short. In addition, researchers who study human resource development could share insights from literature, as well as possibly partner on a new study to help answer this question in the DOI context.
Each department and agency is different, with its own learning agenda, decision-making processes, capacity constraints, and personnel needs. And so what is needed are new forms of informal collaboration (knowledge exchange) and/or formal collaboration (projects with shared ownership, decision-making authority, and accountability) that foster back-and-forth interaction. The challenge, however, is that agency staff may not consider such possibilities without being prompted to do so or may be uncertain how to communicate the opportunity to potential collaborators in a way that resonates with their goals.
This memo proposes a flexible tool that evaluation staff (e.g., evaluation officers at federal agencies) can use to generate buy-in among agency staff and leadership while also promoting collaboration as emphasized in OMB guidance and in the Evidence Act. The tool, which has already proven valuable in the federal government (see FAQs), in local government, and in the nonprofit sector, is called an “unmet desire survey.” The survey measures unmet desires for collaboration by prompting staff to consider the following types of questions:
- Which learning agenda question(s) are you focused on? Is there information about other programs within the government and/or information that outside researchers and other stakeholders have that would help answer it? What kinds of people would be helpful to connect with?
- Are you looking for informal collaboration (oriented toward knowledge exchange) or formal collaboration (oriented toward projects with shared ownership, decision-making authority, and accountability)?
- What hesitations (perhaps due to prior experiences, lack of explicit permission, stereotypes, and so on) do you have about interacting with other stakeholders? What hesitations do you think they might have about interacting with you?
- Why should they want to connect with you?
- Why do you think these connections don’t already exist?
These questions elicit critical insights about why agency staff value new connections, and they are highly flexible. For instance, in the first question posed above, evaluation staff can choose to ask about new information that would be helpful for any program or only about information relevant to programs that are top priorities for their agency. In other words, unmet desire surveys need not add one more thing to staff’s plates; rather, they can be used to accelerate collaboration directly tied to current learning priorities.
Unmet desire surveys also legitimize informal collaborative relationships. Too often, calls for new collaboration in the policy sphere immediately segue into overly structured meetings that fail to uncover promising areas for joint learning and problem-solving. Meetings across government agencies are often scripted presentations about each organization’s activities, providing little insight on ways they could collaborate to achieve better results. Policy discussions with outside research experts tend to focus on formal evaluations and long-term research projects that don’t surface opportunities to accelerate learning in the near term. In contrast, unmet desire surveys explicitly legitimize the idea that diverse thinkers may want to connect only for informal knowledge exchange rather than formal events or partnerships. Indeed, even single conversations can greatly impact decision-makers, and, of course, so can more intensive relationships.
Whether the goal is informal or formal collaboration, the problem that needs to be solved is both factual and relational. In other words, the issue isn’t simply that strangers do not know each other—it’s also that strangers do not always know how to talk to one another. People care about how others relate to them and whether they can successfully relate to others. Uncertainty about relationality prevents people from interacting with others they do not know. This is why unmet desire surveys also include questions that directly measure hesitations about interacting with people from other agencies and organizations, and encourage agency staff to think about interactions from others’ perspectives.
The fact that the barriers to new collaborative relationships are both factual and relational underscores why people may not initiate them on their own. That’s why measuring unmet desire is only half the battle—it’s also important to ensure that evaluation staff have a plan in place to conduct matchmaking using the data gathered from the survey. One way is to create a new posting on the Evidence Project Portal (especially if the goal is to engage with outside researchers). A second way is to field the survey as part of a convening that already counts the development of new collaborative relationships among its goals. A third option is to directly broker connections. Regardless of which option is pursued, note that large amounts of extra capacity are likely unnecessary, at least at first. The key point is simply to ensure that matchmaking is a valued part of the process.
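As a concrete illustration of the matchmaking step, here is a minimal sketch of how survey responses could be structured and scanned for cross-agency overlap. The field names, example agencies, and topics are hypothetical assumptions for illustration, not a prescribed survey format or tool:

```python
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class SurveyResponse:
    respondent: str
    agency: str
    topics: set                       # learning-agenda questions or subject areas of interest
    collaboration: str = "informal"   # "informal" (knowledge exchange) or "formal" (shared projects)
    hesitations: list = field(default_factory=list)

def candidate_matches(responses):
    """Yield pairs of respondents from different agencies who share at least one topic."""
    for a, b in combinations(responses, 2):
        shared = a.topics & b.topics
        if shared and a.agency != b.agency:
            yield a.respondent, b.respondent, sorted(shared)

# Hypothetical example responses
responses = [
    SurveyResponse("Analyst A", "DOI", {"workforce training", "retention"}),
    SurveyResponse("Analyst B", "USDA", {"workforce training"}, collaboration="formal"),
]
for match in candidate_matches(responses):
    print(match)   # ('Analyst A', 'Analyst B', ['workforce training'])
```

In practice, evaluation staff would likely gather responses through existing survey tools; the point is simply that a small, consistent structure makes follow-on matchmaking straightforward.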
In sum, by deliberately inquiring about connections with others who have diverse forms of relevant expertise—and then making those connections anew—evaluation staff can generate greater enthusiasm and ownership among people who may not consider evaluation and evidence-building as part of their core responsibilities.
Plan of Action
Using existing authorities and resources, evaluation staff (such as evaluation officers at federal agencies) can take three steps to position unmet desire surveys as a standard component of the government’s evidence toolbox.
Step 1. Design and implement pilot unmet desire surveys.
Evaluation staff are well positioned to conduct outreach to assess unmet desire for new collaborative relationships within their agencies. While individual staff can work independently to design unmet desire surveys, it may be more fruitful to work together, via the Evaluation Officer Council, to design a baseline survey template. Individuals could then work with their teams to adapt the baseline template as needed for each agency, including identifying which agency staff to prioritize as well as the best way to phrase particular questions (e.g., regarding the types of connections that employees want in order to improve the effectiveness of their work or the types of hesitancies to ask about). Given that the question content is highly flexible, unmet desire surveys can directly accelerate learning agendas and build buy-in at the same time. Thus, they can yield tangible, concrete benefits with very little upfront cost.
Step 2. Meet unmet desires by matchmaking.
After the pilot surveys are administered, evaluation staff should act on their results. There are several ways to do this without new appropriations. One way is to create a posting for the Evidence Project Portal, which is explicitly designed to advertise opportunities for new collaborative relationships, especially with researchers outside the federal government. Another way is to field unmet desire surveys in advance of already-planned convenings, which themselves are natural places for matchmaking (e.g., the Agency for Healthcare Research and Quality experience described in the FAQs). Lastly, for new cross-agency collaborative relationships and other situations, evaluation staff may wish to engage in low-lift matchmaking on their own. Depending upon the number of people they choose to survey, and the prevalence of unmet desire they uncover, they may also wish to bring on short-term matchmakers through flexible hiring mechanisms (e.g., through the Intergovernmental Personnel Act). Regardless of which option is pursued, the key point is that matchmaking itself must be a valued part of this process. Documenting successes and lessons learned then sets the stage for using agency-specific discretionary funds to hire one or more in-house matchmakers as longer-term staff appointments.
Step 3. Collect information on successes and lessons learned from the pilot.
Unmet desire surveys can be tricky to field because they entail asking employees about topics they may not be used to thinking about. It often takes some trial and error to figure out the best ways to ask about employees’ substantive goals and their hesitations about interacting with people they do not know. Piloting unmet desire surveys and follow-on matchmaking can not only demonstrate value (e.g., the impact of new collaborative relationships fostered through these combined efforts) to justify further investment but also suggest how evaluation leads might best structure future unmet desire surveys and subsequent matchmaking.
Conclusion
An unmet desire survey is an adaptable tool that can reveal fruitful pathways for connection and collaboration. Indeed, unmet desire surveys leverage the science of collaboration by ensuring that efforts to broker connections among strangers consider both substantive goals and uncertainty about relationality. Evaluation staff can pilot unmet desire surveys using existing authorities and resources, and then use the information gathered to identify opportunities for productive matchmaking via the Evidence Project Portal or other methods. Ultimately, positioning the survey as a standard component of the government’s evidence toolbox has great potential to support agency staff in advancing federal learning agendas and building a robust culture of evidence across the U.S. government.
Yes, the Agency for Healthcare Research and Quality (AHRQ) has used unmet desire surveys several times in 2023 and 2024. Part of AHRQ’s mission is to improve the quality and safety of healthcare delivery. It has prioritized scaling and spreading evidence-based approaches to implementing person-centered care planning for people living with or at risk for multiple chronic conditions. This requires fostering new cross-sector collaborative relationships between clinicians, patients, caregivers, researchers, payers, agency staff and other policymakers, and many others. That’s why, in advance of several recent convenings with these diverse stakeholders, AHRQ fielded unmet desire surveys among the participants. The surveys uncovered several avenues for informal and formal collaboration that stakeholders believed were necessary and, importantly, informed the agenda for their meetings. Relative to many convenings, which are often composed of scripted presentations about individuals’ diverse activities, conducting the surveys in advance and presenting the results during the meeting shaped the agenda in more action-oriented ways.
AHRQ’s experience demonstrates a way to seamlessly incorporate unmet desire surveys into already-planned convenings, which themselves are natural opportunities for matchmaking. While some evaluation staff may wish to hire separate matchmakers or engage in matchmaking using outside mechanisms like the Evidence Project Portal, the AHRQ experience also demonstrates another low-lift, yet powerful, avenue. Lastly, while the majority of this memo and the FAQs focus on measuring unmet desire among agency staff, the AHRQ experience also demonstrates the applicability of this idea to other stakeholders as well.
The best place to start—especially when resources are limited—is with potential evidence champions. These are people who are already committed to answering questions on their agency’s learning agenda and are likely to have an idea of the kinds of cross-agency or cross-sector collaborative relationships that would be helpful. These potential evidence champions may not self-identify as such; rather, they may see themselves as program managers, customer-experience experts, bureaucracy hackers, process innovators, or policy entrepreneurs. Regardless of terminology, the unmet desire survey provides people who are already motivated to collaborate and connect with a clear opportunity to articulate their needs. Evaluation staff can then respond, by posting on the Evidence Project Portal or by conducting other matchmaking on their own, to stimulate new and productive relationships for those people.
The administrator should be someone with whom agency staff feel comfortable discussing their needs (e.g., a member of an agency evaluation team) and who is able to effectively facilitate matchmaking—perhaps because of their network, their reputation within the agency, their role in convenings, or their connection to the Evidence Project Portal. The latter criterion helps ensure that staff expect useful follow-up, which in turn motivates survey completion and participation in follow-on activities; it also generates enthusiasm for engaging in new collaborative relationships (as well as creating broader buy-in for the learning agenda). In some cases, it may make the most sense to have multiple people from an evaluation team surveying different agency staff or co-sponsoring the survey with agency innovation offices. Explicit support from agency leadership for the survey and follow-on activities is also crucial for achieving staff buy-in.
Survey content is meant to be tailored and agency-specific, so the sample questions can be adapted as follows:
- Which learning agenda question(s) are you focused on? Is there information about other programs within the government and/or information that outside researchers and other stakeholders have that would help answer it? What kinds of people would be helpful to connect with?
  This question can be left entirely open-ended or be focused on particular priorities and/or particular potential collaborators (e.g., only researchers, or only other agency staff).
- Are you looking for informal collaboration (oriented toward knowledge exchange) or formal collaboration (oriented toward projects with shared ownership, decision-making authority, and accountability)?
  This question may invite responses related to either informal or formal collaboration, or instead may only ask about knowledge exchange (a relatively lower commitment that may be more palatable to agency leadership).
- What hesitations (perhaps due to prior experiences, lack of explicit permission, stereotypes, and so on) do you have about interacting with other stakeholders? What hesitations do you think they might have about interacting with you?
  This question should refer to the specific types of hesitancy that survey administrators believe are most likely to arise (e.g., lack of explicit permission, concerns about saying something inappropriate, or concerns about a lack of trustworthy information).
- Why should they want to connect with you?
- Why do you think these connections don’t already exist?
  These last two questions can similarly be left broad or include a few examples to help spark ideas.
Evaluation staff may also choose to only ask a subset of the questions.
Again, the answer is agency-specific. In cases that will use the Evidence Project Portal, agency evaluation staff will take the first stab at crafting postings. In other cases, meeting the unmet desire may occur via already-planned convenings or matchmaking on one’s own. Formalizing this duty as a part of one or more people’s official responsibilities sends a signal about how much this work is valued. Exactly who those people are will depend on the agency’s structure, as well as on whether there are already people in a given agency who see matchmaking as part of their job. The key point is that matchmaking itself should be a valued part of the process.
While unmet desire surveys can be done any time and on a continuous basis, it is best to field them when there is either an upcoming convening (which itself is a natural opportunity for matchmaking) or there is identified staff capacity for follow-on matchmaking and employee willingness to build collaborative relationships.
Many evaluation officers and their staff are already forming collaborative relationships as part of developing and advancing learning agendas. Unmet desire surveys place explicit focus on what kinds of new collaborative relationships agency staff want to have with staff in other programs, either within their agency/department or outside it. These surveys are designed to prompt staff to reflect on how the success of their program relates to what is happening elsewhere and to consider who might have information that is relevant and helpful, as well as any hesitations they have about interacting with those people. Unmet desire surveys measure both substantive goals as well as staff uncertainty about interacting with others.
Better Hires Faster: Leveraging Competencies for Classifications and Assessments
A federal agency takes over 100 days on average to hire a new employee — with significantly longer time frames for some positions — compared to 36 days in the private sector. Factors contributing to extended timelines for federal hiring include (1) difficulties in quickly aligning position descriptions with workforce needs, and (2) opaque and poor processes for screening applicants.
Fortunately, federal hiring managers and HR staffing specialists already have many tools at their disposal to accelerate the hiring process and improve quality outcomes – to achieve better hires faster. Inside and outside their organizations, agencies are already starting to share position descriptions, job opportunity announcements (JOAs), assessment tools, and certificates of eligibles from which they can select candidates. However, these efforts are largely piecemeal and dependent on individual initiative rather than part of a coordinated approach that can overcome pervasive federal hiring challenges.
The Office of Personnel Management (OPM), the Office of Management and Budget (OMB), and the Chief Human Capital Officers (CHCO) Council should integrate these tools into a technology platform that makes it easy to access and implement effective hiring practices. Such a platform would alleviate unnecessary burdens on federal hiring staff, transform the speed and quality of federal hiring, and bring trust back into the federal hiring system.
Challenge and Opportunity
This memo focuses on opportunities to improve two stages in the federal hiring process: (1) developing and posting a position description (PD), and (2) conducting a hiring assessment.
Position Descriptions. Though many agencies require managers to review and revise PDs annually during performance reviews, this requirement often goes unheeded. Furthermore, in occupations where job skills change rapidly – IT, scientific disciplines whose practice evolves quickly (e.g., meteorology), or analytical fields reshaped by new technologies (e.g., data analytics) – the job skills and competencies embedded in PDs fall out of date even faster.
When a hiring manager has an open position, a current PD for that job is necessary to proceed with the Job Opportunity Announcement (JOA)/posting. When the PD is not current, the hiring manager must work with an HR staffing specialist to determine the necessary revisions. If the revisions are significant, an agency classification specialist is engaged. The specialist conducts interviews with hiring managers and subject-matter experts and/or performs deeper desk audits, job task analyses, or other evaluations to determine the additional or changed job duties. Because classifiers may apply standards in different ways and rate the complexity of a position differently, a hiring manager can rarely predict how long the revision process will take or what the outcome will be. All this delays and complicates the rest of the hiring process.
Hiring Assessments. Despite a 2020 Executive Order and other directives requiring agencies to engage in skills-based hiring, agencies too often still use applicant self-certification on job skills as a primary screening method. This frequently results in certification lists of candidates who do not meet the qualifications to do the job in the eyes of hiring managers. Indeed, a federal hiring manager cannot find a qualified candidate from a certified list approximately 50% of the time when only a self-assessment questionnaire is used for screening. There are alternatives to self-certification, such as writing samples, multiple-choice questions, exercises that test for particular problem-solving or decision-making skills, and simulated job tryouts. Yet hiring managers and even some HR staffing specialists often don’t understand how assessment specialists decide what methods are best for which positions – or even what assessment options exist.
Both of these stages involve a foundation of occupation- and grade-level competencies – that is, the knowledge, skills, abilities, behaviors, and experiences it takes to do the job. When a classifier recommends PD updates, they apply pre-set classification standards comprising job duties for each position or grade. These job duties are built in turn around competencies. Similarly, an assessment specialist considers competencies when deciding how to evaluate a candidate for a job.
Each agency – and sometimes sub-agency unit – has its own authority to determine job competencies. This has caused divergent competency analyses, PDs, and assessment methods to proliferate across agencies. Though the job of a Grade 9 marine biologist at the National Oceanic and Atmospheric Administration (NOAA) is unlikely to differ considerably from the job of a Grade 9 marine biologist at the Fish and Wildlife Service (FWS), the competencies associated with the two positions are unlikely to be aligned. This duplication of competency work across agencies is costly and time-consuming.
Plan of Action
An Intergovernmental Platform for Competencies, PDs, Classifications, and Assessment Tools to Accelerate and Improve Hiring
To address the challenges outlined above, the Office of Personnel Management (OPM), the Office of Management and Budget (OMB), and the Chief Human Capital Officers (CHCO) Council should create a web platform that makes it easy for federal agencies to align and exchange competencies, position descriptions, and assessment strategies for common occupations. This platform would help federal hiring managers and staffing specialists quickly compile a unified package that they can use from PD development through candidate selection when hiring for occupations included on the platform.
To build this platform, the next administration should:
- Invest in creating Position Description libraries, starting with unitary agencies (e.g., the Environmental Protection Agency) and then broadening out to larger, disaggregated ones (e.g., the Department of Health and Human Services). Each agency should assign individuals responsible for keeping its PDs in the library current. Agencies and OPM would look for opportunities to merge common PDs. OPM would then aggregate these libraries into a “master” PD library for use within and across agencies. OPM should also share examples of best-in-class JOAs associated with each PD. This effort could be piloted with the most common occupations by agency.
- Adopt competency frameworks and assessment tools already developed by industry associations, professional societies, and unions for their professions. These organizations have completed the job task analyses and have developed competency frameworks, definitions, and assessments for the occupations they cover. For example, IEEE has developed competency models and assessment instruments for electrical and computer engineering. Again, this effort could be piloted by starting with the most common occupations by agency, and with the occupations for which external organizations have already developed effective competency frameworks and assessment tools.
- Create a clearinghouse of assessments at OPM, indexed to each occupation in the PD library. Assign the lead agencies responsible for those occupations’ PDs to keep the assessments current and the test banks robust enough to meet agency needs. Expand USA Hire and its funding to provide open access for agencies, hiring managers, HR professionals, and program leaders.
- Standardize classification determinations for occupations/grade levels included in the master PD library. This will reduce interagency variation in classification changes by occupation and grade level, increase transparency for hiring managers, and reduce burden on staffing specialists and classifiers.
- Delegate authority to CHCOs to mandate use of shared, common PDs, assessments, competencies, and classification determinations. This means cleaning up the many regulatory mandates that do not already give agency-level CHCOs this delegated authority. The workforce policy and oversight agencies (OPM, OMB, the Merit Systems Protection Board (MSPB), and the Equal Employment Opportunity Commission (EEOC)) need to change regulations, policies, and practices to reduce duplication, delegate decision-making, and lower variation (for example, by allowing classifiers and assessment professionals to default to external, standardized occupation- and grade-level competencies instead of creating or re-creating them in each instance).
- Share decision frameworks that determine assessment strategy/tool selection. Clear, public, transparent, and shared decision criteria for determining the best fit assessment strategy will help hiring managers and HR staffing specialists participate more effectively in executing assessments.
- Agree to and implement common data elements for interoperability. Many agencies will need to integrate this platform into their own talent acquisition systems, such as ServiceNow, Monster, and USA Staffing. To transfer data between these systems, agencies will need to accelerate their work on common HR data elements for position descriptions, competencies, and assessments (a minimal sketch of what such shared elements might look like follows this list).
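To make the interoperability idea concrete, the sketch below shows one way shared data elements for PDs, competencies, and assessments might be expressed. It is a minimal illustration under assumed field names, not an existing OPM data standard or schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Competency:
    name: str                   # e.g., "data analysis"
    definition: str
    proficiency_level: int      # shared scale agreed across agencies, e.g., 1-5

@dataclass
class PositionDescription:
    occupational_series: str    # OPM series code, e.g., "0401"
    grade: str                  # e.g., "GS-09"
    owning_agency: str
    competencies: List[Competency] = field(default_factory=list)

@dataclass
class Assessment:
    occupational_series: str    # same key links the assessment to the PD library entry
    grade: str
    method: str                 # e.g., "structured interview", "work sample"
    measured_competencies: List[str] = field(default_factory=list)
```

Because the PD and the assessment share the same occupation and grade keys, a hiring manager could pull a current PD and a matching assessment from the platform in one step, and agencies could exchange records without re-mapping fields.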
Data analytics from this platform and other HR talent acquisition systems will provide insight into the effectiveness of competency development, classification determinations, common PDs and joint JOAs, assessment quality, and shared certificates of eligibles. These insights will help HR leaders and program managers improve how agency staff use common PDs, shared certificates, consistent classification determinations, and assessment tools.
Finally, hiring managers, HR specialists, and applicants need to collaborate and share information better to implement any of these ideas well. Too often, siloed responsibilities and opaque specialization set back mutual accountability, effective communications, and trust. These actions entail a significant cultural and behavior change on the part of hiring managers, HR specialists, Industrial/Organizational psychologists, classifiers, and leaders. OPM and the agencies need to support hiring managers and HR specialists in finding assessments, easing the processes that can support adoption of skills-based assessments, agreeing to common PDs, and accelerating an effective hiring process.
Conclusion
The Executive Order on skills-based hiring, recent training from OPM, OMB, and the CHCO Council on the federal hiring experience, and potential legislative action (e.g., the Chance to Compete Act) are drivers that can improve the hiring process. Though some agencies are using PD libraries, joint postings, and shared referral certificates to improve hiring, these are far from common practice. A common platform for competencies, classifications, PDs, JOAs, and assessment tools will make it easier for HR specialists, hiring managers, and others to adopt these practices – to make hiring better and faster.
Opportunities to move promising hiring practices to habit abound. Position management, predictive workforce planning, workload modeling, hiring flexibilities and authorities, and engaging candidates before, during, and after the hiring process are just some of these. Making these practices everyday habits throughout agency regions, states, and programs rather than the exception will improve hiring. Looking to the future, greater delegation of human capital authorities to agencies, streamlining of the regulations that support merit systems principles, and stronger commitments to customer experience in hiring will help remove systemic barriers to an effective, customer- and user-oriented federal hiring process.
Taking the above actions on a common platform for competency development, position descriptions, and assessments will make hiring faster and better. With some of these other actions, this can change the relationship of the federal workforce to their jobs and change how the American people feel about opportunities in their government.
The Medicare Advance Healthcare Directive Enrollment (MAHDE) Initiative: Supporting Advance Care Planning for Older Medicare Beneficiaries
Taking time to plan and document a loved one’s preferences for medical treatment and end-of-life care helps respect and communicate their wishes to doctors while reducing unnecessary costs and anxiety. There is currently no federal policy requiring anyone, including Medicare beneficiaries, to complete an Advance Healthcare Directive (AHCD), which documents an individual’s preferences for medical treatment and end-of-life care. At least 40% of Medicare beneficiaries do not have a documented AHCD. In the absence of one, medical professionals may perform major and costly interventions unknowingly against a patient’s wishes.
To address this gap, the Centers for Medicare and Medicaid Services (CMS) should launch the Medicare Advance Healthcare Directive Enrollment (MAHDE) Initiative to support all adults over age 65 who are enrolled in Medicare or Medicare Advantage plans to complete and annually renew, at no extra cost, an electronic AHCD made available and stored on Medicare.gov or an alternative secure digital platform. MAHDE would streamline the process and make it easier for Medicare enrollees to complete and store directives and for healthcare providers to access them when needed. CMS could also work with the National Committee for Quality Assurance (NCQA) to expand Advance Care Planning (ACP) Healthcare Effectiveness Data and Information Set (HEDIS) measures to include all Medicare Advantage plans caring for beneficiaries aged 65 and older.
AHCDs save families unnecessary heartache and confusion at times of great pain and vulnerability. They also aim to improve healthcare decision-making and patient autonomy, and they can function as a long-term cost-saving strategy by limiting undesired medical interventions among older adults.
Challenge and Opportunity
Advance healthcare directives document an individual’s preferences for medical treatment in medical emergencies or at the end of life.
AHCDs typically include two parts:
- Identifying a healthcare proxy or durable power of attorney, who will make decisions about an individual’s health when they are unable to.
- A living will, which describes the treatments an individual wants to receive in emergencies—such as CPR, breathing machines, and dialysis—as well as decisions on organ and tissue donation.
Other documents complement AHCDs and help communicate treatment wishes during emergencies or at the end of life. These include do-not-resuscitate orders, do-not-hospitalize orders, and Physician (or Medical) Orders for Life-Sustaining Treatment forms, along with similar portable medical order forms for seriously ill or frail individuals (e.g., Medical Orders for Scope of Treatment, Physician Orders for Scope of Treatment, and Transportable Physician Orders for Patient Preferences). These forms are all designed to honor an individual’s healthcare preferences in a future medical emergency.
With older adults projected to make up more than 20% of the U.S. population by 2030, addressing end-of-life care challenges is increasingly urgent. As people age, their healthcare needs become more complex and expensive. Notably, 25% of all Medicare spending goes toward treating people in the last 12 months of their life. However, despite commonly receiving more aggressive treatments, many older adults prefer less intensive medical interventions and prioritize quality of life over prolonging life. This discrepancy between the care received and a patient’s wishes is common, highlighting the need for clear and proactive communication and planning around medical preferences. Research shows that patients who engage in advance care planning are less likely to receive unwanted and aggressive treatments in their last weeks of life, are more likely to enroll in hospice for comfort-focused care, and are less likely to die in hospitals or intensive care units.
Established ACP Policies and Support Mechanisms
Historically, some federal policies have underscored the importance of patient decision-making rights and the role of AHCDs in helping patients receive their desired care. These policies reflect the ongoing effort to empower patients to make informed decisions about their healthcare, particularly in end-of-life situations.
The Patient Self-Determination Act (PSDA), a federal law introduced in 1990 as a part of the Omnibus Budget Reconciliation Act, was created to ensure that patients are informed of their rights regarding medical care and their ability to make decisions about that care, especially in situations where they are no longer able to make decisions for themselves.
The Act requires hospitals, skilled nursing facilities (SNFs), home health agencies, hospice programs, and health maintenance organizations to:
- Inform patients of their rights to make decisions under state law about their medical care, including accepting or refusing treatment.
- Periodically inquire whether a patient has completed a legally valid AHCD and make note in their medical record.
- Not discriminate against patients who do or do not have an advance directive.
- Ensure AHCDs and other documented care wishes are carried out, as permitted by state law.
- Provide education to staff, patients, and the community about AHCDs and the right to make their own medical decisions.
It also directs the Secretary of Health and Human Services to research and assess the implementation of this law and its impact on health decision-making. Additionally, to encourage physicians and qualified health professionals to facilitate ACP conversations and complete AHCDs, CMS introduced two new billing codes in 2016 that allow qualified health providers to bill Medicare for advance care planning as a separate service, regardless of diagnosis, place of service, or how often services are needed (Figure 1). These codes were expanded in 2017 with the temporary Healthcare Common Procedure Coding System code G0505, followed by CPT code 99483, which covers care planning and cognitive assessment services, including advance care planning, for Medicare beneficiaries with cognitive impairment.
In 2022, ACP was introduced as one of four key components of the Care for Older Adults (COA) initiative within the Healthcare Effectiveness Data and Information Set measures. HEDIS is a proprietary set of clinical care performance measures developed by the National Committee for Quality Assurance (NCQA), a private, nonprofit accreditation organization that creates standardized measures to help health plans assess and report on the quality of care and services. HEDIS evaluates areas such as chronic disease management, preventive care, and care utilization, enabling reliable comparisons of health plan performance and identifying areas for improvement. It is reported that 235 million Americans are enrolled in plans that report HEDIS results, and reporting HEDIS measures is mandatory for Medicare Advantage plans.
The COA initiative includes the ACP measure as a reporting requirement for Medicare Special Needs Plans (SNPs), which are plans designed for individuals who have complex care needs, are eligible for Medicare and Medicaid, have disabling or chronic conditions, and/or live in an institution. The report includes the percentage of Medicare Advantage members within a specified population who participate in ACP discussions each year. This population currently includes:
- Adults aged 65–80 with advanced illness, signs of frailty, or those receiving palliative care.
- All adults aged 81 and older.
HEDIS measures contribute to the overall STAR rating of Medicare Advantage and other health plans, which helps beneficiaries choose high-quality plans, enables highly rated plans to attract more members, and influences the funding and bonuses CMS provides to these plans.
Despite the PSDA, CMS provider reimbursement codes that incentivize physicians and qualified health professionals to facilitate advance care planning, and its recent inclusion in HEDIS measures for Medicare Special Needs Plans, there remain many barriers to completing AHCDs.
Barriers to AHCD Completion
Although Medicare provides health and financial security to nearly all Americans aged 65 and older, completing a comprehensive AHCD is not universally expected within this population. Conversations about treatment decisions in future emergencies and end-of-life care are often avoided for various cultural, religious, financial, and mental health reasons. When they do happen, preferences are more often shared with loved ones but not documented or communicated to healthcare professionals. It is perhaps unsurprising, then, that only about half of Medicare beneficiaries have completed an AHCD. Studies show that of those who have, most do so in conjunction with estate planning, which may explain increasing cultural and socioeconomic disparities in the completion of AHCDs.
For Medicare beneficiaries who wish to complete an AHCD with a physician or qualified health professional, Medicare covers advance care planning without cost-sharing only when it is delivered as part of the annual wellness visit. If ACP is provided outside of this visit, the beneficiary must first meet the Medicare Part B deductible, which is $240 in 2024. If the deductible has not already been met through other Part B services (such as doctor visits, preventive care, mental health services, or outpatient procedures), the beneficiary is responsible for the deductible plus a 20% coinsurance payment. Additionally, some states may require attorney services or notarization to legally validate an AHCD, which can incur extra costs.
These additional costs can make it challenging for many Medicare beneficiaries to complete an AHCD when they want to. Furthermore, depending on the complexity of their situation and their readiness to make decisions, many patients may need more than one visit with their clinical provider to settle on preferences for critical illness and end-of-life care, creating further out-of-pocket expenses.
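For illustration only, the minimal sketch below works through the cost-sharing arithmetic described above for an ACP visit billed under Part B outside the annual wellness visit; it uses the 2024 Part B deductible ($240) and 20% coinsurance cited in this memo, and the visit charge is a hypothetical figure rather than an actual Medicare-allowed amount.

```python
def acp_out_of_pocket(allowed_charge: float,
                      deductible_remaining: float,
                      coinsurance_rate: float = 0.20) -> float:
    """Estimate out-of-pocket cost for an ACP visit billed under Medicare Part B.

    allowed_charge: Medicare-allowed amount for the visit (hypothetical here).
    deductible_remaining: portion of the annual Part B deductible not yet met.
    coinsurance_rate: beneficiary share after the deductible (20% under Part B).
    """
    deductible_paid = min(allowed_charge, deductible_remaining)
    coinsurance = (allowed_charge - deductible_paid) * coinsurance_rate
    return deductible_paid + coinsurance

# Example: a hypothetical $250 ACP visit early in the year, before any of the
# $240 Part B deductible (2024) has been met.
print(acp_out_of_pocket(allowed_charge=250.0, deductible_remaining=240.0))
# -> 242.0 (the full $240 deductible plus 20% of the remaining $10)

# The same service furnished during the annual wellness visit carries no
# beneficiary cost sharing.
```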
AHCDs can also vary widely, are not uniform across states, and are often stored in paper formats that can be easily lost or damaged, or embedded in bulky, multipage estate plans. Efforts to centralize AHCDs have been made through state-based registries, but their availability and management vary significantly, and there is limited data on their use and effectiveness. Additionally, the Uniform Health-Care Decisions Act (UHCDA) and private ACP programs such as Five Wishes, MyDirectives, and the U.S. Advance Care Plan Registry (USACPR), among others, have helped broaden ACP accessibility and awareness, although data from private ACP programs are not widely published or have shown variable results. The UHCDA, first drafted and approved in 1993 by the Uniform Law Commission—a nonprofit organization focused on promoting consistency in state laws—was updated in 2023 and aims to address these variations in state AHCD policies, though with uneven success so far. The Five Wishes program reports 40 million copies of its paper and digital advance directives in circulation nationwide, and its digital program recently launched a partnership with MyDirectives, a leader in digital advance care planning, to facilitate electronic access to legally recognized ACP documents. Unfortunately, data on the completion and storage of these directives are not consistently reported across all users.
Despite efforts by numerous organizations to improve ACP completion, access, and usability, the lack of updated federal policy supporting advance care planning makes it difficult for patients to complete directives and for healthcare providers to quickly locate and interpret them in critical situations. When AHCDs are not available, incomplete, or hard to find, medical professionals may be unaware of patients’ care preferences during urgent moments, leading to treatment decisions that may not align with patients’ wishes.
Plan of Action
To support all Medicare beneficiaries aged 65 and older in documenting their end-of-life care preferences, encourage the completion of AHCDs, and improve accessibility of AHCDs for healthcare professionals, CMS should launch the Medicare Advance Healthcare Directive Enrollment Initiative to focus on the following four interventions.
Recommendation 1. Streamline the process of AHCD completion and electronic storage during open enrollment through Medicare.gov or an alternative CMS-approved secure ACP digital platform.
To provide more clarity and support to fulfill patients’ wishes in their end-of-life care, CMS should empower all adults over the age of 65 enrolled in Medicare and Medicare Advantage plans to complete an electronic AHCD and renew it annually, at no extra cost, during Medicare’s designated open enrollment period. Though electronic completion is preferred, paper options will continue to be available and can be submitted for electronic upload and storage.
Supporting the completion of an AHCD during open enrollment presents a strategic opportunity to integrate AHCD completion into overall discussions about healthcare options. New open enrollment tools can be made easily available on Medicare.gov or in partnership with an existing digital ACP platform such as the USACPR, the newly established Five Wishes and MyDirectives partnership, or a centralized repository of state registries, enabling beneficiaries to complete and safely store their directives electronically. User-friendly tools and resources should be tailored to guide beneficiaries through the process and should be age-appropriate and culturally sensitive.
Building on this approach, some states are also taking steps to integrate electronic ACP completion and storage into healthcare enrollment processes. For example, in 2022, Maryland unanimously passed legislation that mandates payers to offer ACP options to all members during open enrollment and at regular intervals thereafter. It also requires payers to receive notifications on the completion and updates of ACP documents. Additionally, providers are required to use an electronic platform to create, upload, and store AHCDs.
An annual electronic renewal process during open enrollment would allow Medicare beneficiaries to review their selections and make changes so their choices stay up to date. The annual review would also create educational opportunities around the risks and benefits of life-extending interventions through the secure Medicare enrollment portal, and a yearly interval is frequent enough to account for abrupt changes in health status as individuals age. Electronic completion also fits the modern technological healthcare landscape and can be done in person or via a telehealth ACP visit with a physician or qualified health professional. Updates to AHCDs can also be made at any time outside of the open enrollment period.
CMS could also work across state lines and in collaboration with private ACP organizations, the UHCDA, and state-appointed AHCD representatives to develop a universal template for advance directives that would be acceptable nationwide. Alternatively, Medicare.gov could provide tailored, state-specific electronic forms to meet state legal requirements, like the downloadable forms provided by organizations such as AARP, a nonprofit, nonpartisan organization for Americans over 50, and CaringInfo, a program of the National Hospice and Palliative Care Organization. Either approach would ensure AHCDs are legally compliant while centralizing access to the correct forms for easy completion and secure electronic storage.
Recommendation 2. Remove barriers to access advance care planning services.
CMS should remove the deductible and 20% coinsurance when beneficiaries engage in voluntary ACP services with a physician or other qualified health professional outside of their yearly wellness visit.
The current deductible and coinsurance requirements may discourage beneficiaries from completing their AHCDs with the guidance of a medical provider, as these costs can be prohibitive. This mirrors how higher cost-sharing and out-of-pocket health expenses often result in cost-related nonadherence, reducing healthcare engagement and prescription medication adherence. When individuals face higher out-of-pocket costs for care, they are more likely to delay treatments, avoid doctor visits, and fill fewer prescriptions, even if they have insurance coverage. Removing deductibles and coinsurance for ACP visits would allow individuals to complete or update their AHCDs as needed, without financial strain and with support from their clinical team, just as they already can for preventive services.
Additionally, CMS could consider continued health provider education on facilitating ACP conversations and partnerships with organizations like the Institute for Healthcare Improvement’s The Conversation Project, which encourages open discussions about end-of-life care preferences. Partnering with Evolent (formerly Vital Decisions) could also support ongoing telehealth discussions between behavioral health specialists and older adults, focusing on late-life and end-of-life care preferences to encourage formal AHCD completion. Internal studies of the Evolent program, aimed at Medicare Advantage beneficiaries, demonstrated an average savings of $13,956 in the final six months of life and projected a potential Medicare spending reduction of up to $8.3 billion.
These enhancements recognize advance care planning as an ongoing process of discussion and documentation that ensures a patient’s care and interventions reflect their values, beliefs, and preferences when they are unable to make decisions for themselves. They also emphasize that goals of care are dynamic: as goals evolve, beneficiaries should feel supported and empowered to update their AHCDs affordably, with guidance from educational tools and trained professionals when needed.
Recommendation 3. Ensure electronic accessibility for healthcare providers.
CMS should also integrate the Medicare.gov AHCD storage system or a CMS-approved alternative with existing electronic health records (EHRs).
EHR systems in the United States currently lack full interoperability, meaning that when patients move through the continuum of care—from preventive services to medical treatment, rehabilitation, and ongoing care maintenance—and between healthcare systems, their medical records, including AHCDs, may not transfer with them. This makes it challenging for healthcare providers to efficiently access these directives and deliver care that aligns with a patient’s wishes when the patient is incapacitated. To address this, CMS could encourage the Medicare.gov AHCD storage system, or an alternative CMS-approved secure ACP digital platform, to interface with all EHRs.
This storage platform could operate as an external add-on feature, allowing AHCDs to be accessible through any EHR, regardless of the healthcare system. Such external add-ons are typically third-party tools or modules that integrate with existing EHR systems to extend functionality, often addressing needs not covered by the core system. These add-ons are commonly used to connect EHRs with tools like clinical decision support systems, telehealth platforms, health information exchanges, and patient communication tools.
Such a universal, electronic system would prevent AHCDs from being misplaced, make them easily accessible across different states and health systems, and allow for easy updates. This would ensure that Medicare beneficiaries’ end-of-life care preferences are consistently honored, regardless of where they receive care.
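As a rough sketch of how such an external add-on might work technically, the example below queries a standards-based (FHIR) document endpoint for a patient’s most recent advance directive. The endpoint URL and the document-type code are placeholders, not details of any existing CMS system; the sketch only illustrates that a single shared store could be queried from any EHR.

```python
import requests

# Hypothetical FHIR endpoint for a CMS-approved AHCD storage platform.
FHIR_BASE = "https://ahcd-registry.example.gov/fhir"

def latest_advance_directive(patient_id: str, token: str) -> dict | None:
    """Return the most recently updated advance directive document stored
    for a patient, or None if no directive is on file.

    Assumes directives are stored as FHIR DocumentReference resources and
    that PLACEHOLDER_TYPE_CODE is the document-type code the platform uses
    for advance directives (a placeholder, not a confirmed code).
    """
    response = requests.get(
        f"{FHIR_BASE}/DocumentReference",
        params={
            "patient": patient_id,
            "type": "PLACEHOLDER_TYPE_CODE",
            "_sort": "-date",  # newest first
            "_count": 1,
        },
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    entries = response.json().get("entry", [])
    return entries[0]["resource"] if entries else None
```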
Recommendation 4. Provide financial incentives for AHCD completion.
CMS should offer financial incentives for completing an AHCD, including options like tax credits, reduced or waived copayments and deductibles, prescription rebates, or other health-related subsidies.
Medicare’s increasing monthly premiums and cost-sharing requirements are often a substantial burden, especially for beneficiaries on fixed incomes. Nearly one in four Medicare enrollees age 65 and older report difficulty affording premiums, and almost 40% of those with incomes below twice the federal poverty level struggle to cover these costs. Additional financial burdens arise from extended care beyond standard coverage limits.
For example, in 2024, Medicare requires beneficiaries to pay $408 per day for inpatient, rehabilitation, inpatient psychiatric, and long-term acute care during days 61–90 of a hospital stay, up to $12,240 for that period. Beyond 90 days, beneficiaries incur $816 per day for up to 60 lifetime reserve days, up to an additional $48,960. Once these lifetime reserve days are used, they are never replenished, and patients then bear all inpatient costs. Although the average hospital length of stay is typically shorter, inpatient days under Medicare do not need to be consecutive: if a patient is discharged and readmitted within a 60-day period, these payment responsibilities continue to apply and do not reset until there has been at least a 60-day break in care.
Medicare coverage for skilled nursing facilities is similarly limited: while Medicare fully covers the first 20 days when a patient is transferred from a qualifying inpatient stay (at least three consecutive inpatient days, excluding the day of discharge), days 21–100 require a copayment of $204 per day, up to $16,320. After 100 days, all SNF costs fall to the beneficiary. These costs are significant, and without out-of-pocket maximums, they can create financial hardship.
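The short sketch below restates the hospital coinsurance arithmetic above as a function of days used in a single benefit period. It uses only the 2024 daily rates cited in this section, ignores the Part A deductible and SNF copayments, and is an illustration rather than a coverage calculator.

```python
def part_a_hospital_coinsurance(days_used: int,
                                daily_61_90: float = 408.0,
                                daily_reserve: float = 816.0) -> float:
    """Estimate Part A hospital coinsurance for one benefit period (2024 rates).

    Days 1-60: no daily coinsurance (the separate Part A deductible is ignored here).
    Days 61-90: daily coinsurance, up to 30 days.
    Days 91-150: lifetime reserve days, up to 60 days, never replenished.
    """
    days_61_90 = max(0, min(days_used, 90) - 60)
    reserve_days = max(0, min(days_used, 150) - 90)
    return days_61_90 * daily_61_90 + reserve_days * daily_reserve

print(part_a_hospital_coinsurance(90))   # 30 * $408 = $12,240
print(part_a_hospital_coinsurance(150))  # plus 60 * $816 = $61,200 in total
```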
Some of these costs can be subsidized with Medicare Supplemental Insurance, or Medigap plans, but they come with additional premiums. By regularly educating patients and families of these costs—and offering tax credits, waived or reduced copayments and deductibles, prescription rebates, or account credits—CMS could provide substantial financial relief while encouraging the completion of AHCDs.
Encourage Expansion of the NCQA’s ACP HEDIS Measure
Finally, the MAHDE Initiative can be coupled with the expansion of the HEDIS measure to establish a comprehensive strategy for advancing proactive healthcare planning among Medicare beneficiaries. By encouraging both the accessibility and completion of AHCDs, while also integrating ACP as a quality measure for all Medicare Advantage enrollees aged 65 and older, CMS would embed ACP into standard patient care. This approach would incentivize health plans to prioritize ACP and help align patients’ care goals with the services they receive, fostering a more patient-centered, value-driven model of care within Medicare.
Conclusion
When patients and their families are clear on their goals of care, it is much less challenging for medical staff to navigate crises and stressful clinical situations. In unfortunate cases when these decisions have not been discussed and documented before a patient becomes incapacitated, doctors witness families struggle deeply with these choices, often leading to intense disagreements, conflict, and guilt. This uncertainty can also result in care that may not align with the patient’s goals.
Physicians and other qualified health professionals should continue to be trained on best practices to facilitate ACP with patients, and more importantly, the system should be redesigned to support these conversations early and often for all older Americans. The MAHDE Initiative is feasible, empowers patients to engage in ACP, and will reduce medical costs nationwide by allowing patients to be educated about their options and choose the care they want in future emergencies and at the end of life.
Starting an AHCD enrollment initiative with the Medicare population older than age 65 and achieving success in this group can pave the way for expanding ACP efforts to other high-need groups and, eventually, the general population. This approach fosters a healthcare environment where, as a nation, we become more comfortable discussing and managing healthcare decisions during emergencies and at the end of life.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Completing an AHCD will not be a requirement for receiving Medicare benefits or care. The MAHDE Initiative will encourage all adults over age 65 who are enrolled in Medicare or Medicare Advantage plans to complete or renew an electronic AHCD annually through Medicare.gov or an alternative CMS-approved secure ACP digital platform, but participation will remain voluntary.
Working alongside state-specific submission guidelines, Medicare beneficiaries can securely complete their AHCD on their own or during a visit with a qualified medical provider or health professional, either in person or through telehealth.
- Online submission: An accessible electronic version will be available on Medicare.gov or an alternative CMS-approved secure ACP digital platform, allowing individuals to complete and submit their AHCD online, with guidance from their care provider as needed.
- Paper version: Alternatively, individuals can choose to complete a paper version of the AHCD, which can then be submitted to Medicare or a CMS-approved alternative for electronic upload, storage, and access by healthcare professionals through Medicare.gov.
Reviews, updates, or confirmation of “no change” to these directives can be made annually during open enrollment, either online or by resubmitting updated paper forms, or at any other time as desired. These flexible options aim to make AHCD completion accessible and convenient for all Medicare beneficiaries.
The author does not endorse specific products, individuals, or organizations. Any references are intended as examples or options for further exploration, not as endorsements or formal recommendations.
Driving Equitable Healthcare Innovations through an AI for Medicaid (AIM) Initiative
Artificial intelligence (AI) has transformative potential in the public health space – in an era when millions of Americans have limited access to high-quality healthcare services, AI-based tools and applications can enable remote diagnostics, drive efficiencies in implementation of public health interventions, and support clinical decision-making in low-resource settings. However, innovation driven primarily by the private sector today may be exacerbating existing disparities by training models on homogenous datasets and building tools that primarily benefit high socioeconomic status (SES) populations.
To address this gap, the Center for Medicare and Medicaid Innovation (CMMI) should create an AI for Medicaid (AIM) Initiative to distribute competitive grants to state Medicaid programs (in partnership with the private sector) for pilot AI solutions that lower costs and improve care delivery for rural and low-income populations covered by Medicaid.
Challenge & Opportunity
In 2022, the United States spent $4.5 trillion on healthcare, accounting for 17.3% of total GDP. Despite spending far more on healthcare per capita than other high-income countries, the United States has significantly worse outcomes, including lower life expectancy, higher death rates from avoidable causes, and less access to healthcare services. Further, the 80 million low-income Americans who rely on state-administered Medicaid programs often have below-average health outcomes and the least access to healthcare services.
AI has the potential to transform the healthcare system – but innovation driven solely by the private sector risks exacerbating the inequities described above. Algorithms in general are often trained on datasets that do not represent the underlying population – in many cases, these training biases produce tools and models that perform poorly for racial minorities, people living with comorbidities, and people of low SES. For example, until January 2023, the model used to prioritize patients for kidney transplants systematically ranked Black patients lower than White patients – the race component was identified and removed thanks to advocacy efforts within the medical community. AI models, while significantly more powerful than traditional predictive algorithms, are also more difficult to understand and engineer, making it more likely that such biases are perpetuated.
Additionally, startups innovating in the digital health space today are not incentivized to develop solutions for marginalized populations. For example, in FY 2022, the top 10 startups focused on Medicaid received only $1.5B in private funding, while their Medicare Advantage (MA)-focused counterparts received over $20B. Medicaid’s lower margins are not attractive to investors, so digital health development targets populations that are already well insured and have higher degrees of access to care.
The federal government is uniquely positioned to bridge the incentive gap between private sector developers of AI-based tools and the American communities that would benefit most from those tools. Accordingly, the Center for Medicare and Medicaid Innovation (CMMI) should launch the AI for Medicaid (AIM) Initiative to incentivize and pilot novel AI healthcare tools and solutions targeting Medicaid recipients. Precedents in other countries show that government incentives can unlock health AI innovation – in 2023, the United Kingdom’s National Health Service (NHS) partnered with Deep Medical to pilot AI software that streamlines services by predicting and mitigating missed-appointment risk. The successful pilot is now being adopted more broadly and is projected to save the NHS over $30M annually in the coming years.
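To give a sense of what such a pilot involves technically, the sketch below trains a simple no-show risk model on a hypothetical appointment table. The file name, feature names, and workflow are illustrative assumptions for this memo, not details of the Deep Medical system or of any state Medicaid program.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical appointment-level data; the file and column names are placeholders.
appointments = pd.read_csv("appointments.csv")
features = ["lead_time_days", "prior_no_shows", "distance_to_clinic_km",
            "telehealth_offered", "age"]
X, y = appointments[features], appointments["no_show"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", roc_auc_score(y_test, risk))

# Flag the highest-risk appointments for reminders or flexible scheduling,
# rather than acting on a hard yes/no prediction for any individual patient.
flagged = X_test.assign(risk=risk).nlargest(100, "risk")
```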
The AIM Initiative, guided by the structure of the former Medicaid Innovation Accelerator Program (IAP), President Biden’s executive order on integrating equity into AI development, and HHS’ Equity Plan (2022), will encourage the private sector to partner with State Medicaid programs on solutions that benefit rural and low-income Americans covered by Medicaid and drive efficiencies in the overall healthcare system.
Plan of Action
CMMI will launch and operate the AIM Initiative within the Department of Health and Human Services (HHS). $20M of HHS’ annual budget request will be allocated towards the program. State Medicaid programs, in partnership with the private sector, will be invited to submit proposals for competitive grants. In addition to funding, CMMI will leverage the former structure of the Medicaid IAP program to provide state Medicaid agencies with technical assistance throughout their participation in the AIM Initiative. The programs ultimately selected for pilot funding will be monitored and evaluated for broader implementation in the future.
Sample Detailed Timeline
- 0-6 months:
- HHS Secretary to announce and launch the AI for Medicaid (AIM) Initiative within CMMI (e.g., delineating personnel responsibilities and engaging with stakeholders to shape the program)
- HHS to include AIM funding in annual budget request to Congress ($20M allocation)
- 6-12 months:
- CMMI to engage directly with state Medicaid agencies to support proposal development and facilitate connections with private sector partners
- CMMI to complete solicitation period and select ~7-10 proposals for pilot funding of ~$2-5M each by end of Year 1
- Year 2-7: Launch and roll out selected AI projects, led by state Medicaid agencies with continued technical assistance from CMMI
- Year 8: CMMI to produce an evaluative report and provide recommendations for broader adoption of AI tools and solutions within Medicaid-covered and other populations
Risks and Limitations
- Participation: Success of the initiative relies on state Medicaid programs and private sector partners’ participation. To mitigate this risk, CMMI will engage early with the National Association of Medicaid Directors (NAMD) to generate interest and provide technical assistance in proposal development. These conversations will also include input and support from the HHS Office of the Chief AI Officer (OCAIO) and its AI Council/Community of Practice. Further, startups in the healthcare AI space will be invited to engage with CMMI on identifying potential partnerships with state Medicaid agencies. A secondary goal of the initiative will be to ensure a number of private sector partners are involved in AIM.
- Oversight: AI is at the frontier of technological development, and it is critical to ensure guardrails are in place to protect patients using AI technologies from potential adverse outcomes. To mitigate this risk, state Medicaid agencies will be required to submit detailed evaluation plans with their proposals. Additionally, informed consent and the ability to opt out of data sharing will be required whenever personally identifiable information (PII) or diagnostic and therapeutic technologies are involved. Technology partners (whether private, academic, or public sector) will further be required to demonstrate (1) adequate testing to identify and reduce bias in their AI tools to reasonable standards (one simple form of such testing is sketched after this list), (2) engagement with beneficiaries in the development process, and (3) use of testing environments that reflect the particular context of the Medicaid population. Finally, all proposals must adhere to AI guidelines adopted by HHS and the federal government more broadly, such as the CMS AI Playbook, the HHS Trustworthy AI Playbook, and any forthcoming regulations.
- Longevity: As a pilot grant program, the initiative does not promise long-term results for the broader population and will only facilitate short-term projects at the state level. Consequently, HHS leadership must remain committed to program evaluation and a long-term outlook on how AI can be integrated to support Americans more broadly. AI technologies or tools considered for acquisition by state Medicaid agencies or federal agencies after pilot implementation should ensure compliance with OMB guidelines.
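As referenced in the Oversight item above, one simple form of bias testing is to compare a model’s selection and error rates across demographic subgroups before deployment. The sketch below assumes a hypothetical evaluation table with a model score, an observed outcome, and a subgroup label; the column names and threshold are illustrative, and real evaluations would use richer fairness metrics.

```python
import pandas as pd
from sklearn.metrics import recall_score

def subgroup_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Compare selection rates and true-positive rates across subgroups.

    Expects columns: 'score' (model risk score), 'outcome' (observed 0/1 label),
    and 'group' (e.g., race/ethnicity or rurality); all names are hypothetical.
    """
    rows = []
    for group, sub in df.groupby("group"):
        preds = (sub["score"] >= threshold).astype(int)
        rows.append({
            "group": group,
            "n": len(sub),
            "selection_rate": preds.mean(),
            "true_positive_rate": recall_score(sub["outcome"], preds,
                                               zero_division=0),
        })
    # Large gaps in these rates between groups are a signal to re-examine the
    # training data and model design before the tool reaches beneficiaries.
    return pd.DataFrame(rows)
```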
Conclusion
The AI for Medicaid Initiative is an important step in ensuring the promise of artificial intelligence in healthcare extends to all Americans. The initiative will enable the piloting of a range of solutions at a relatively low cost, engage with stakeholders across the public and private sectors, and position the United States as a leader in healthcare AI technologies. Leveraging state incentives to address a critical market failure in the digital health space can additionally unlock significant efficiencies within the Medicaid program and the broader healthcare system. The rural and low-income Americans reliant on Medicaid have too often been an afterthought in access to healthcare services and technologies – the AIM Initiative provides an opportunity to address this health equity gap.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Strategies to Accelerate and Expand Access to the U.S. Innovation Economy
In 2020, we outlined a vision for how the incoming presidential administration could strengthen the nation’s innovation ecosystem, encouraging the development and commercialization of science and technology (S&T) based ventures. This vision entailed closing critical gaps from lab to market, with an emphasis on building a broadly inclusive pipeline of entrepreneurial talent while simultaneously providing key support in venture development.
During the intervening years, we have seen extraordinary progress, in good part due to ambitious legislation. Today, we propose innovative ways that the federal government can successfully build on this progress and make the most of new programs. With targeted policy interventions, we can efficiently and effectively support the U.S. innovation economy through the translation of breakthrough scientific research from the lab to the market. The action steps we propose are predicated on three core principles: inclusion, relevance, and sustainability. Accelerating our innovation economy and expanding access to it can make our nation more globally competitive, increase economic development, address climate change, and improve health outcomes. A strong innovation economy benefits everyone.
Challenge
Our Day One 2020 memo began by making the case for innovation and entrepreneurship: “Advances in scientific and technological innovations—and, critically, the ability to efficiently transform breakthroughs into scalable businesses—have contributed enormously to American economic leadership over the past century.” Now, it is widely recognized that innovation and entrepreneurship are key to both global economic leadership and addressing the challenges of a changing climate. The question is no longer whether we must innovate but rather how effectively we can stimulate and expand a national innovation economy.
Since 2020, the global and U.S. economies have gone through massive change and uncertainty. The Global Innovation Index (GII) 2023 described the difficulty of tracking global innovation trends amid uncertainty brought on by a sluggish economic recovery from the COVID-19 pandemic, elevated interest rates, and geopolitical tensions. Innovation indicators such as scientific publications, research and development (R&D) spending, venture capital (VC) deals, and the number of patents rose to historic levels, but the value of VC investment declined by close to 40%. As a counterweight to this extensive uncertainty, the GII 2023 pointed to “the promise of Digital Age and Deep Science innovation waves and technological progress.”
In the face of the pressures of global competitiveness, societal needs, and climate change, the clear way forward is to continue to innovate based on scientific and technical advancements. Meeting the challenges of our moment in history requires a comprehensive and multifaceted effort led by the federal government with many public and private partners.
Grow global competitiveness
Around the world, countries are realizing that investing in innovation is the most efficient way to transform their economies. In 2022, the U.S. had the largest R&D budget internationally, with spending growing by 5.6%, but China’s investment in R&D grew by 9.8%. For the U.S. to remain a global economic leader, we must continue to invest in innovation infrastructure, including the basic research and science, technology, engineering, and math (STEM) education that underpins our leadership, while we grow our investments in translational innovation. This includes reframing how existing resources are used as well as allocating new spending. It will require a systems change orientation and long-term commitments.
Increase economic development
Supporting and growing an innovation economy is one of our best tools for economic development. From place-based innovation programs to investment in emerging research institutions (ERIs) and Minority-Serving Institutions (MSIs) to training S&T innovators to become entrepreneurs in I-Corps™, these initiatives stimulate local economies, create high-quality jobs, and reinvigorate regions of the country left behind for too long.
Address climate change
In 2023, for the first time, global warming exceeded 1.5°C for an entire year. It is likely that all 12 months of 2024 will also exceed 1.5°C above pre-industrial temperatures. Nationally and internationally, we are experiencing the effects of climate change; climate mitigation, adaptation, and resilience solutions are urgently needed and will bring outsized economic and social impact.
Improve U.S. health outcomes
The COVID-19 pandemic was devastating, particularly impacting underserved and underrepresented populations, but it spurred unprecedented medical innovation and commercialization of new diagnostics, vaccines, and treatments. We must build on this momentum by applying what we’ve learned about rapid innovation to continue to improve U.S. health outcomes and to ensure that our nation’s health care needs across regions and demographics are addressed.
Make innovation more inclusive
Representational disparities persist across racial/ethnic and gender lines in both access to and participation in innovation and entrepreneurship. This is a massive loss for our innovation economy. The business case for broader inclusion and diversity is growing even stronger, with compelling data tracking the relationship between leadership diversity and company performance. Inclusive innovation is more effective innovation: a multitude of perspectives and lived experiences are required to fully understand complex problems and create truly useful solutions. To reap the full benefits of innovation and entrepreneurship, we must increase access and pathways for all.
Opportunity
With the new presidential administration in 2025, the federal government has a renewed opportunity to prioritize policies that will generate and activate a wave of powerful, inclusive innovation and entrepreneurship. Implementing such policies and funding the initiatives that result is crucial if we as a nation are to successfully address urgent problems such as the climate crisis and escalating health disparities.
Our proposed action steps are predicated on three core principles: inclusion, relevance, and sustainability.
Inclusion
One of this nation’s greatest and most unique strengths is our heterogeneity. We must leverage our diversity to meet the complexity of the substantial social and economic challenges that we face today. The multiplicity of our people, communities, identities, geographies, and lived experiences gives the U.S. an edge in the global innovation economy: When we bring all of these perspectives to the table, we better understand the challenges that we face, and we are better equipped to innovate to meet them. If we are to harness the fullness of our nation’s capacity for imagination, ingenuity, and creative problem-solving, entrepreneurship pathways must be inclusive, equitable, and accessible to all. Moreover, all innovators must learn to embrace complexity, think expansively and critically, and welcome perspectives beyond their own frame of reference. Collaboration and mutually beneficial partnerships are at the heart of inclusive innovation.
Relevance
Innovators and entrepreneurs have the greatest likelihood of success—and the greatest potential for impact—when their work is purpose-driven, nimble, responsive to consumer needs, and adaptable to different applications and settings. Research suggests that “breakthrough innovation” occurs when different actors bring complementary and independent skills to co-create interesting solutions to existing problems. Place-based innovation is one strategy to make certain that technology development is grounded in regional concerns and aspirations, leading to better outcomes for all concerned.
Sustainability
Multiple layers of sustainability should be integrated into the innovation and entrepreneurship landscape. First and most salient is supporting the development of innovative technologies that respond to the climate crisis and bolster national resilience. Second is encouraging innovators to incorporate sustainable materials and processes in all stages of research and development so that products benefit the planet and risks to the environment are mitigated through the manufacturing process, whether or not climate change is the focus of the technology. Third, it is vital to prioritize helping ventures develop sustainable business models that will result in long-term viability in the marketplace. Fourth, working with innovators to incorporate the potential impact of climate change into their business planning and projections ensures they are equipped to adapt to changing needs. All of these layers contribute to sustaining America’s social well-being and economic prosperity, ensuring that technological breakthroughs are accessible to all.
Proposed Action
Recommendation 1. Supply and prepare talent.
Continuing to grow the nation’s pipeline of S&T innovators and entrepreneurs is essential. Specifically, creating accessible entrepreneurial pathways in STEM will ensure equitable participation. Incentivizing individuals to become innovators-entrepreneurs, especially those from underrepresented groups, will strengthen national competitiveness by leveraging new, untapped potential across innovation ecosystems.
Expand the I-Corps model
By bringing together experienced industry mentors, commercial experts, research talent, and promising technologies, I-Corps teaches scientific innovators how to evaluate whether their innovation can be commercialized and how to take the first practical steps of bringing their product to market. Ten new I-Corps Hubs, launched in 2022, have expanded the network of engaged universities and collaborators, an important step toward growing an inclusive innovation ecosystem across the U.S.
Interest in I-Corps far outpaces current capacity, and increasing access will create more expansive pathways for underrepresented entrepreneurs. New federal initiatives to support place-based innovation and to grow investment at ERIs and MSIs will be more successful if they also include lab-to-market training programs such as I-Corps. Federal entities should institute policies and programs that increase awareness about and access to sequenced venture support opportunities for S&T innovators. These opportunities should include intentional “de-risking” strategies through training, advising, and mentoring.
Specifically, we recommend expanding I-Corps capacity so that all interested participants can be accommodated. We should also strive to increase access to I-Corps so that programs reach diverse students and researchers. This is essential given the U.S. culture of entrepreneurship that remains insufficiently inclusive of women, people of color, and those from low-income backgrounds, as well as international students and researchers, who often face barriers such as visa issues or a lack of institutional support needed to remain in the U.S. to develop their innovations. Finally, we should expand the scope of what I-Corps offers, so that programs provide follow-on support, funding, and access to mentor and investor networks even beyond the conclusion of initial entrepreneurial training.
I-Corps has already expanded beyond the National Science Foundation (NSF) to I-Corps at National Institutes of Health (NIH), to empower biomedical entrepreneurs, and Energy I-Corps, established by the Department of Energy (DOE) to accelerate the deployment of energy technologies. We see the opportunity to grow I-Corps further by building on this existing infrastructure and creating cohorts funded by additional science agencies so that more basic research is translated into commercially viable businesses.
Close opportunity gaps by supporting emerging research institutions (ERIs) and Minority-Serving Institutions (MSIs)
ERIs and MSIs provide pathways to S&T innovation and entrepreneurship, especially for individuals from underrepresented groups. In particular, a VentureWell-commissioned report identified that “MSIs are centers of research that address the unique challenges and opportunities faced by BIPOC communities. The research that takes place at MSIs offers solutions that benefit a broad and diverse audience; it contributes to a deeper understanding of societal issues and drives innovation that addresses these issues.”
The recent codification of ERIs in the 2022 CHIPS and Science Act pulls this category into focus. Defining this group, which comprises thousands of higher education institutions, was the first step in addressing the inequitable distribution of federal research funding. That imbalance has perpetuated regional disparities and impacted students from underrepresented groups, low-income students, and rural students in particular. Further investment in ERIs will result in more STEM-trained students, who can become innovators and entrepreneurs with training and engagement. Additional support that could be provided to ERIs includes increased research funding, access to capital/investment, capacity building (faculty development, student support services), industry partnerships, access to networks, data collection/benchmarking, and implementing effective translation policies, incentives, and curricula.
Supporting these institutions—many of which are located in underserved rural or urban communities that experience underinvestment—provides an anchor for sustained talent development and economic growth.
Recommendation 2. Support place-based innovation.
Place-based initiatives not only spur innovation but also build resilience in vulnerable communities, enhancing both U.S. economic and national security. Communities that are underserved and underinvested in present vulnerabilities that hostile actors outside the U.S. can exploit. By creating high-quality jobs and bringing energy and hope to communities that have been left behind, place-based innovation leverages the unique strengths, ecosystems, assets, and needs of specific regions to drive economic growth and address local challenges.
Evaluate and learn from transformative new investments
There have been historic levels of government investment in place-based innovation, funding the NSF’s Regional Innovation Engines awards and two U.S. Department of Commerce Economic Development Administration (EDA) programs: the Build Back Better Regional Challenge and Regional Technology and Innovation Hubs awards. The next steps are to refine, improve, and evaluate these initiatives as we move forward.
Unify the evaluation framework, paired with local solutions
Currently, evaluating the effectiveness and outcomes of place-based initiatives is challenging, as benchmarks and metrics can vary by region. We propose a unified framework paired with solutions locally identified by and tailored to the specific needs of the regional innovation ecosystem. A functioning ecosystem cannot be simply overlaid upon a community but must be built by and for that community. The success of these initiatives requires active evaluation and incorporation of these learnings into effective solutions, as well as deep strategic collaboration at the local level, with support and time built into processes.
Recommendation 3. Increase access to financing and capital.
Funding is the lifeblood of innovation. S&T innovation requires more investment and more time to bring to market than other types of ventures, and early-stage investments in S&T startups are often perceived as risky by those who seek a financial return. Bringing large quantities of early-stage S&T innovations to the point in the commercialization process where substantial private capital takes an interest requires nondilutive and patient government support. The return on investment that the federal government seeks is measured in companies successfully launched, jobs created, and useful technologies brought to market.
Disparities in access to capital by companies owned by women and underrepresented minority founders are well documented. The federal government has an interest in funding innovators and entrepreneurs from many backgrounds: they bring deep and varied knowledge and a multitude of perspectives to their innovations and to their ventures. This results in improved solutions and better products at a cheaper price for consumers. Increasing access to financing and capital is essential to our national economic well-being and to our efforts to build climate resilience.
Expand SBIR/STTR access and commercial impact
The SBIR and STTR programs spur innovation, bolster U.S. economic competitiveness, and strengthen the small business sector, but barriers persist. In a recent third-party assessment of the SBIR/STTR program at NIH, the second largest administrator of SBIR/STTR funds, the committee found outreach from the SBIR/STTR programs to underserved groups is not coordinated, and there has been little improvement in the share of applications from or awards to these groups in the past 20 years. Further, NIH follows the same processes used for awarding R01 research grants, using the same review criteria and typically the same reviewers, omitting important commercialization considerations.
To expand access and increase the commercialization potential of the SBIR/STTR program, funding agencies should foster partnerships with a broader group of organizations, conduct targeted outreach to potential applicants, offer additional application assistance, work with partners to develop mentorship and entrepreneur training programs, and increase the percentage of private-sector reviewers with entrepreneurial experience. Successful examples of SBIR/STTR support programs include the NSF Beat-The-Odds Boot Camp, Michigan’s Emerging Technologies Fund, and the SBIR/STTR Innovation Summit.
Provide entrepreneurship education and training
Initiatives like NSF Engines, Tech Hubs, Build-Back-Better Regional Challenge, the Minority Business Development Agency (MBDA) Capital Challenge, and the Small Business Administration (SBA) Growth Accelerator Fund expansion will all achieve more substantial results with supplemental training for participants in how to develop and launch a technology-based business. As an example of the potential impact, more than 2,500 teams have participated in I-Corps since the program’s inception in 2012. More than half of these teams, nearly 1,400, have launched startups that have cumulatively raised $3.16 billion in subsequent funding, creating over 11,000 jobs. Now is an opportune moment to widely apply similarly effective approaches.
Launch a local investment education initiative
Angel investors typically provide the first private funding available to S&T innovators and entrepreneurs. These very early-stage funders give innovators access to the capital, networks, and advice needed to get their ventures off the ground. We recommend that the federal government expand the definition of an accredited investor and incentivize regionally focused initiatives to educate policymakers and other regional stakeholders about best practices for fostering more diverse and inclusive angel investment networks. With the right approach and support, there is the potential to engage thousands more high-net-worth individuals in early-stage investing, contributing their expertise and networks as well as their wealth.
Encourage investment in climate solutions
Extreme climate-change-attributed weather events such as floods, hurricanes, drought, wildfire, and heat waves cost the global economy an average of $143 billion annually. S&T innovations have the potential to help address the impacts of climate change at every level:
- Mitigation. Promising new ideas and technologies can slow or even prevent further climate change by reducing or removing greenhouse gasses.
- Adaptation. We can adapt processes and systems to better respond to adverse events, reducing the impacts of climate change.
- Resilience. By anticipating, preparing for, and responding to hazardous events, trends, or disturbances caused by climate change, we can continue to thrive on our changing planet.
Given the global scope of the problem and the shared resources of affected communities, the federal government can be a leader in prioritizing, collaborating, and investing in solutions to direct and encourage S&T innovation for climate solutions. There is no question whether climate adaptation technologies will be needed, but we must ensure that these solutions are technologies that create economic opportunity in the U.S. We encourage the expansion and regular appropriations of funding for successful climate programs across federal agencies, including the DoE Office of Technology Transitions’ Energy Program for Innovation Clusters, the National Oceanic and Atmospheric Administration’s (NOAA) Ocean-Based Climate Resilience Accelerators program, and the U.S. Department of Agriculture’s Climate Hubs.
Recommendation 4. Shift to a systems change orientation.
To truly stimulate a national innovation economy, we need long-term commitments in policy, practice, and regulations. Leadership and coordination from the executive branch of the federal government are essential to continue the positive actions already begun by the Biden-Harris Administration.
These initiatives include:
- Scientific integrity and evidence-based policy-making memo
- Catalyzing Clean Energy Industry Executive Order
- Implementation of the Infrastructure Investment and Jobs Act
- Advancing Biotechnology and Biomanufacturing Innovation for a Sustainable, Safe, and Secure American Bioeconomy
- Implementation of the CHIPS Act of 2022
- Advancing Women’s Health Research and Innovation
Policy
Signature initiatives like the CHIPS and Science Act, Infrastructure Investment and Jobs Act, and the National Quantum Initiative Act are already threatened by looming appropriations shortfalls. We need to fully fund existing legislation, with a focus on innovative and translational R&D. According to a report by PricewaterhouseCoopers, if the U.S. increased federal R&D spending to 1% of GDP by 2030, the nation could support 3.4 million jobs and add $301 billion in labor income, $478 billion in economic value, and $81 billion in tax revenue. Beyond funding, we propose supporting innovative policies to bolster U.S. innovation capacity at the local and national levels. This includes providing R&D tax credits to spur research collaboration between industry and universities and labs, providing federal matching funds for state and regional technology transfer and commercialization efforts, and revising the tax code to support innovation by research-intensive, pre-revenue companies.
Practice
The University and Small Business Patent Procedures Act of 1980, commonly known as the Bayh-Dole Act, allows recipients of federal research funding to retain rights to inventions conceived or developed with that funding. The academic tech transfer system created by the Bayh-Dole Act (codified as amended at 35 U.S.C. §§ 200-212) generated nearly $1.3 trillion in economic output, supported over 4.2 million jobs, and launched over 11,000 startups. We should preserve the Bayh-Dole Act as a means to promote commercialization and prohibit the consideration of specific factors, such as price, in march-in determinations.
In addition to the continued implementation of successful laws such as Bayh-Dole, we must repurpose resources to support innovation and the high-value jobs that result from S&T innovation. We believe the new administration should allocate a share of federal funding to promote technology transfer and commercialization and better incentivize commercialization activities at federal labs and research institutes. This could include new programs, such as mentoring programs for researcher-entrepreneurs and student entrepreneurship training programs. Incentives could include evaluating the economic impact of lab-developed technology by measuring commercialization outcomes in the annual Performance Evaluation and Management Plans of federal labs, establishing stronger university entrepreneurship reporting requirements to track and reward universities that create new businesses and startups, and encouraging universities to weigh commercialization activities more heavily in faculty promotion and tenure.
Regulations
A common cause of lab-to-market failure is the inability to secure regulatory approval, particularly for novel technologies in nascent industries. Regulation can limit potentially innovative paths, increase innovation costs, and create a compliance burden on businesses that stifles innovation. But regulation can also spur innovation by enabling the management of risk. In 1976, the Cambridge (Massachusetts) City Council became the first jurisdiction to regulate recombinant DNA research, later issuing the first genetic engineering license and providing the regulatory certainty that helped the first biotech companies take root there. Now Boston/Cambridge is the world’s largest biotech hub: home to over 1,000 biotech companies, 21% of all VC biotech investments, and 15% of the U.S. drug development pipeline.
To advance innovation, we propose two specific regulatory actions:
- Climate. We recommend the Environmental Protection Agency (EPA) adopt market-based strategies to help fight climate change by monitoring and regulating CO2 emissions, putting an explicit price on carbon emissions, and incentivizing businesses to find cost-effective and innovative ways to reduce those emissions.
- Health. We recommend strengthening regulatory collaboration between the Food and Drug Administration (FDA) and the Centers for Medicare & Medicaid Services (CMS) to establish a more efficient and timely reimbursement process for novel FDA-authorized medical devices and diagnostics. This includes refining the Medicare Coverage of Innovative Technologies rule and fully implementing the new Transitional Coverage for Emerging Technologies pathway to expedite the review, coverage determination, and reimbursement of novel medical technologies.
Conclusion
To maintain its global leadership role, the United States must invest in the individuals, institutions, and ecosystems critical to a thriving, inclusive innovation economy. This includes mobilizing access, inclusion, and talent through novel entrepreneurship training programs; investing, incentivizing, and building the capacity of our research institutions; and enabling innovation pathways by increasing access to capital, networks, and resources.
Fortunately, several important pieces of legislation are recommitting the United States to bold S&T goals, although many of the necessary resources have yet to be committed to those efforts. As a society, we benefit when federally supported innovation efforts tackle big problems that are beyond the scope of single ventures; notably, the many challenges arising from climate change. A stronger, more inclusive innovation economy benefits the users of S&T-based innovations, individual innovators, and the nation as a whole.
When we intentionally create pathways to innovation and entrepreneurship for underrepresented individuals, we build on our strengths. In the United States, our strength has always been our people, who bring problem-solving abilities from a multitude of perspectives and settings. We must unleash their entrepreneurial power and become, even more, a country of innovators.
Earlier memo contributors Heath Naquin and Shaheen Mamawala (2020) were not involved with this 2024 memo.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Collaborative Intelligence: Harnessing Crowd Forecasting for National Security
“The decisions that humans make can be extraordinarily costly. The wars in Iraq and Afghanistan were multi-trillion dollar decisions. If you can improve the accuracy of forecasting individual strategies by just a percentage point, that would be worth tens of billions of dollars.” – Jason Matheny, CEO, RAND Corporation
Predicting the future—a notoriously hard problem—is a core function of the Office of the Director of National Intelligence (ODNI). Crowd forecasting methods offer a systematic approach to quantifying the U.S. intelligence community’s uncertainty about the future and predicting the impact of interventions, allowing decision-makers to strategize effectively and allocate resources by outlining risks and tradeoffs in a legible format. We propose that ODNI leverage its earlier investments in crowd-forecasting research to enhance intelligence analysis and interagency coordination. Specifically, ODNI should develop a next-generation crowd-forecasting program that balances academic rigor with policy relevance. To do this, we propose partnering a Federally Funded Research and Development Center (FFRDC) with crowd forecasting experience with executive branch agencies to generate high-value forecasting questions and integrate targeted forecasts into existing briefing and decision-making processes. Crucially, end users (e.g. from the NSC, DoD, etc.) should be embedded in the question-generation process in order to ensure that the forecasts are policy-relevant. This approach has the potential to significantly enhance the quality and impact of intelligence analysis, leading to more robust and informed national security decisions.
Challenge & Opportunity
ODNI is responsible for the daunting task of delivering insightful, actionable intelligence in a world of rapidly evolving threats and unprecedented complexity. Traditional analytical methods, while valuable, struggle to keep pace with the speed and intricacy of global events that demand dynamic, frequently updated assessments. Crowd forecasting provides infrastructure for building shared understanding across the Intelligence Community (IC) with a very low barrier to entry. Through the process, each agency can share its assessments of likely outcomes and planned actions based on its intelligence, to be aggregated alongside those of other agencies. These techniques can serve as powerful tools for interagency coordination within the IC, quickly surfacing areas of consensus and disagreement. By building upon the foundation of existing Intelligence Advanced Research Projects Activity (IARPA) crowd forecasting research — including IARPA’s Aggregative Contingent Estimation (ACE) tournament and Hybrid Forecasting Competition (HFC) — ODNI has within its reach significant low-hanging fruit for improving the quality of its intelligence analysis and the use of this analysis to inform decision-making.
Despite the IC’s significant investment in research demonstrating the potential of crowd forecasting, integrating these approaches into decision-making processes has proven difficult. The first-generation forecasting competitions showed significant returns from basic cognitive debiasing training, above and beyond the benefits of crowd forecast aggregation. Yet, attempts to incorporate forecasting training and probabilistic estimates into intelligence analysis have fallen flat due in large part to internal politics. Accordingly, the incentives within and among agencies must be considered in order for any forecasting program to deliver value. Importantly, any new crowd forecasting initiative should be explicitly rolled out as a complement, not a substitute, to traditional intelligence analysis.
Plan of Action
The incoming administration should direct the Office of the Director of National Intelligence (ODNI) to resume its study and implementation of crowd forecasting methods for intelligence analysis. The following recommendations illustrate how this can be done effectively.
Recommendation 1. Develop a Next-Generation Crowd Forecasting Program
Direct a Federally Funded Research and Development Center (FFRDC) experienced with crowd forecasting methods, such as MITRE’s National Security Engineering Center (NSEC) or the RAND Forecasting Initiative (RFI), to develop a next-generation pilot program.
Prior IARPA studies of crowd-sourced intelligence were focused on the question: How accurate is the wisdom of the crowds on geopolitical questions? To answer this, the IARPA tournaments posed many forecasting questions, rapid-fire, over a relatively short period of time, and these questions were optimized for easy generation and resolution (i.e. straightforward data-driven questions) — at the expense of policy relevance. A next-generation forecasting program should build upon recent research on eliciting from experts the crucial questions that illuminate key uncertainties, point to important areas of disagreement, and estimate the impact of interventions under consideration.
This program should:
- Incorporate lessons learned from previous IARPA forecasting tournaments, including difficulties with getting buy-in from leadership to incentivize the participation of busy analysts and decision-makers at ODNI.
- Develop a framework for generating questions that balance rigor, resolvability, and policy relevance.
- Implement advanced aggregation and scoring methods, leveraging recent academic research and machine learning (one minimal aggregation-and-scoring sketch follows this list).
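To make the aggregation-and-scoring bullet above concrete, the sketch below illustrates one approach studied in the ACE-era literature: averaging analysts' probabilities in log-odds space, applying a mild extremization factor, and evaluating the result with the Brier score. The function names, the extremization constant, and the example data are illustrative assumptions rather than program specifications.

```python
import math

def aggregate_probability(probs, extremize=1.5):
    """Aggregate individual probability forecasts for a binary question.

    Averages forecasts in log-odds space, then applies a mild extremization
    factor -- a technique reported to improve accuracy in published analyses
    of IARPA's ACE tournament data. The factor of 1.5 is an illustrative default.
    """
    # Clip to avoid infinite log-odds at exactly 0 or 1.
    clipped = [min(max(p, 0.01), 0.99) for p in probs]
    mean_log_odds = sum(math.log(p / (1 - p)) for p in clipped) / len(clipped)
    extremized = extremize * mean_log_odds
    return 1 / (1 + math.exp(-extremized))

def brier_score(forecast, outcome):
    """Brier score for a binary question: 0 is perfect, 1 is maximally wrong."""
    return (forecast - outcome) ** 2

# Hypothetical example: five analysts forecast "Will event X occur by 31 Dec?"
analyst_forecasts = [0.60, 0.70, 0.55, 0.80, 0.65]
crowd = aggregate_probability(analyst_forecasts)
print(f"Aggregated crowd forecast: {crowd:.2f}")
print(f"Brier score if the event occurs:  {brier_score(crowd, 1):.3f}")
print(f"Brier score if it does not occur: {brier_score(crowd, 0):.3f}")
```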
Recommendation 2. Embed the Decision-Maker in the Question Generation Process
Direct the FFRDC to work directly with one or more executive branch partners to embed end users in the process of eliciting policy-relevant forecasting questions. Potential executive branch partners could include the National Security Council, Department of Defense, Department of State, and Department of Homeland Security, among others.
A formal process for question generation and refinement should be established, which could include:
- A structured methodology for transforming policy questions of interest into specific, quantifiable forecasting questions.
- A review process to ensure that questions meet criteria for both forecasting suitability and policy relevance.
- Mechanisms for rapid question development in response to emerging crises or sudden shifts.
- Feedback mechanisms to refine and improve question quality over time, with a focus on policy relevance and decision-maker user experience.
Recommendation 3. Integrate Forecasts into Decision-Making Processes
Ensure that resulting forecasts are actively reviewed by decision-makers and integrated into existing intelligence and policy-making processes.
This could involve:
- Incorporating forecast results into regular intelligence briefings, as a quantitative supplement to traditional qualitative assessments.
- Developing visualizations/dashboards (Figure 1) to enable decision-makers to explore the reasoning, drivers of disagreement, unresolved uncertainties and changes in forecasts over time.
- Organizing training sessions for senior leadership on how to interpret and use probabilistic forecasts in decision-making.
- Establishing a simple, formal process by which policymakers can request forecasts on questions relevant to their work.
- Creating a review process to assess how forecasts influenced decisions and their outcomes.
- Using forecasts as a tool for interagency coordination, to surface ideas and concerns that people may be hesitant to bring up in front of their superiors (a minimal sketch of flagging cross-agency disagreement follows this list).
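As one illustration of how such coordination could work in practice, the sketch below flags questions on which agencies' probability estimates diverge widely and therefore merit structured discussion. The agency names, probabilities, and the 0.15 spread threshold are hypothetical placeholders, not drawn from any real forecasting data.

```python
from statistics import median, pstdev

def flag_disagreement(question, agency_probs, threshold=0.15):
    """Flag a forecasting question for interagency discussion when agencies'
    probability estimates are widely dispersed.

    `agency_probs` maps agency name -> probability; the 0.15 spread threshold
    is an illustrative choice, not a calibrated standard.
    """
    probs = list(agency_probs.values())
    spread = pstdev(probs)
    return {
        "question": question,
        "median": median(probs),
        "spread": round(spread, 3),
        "flag_for_discussion": spread > threshold,
    }

# Hypothetical example: three agencies weigh in on the same question.
print(flag_disagreement(
    "Will country X conduct a missile test this quarter?",
    {"Agency A": 0.25, "Agency B": 0.70, "Agency C": 0.40},
))
```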
Conclusion
ODNI’s mission to “deliver the most insightful intelligence possible” demands continuous innovation. The next-generation forecasting program outlined in this document is the natural next step in advancing the science of forecasting to serve the public interest. Crowd forecasting has proven itself as a generator of reliable predictions, more accurate than any individual forecaster. In an increasingly complex information environment, our intelligence community needs to use every tool at its disposal to identify and address its most pressing questions about the future. By establishing a transparent and rigorous crowd-forecasting process, ODNI can harness the collective wisdom of diverse experts and analysts and foster better interagency collaboration, strengthening our nation’s ability to anticipate and respond to emerging global challenges.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
The Energy Transition Workforce Initiative
The energy transition underway in the United States continues to present a unique set of opportunities to put Americans back to work through the deployment of new technologies, infrastructure, energy efficiency, and expansion of the electricity system to meet our carbon goals. Unlike many previous industrial transitions, the U.S. can directly influence the pace of change, promote greater social equity, and create new jobs to replace those that are phasing out.
Since 2021, significant policies have been enacted to support this transition, including the Infrastructure Investment and Jobs Act (IIJA), CHIPS and Science Act, and the Inflation Reduction Act. The most recent Congressional Budget Office estimates place the energy-related spending of these three pieces of legislation at $956 billion or more over a 10-year period.
Despite these historic investments, additional work remains to be done. To supplement the accomplishments of the last four years, the next administration should:
- Establish the Energy Workforce and Economic Development Extension Program inside the Department of Energy (DOE).
- Restore the interagency Energy and Advanced Manufacturing Workforce Initiative.
- Initiate the Energy Transition Community Benefits Training Program.
- Establish a national public-private commission on steel decarbonization.
- Restore the DOE Labor Working Group under the direction of a senior advisor to the Secretary of Energy.
Challenge and Opportunity
In 2023, the energy sector added over 250,000 jobs, with clean energy accounting for 56% of the new jobs. Energy efficiency jobs, such as the manufacture and installation of heat pumps, grew by 74,700, the most of any technology area. While energy jobs are found in every state in America, fossil fuel production jobs and the infrastructure associated with them are highly concentrated. In 2020, 73% of the roughly one million oil, coal, and natural gas production jobs were in just 10 states. By 2023, 70,000 of those jobs had been lost in the same 10 states, leaving the communities that host them at risk of economic decline. The Interagency Working Group on Coal and Power Plant Communities was established by Executive Order in 2021 to address this issue and provide new incentives for clean energy production, such as the Sparkz and Form Energy battery plants in West Virginia. To date, over $538 billion of competitive and formula funding has been provided to “revitalize America’s energy communities.”
Plan of Action
On day one, the next administration should announce the expansion of the DOE Office of Energy Jobs to lead the following efforts.
Recommendation 1. Establish the Energy Workforce and Economic Development Extension Program (EWEDEP) inside the DOE.
Modeled after the Agricultural Extension Program, and in partnership with the National Laboratories, the EWEDEP should provide technical advice to the state decarbonization plans funded by the Environmental Protection Agency, as well as to municipalities, regional entities, tribal governments, and private-sector businesses. Led by the Office of Energy Jobs, this program should also assist regional, state, local, and tribal governments in developing and implementing technical decarbonization strategies that simultaneously create good local jobs. State and regional support staff for the Office of Energy Jobs should be located in each of the national laboratories.
Recommendation 2. Restore the interagency Energy and Advanced Manufacturing Workforce Initiative (EAMWI).
During the Obama Administration, EAMWI, run by the Department of Energy, coordinated activities among the Departments of Energy, Labor, Education, Commerce, and Defense and the National Science Foundation to harmonize planning, training, and curriculum development for the new energy workforce. In addition to resuming those coordination activities, the next administration should mandate that the EAMWI produce quarterly assessments of the workforce training needs and opportunities created by the energy transition. Based on updated USEER data from 2024 and ongoing occupational needs assessments, EAMWI should provide annual reports on state energy workforce needs to the appropriate federal and state agencies in charge of energy, education, and economic development strategies.
Recommendation 3. Initiate the Energy Transition Community Benefits Training Program.
Community Benefit Plans (CBPs) and Community Benefit Agreements (CBAs) have emerged as the primary tools for monitoring job quality metrics in the energy transition, particularly those that are supported by federal government grants and loans. This program should provide expert training in the design and performance of CBPs and CBAs for company executives, community organizations and advocates, labor unions, and local government employees. This program should be informed by an advisory board of experts from business schools, trade associations, labor unions, and community stakeholders.
Recommendation 4. Establish a national public-private commission on steel decarbonization.
Decarbonizing the steel industry will be one of the most difficult and expensive challenges posed by the energy transition. Appointing a national commission of industry stakeholders, including business, labor, communities, and federal agencies, will be critical for developing a model for managing hard-to-decarbonize industrial sectors of the economy in ways that create quality jobs, protect communities, and build broad consensus among the American people. DOE should also establish an Office of Steel Decarbonization to implement the commission’s recommendations.
Recommendation 5. Restore the DOE Labor Working Group under the direction of a senior advisor to the Secretary of Energy.
The DOE Labor Working Group provided monthly guidance on how to implement high wage strategies in the energy sector while preserving jobs and reducing greenhouse gas emissions. Member organizations included energy sector unions involved in the mining, extraction, manufacturing, construction, utility, and transportation industry sectors.
After initiating these actions on day one, the next administration should prioritize legislation establishing an Energy Transition Adjustment Assistance Program (ETAAP). In some cases, the loss of fossil fuel jobs in concentrated parts of the country will require retraining of current employees to prepare them for new careers with new employers. The U.S. will need a program that provides income support greater than extended unemployment benefits to recipients undergoing retraining. Such a program should learn from the shortcomings of the Trade Adjustment Assistance (TAA) program by providing more supportive services. Based on two-year training costs and average participation rates of TAA-certified beneficiaries, a minimum of $20 billion for worker retraining should be allocated as part of this effort.
In addition, the Interagency Working Group on Coal and Power Plant Communities should be consulted to design standards for broad eligibility to participate in the ETAAP, including energy-intensive manufacturing businesses impacted by the energy transition. Finally, as existing energy companies transition to producing cleaner forms of energy, the program should consider subsidizing the retraining of existing energy-sector employees to provide new skills for the transition.
Conclusion
Unlike many previous industrial transitions, which were driven by new technologies and market forces, decarbonization is driven largely by social policy interventions. Thus, well-planned responses, based on timely clean-energy economic development investments, can provide good jobs and economic opportunity for displaced workers and affected communities. The clean energy tax credits included in the IRA should be maintained and extended. Labor standards and domestic content rules should be attached to both grants and formula spending. Finally, the lending authorities for the DOE Loan Program Office should be expanded to include energy infrastructure, energy-intensive manufacturing, and energy efficiency projects. With such an approach, the U.S. and its workers can benefit from the global push to decarbonize.
This idea was originally published on February 1, 2021. We’ve republished this updated version on November 27, 2024.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
The main challenge is providing a timely economic development response to impacted communities before the most serious job losses have occurred. Our goal is to create a Federal Emergency Management Agency (FEMA)-like response in advance of the economic storm that threatens to devastate some communities as fossil fuel jobs disappear. Unlike FEMA, however, most federal economic development programs are not designed to respond to emergency job loss, and they require annual appropriations and lengthy preparations.
The overall success of the Energy Transition Workforce Initiative will be measured by the number and quality of jobs created in the communities expected to be hardest hit by the energy transition, the timeliness of the intervention, and the stability of those communities. Utilization rates of EWEDEP technical support by regional, state, local, and tribal governments developing implementation plans will also be a primary measure.
Promoting Fusion Energy Leadership with U.S. Tritium Production Capacity
As a fusion energy future becomes increasingly tangible, the United States should proactively prepare for it if/when it arrives. A single, commercial-scale fusion reactor will require more tritium fuel than is currently available from global civilian-use inventories. For fusion to be viable, greater-than-replacement tritium breeding technologies will be essential. Before the cycle of net tritium gain can begin, however, the world needs sufficient tritium to complete R&D and successfully commission first-of-a-kind (FOAK) fusion reactors. The United States has the only proven and scalable tritium production supply chain, but it is largely reserved for nuclear weapons. Excess tritium production capacity should be leveraged to ensure the success of and U.S. leadership in fusion energy.
The Trump administration should reinforce U.S. investments and leadership in commercial fusion with game-changing innovation in the provision of tritium fuel. The Congressional Fusion Energy Caucus has growing support in the House, with 92 members, and an emerging Senate counterpart chaired by Sen. Martin Heinrich. Energy security and independence are important areas of bipartisan cooperation, but strong leadership from the White House will be needed to set a bold, America-first agenda.
Challenge and Opportunity
Fusion energy R&D currently relies on limited reserves of tritium from non-scalable production streams. These reserves shrink by ~5% each year due to radioactive decay, which makes stockpiling difficult. One recent estimate suggests that global stocks of civilian-use tritium are just 25–30kg, while commissioning and startup of a single commercial fusion reactor may require up to 10kg. The largest source of civilian-use tritium is Canada, which produces ~2kg/yr as a byproduct of heavy water reactor operation, but most of that material is intended to fuel the International Thermonuclear Experimental Reactor (ITER) in the next decade. Canadian tritium production is directly coupled to the power generation of the country’s fleet of Canadian Deuterium Uranium (CANDU) reactors; therefore, the only way to increase the tritium production rate is to build more CANDU power reactors.
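The decay arithmetic behind those figures can be checked directly: tritium's half-life of about 12.3 years implies that a fixed inventory loses roughly 5.5% per year, so even steady production works against a decay treadmill. The sketch below runs that calculation using the inventory and production estimates cited above; the 27.5kg starting point is simply the midpoint of the 25–30kg estimate, and none of the outputs are official projections.

```python
import math

TRITIUM_HALF_LIFE_YEARS = 12.32                           # physical constant
DECAY_CONSTANT = math.log(2) / TRITIUM_HALF_LIFE_YEARS    # ~0.056 per year

def inventory_after(years, start_kg, production_kg_per_year=0.0):
    """Inventory remaining after `years`, with decay and constant annual production.

    Closed-form solution of dN/dt = P - lambda * N.
    """
    lam = DECAY_CONSTANT
    decayed_start = start_kg * math.exp(-lam * years)
    produced_and_decayed = (production_kg_per_year / lam) * (1 - math.exp(-lam * years))
    return decayed_start + produced_and_decayed

print(f"Annual decay loss: {1 - math.exp(-DECAY_CONSTANT):.1%}")            # ~5.5%
# Civilian stocks of ~27.5 kg (midpoint of the 25-30 kg estimate), left idle:
print(f"27.5 kg after 10 years, no production: {inventory_after(10, 27.5):.1f} kg")
# Same stocks with ~2 kg/yr of CANDU byproduct production:
print(f"Same stocks with ~2 kg/yr production:  {inventory_after(10, 27.5, 2.0):.1f} kg")
```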
The National Nuclear Security Administration (NNSA) (an office of the U.S. Department of Energy (DOE)) – in cooperation with the Tennessee Valley Authority (TVA) – will produce up to ~4kg of tritium over the next fuel cycles (i.e., ~18-month cycles offset by 6 months) of the two Watts Bar Nuclear (WBN) reactors. This would exceed the current, combined 2.8kg production goal, and production could be pushed further, to ~4.7kg of tritium, if the reactors were operated at their maximum licensed limit. All of this tritium is designated for military use. However, the NNSA and DOE could leverage production capacities in excess of defense requirements to promote the deployment of FOAK reactors and support U.S. leadership in fusion energy. The DOE could build off the success of its current Milestone-Based Fusion Program by integrating the option for additional tritium availability to meet the commissioning demands of pilot and commercial fusion reactors.
This program could be called “Gigatons-to-Gigawatts” (GtG), a name inspired by one of the most successful fissile material reduction programs in history, Megatons-to-Megawatts. The increased scale reflects the much higher energy density of tritium relative to the uranium commonly used to fuel fission reactors. Fusion and fission reactor technologies also have very different nonproliferation implications. U.S. national security and nonproliferation goals would be furthered by a systematic transition from fission to fusion energy. Lowering reliance on dual-use nuclear fuel cycle technologies such as centrifuges for uranium enrichment would lower overall proliferation risks. Just as it did by promoting an open fuel cycle, the United States could leverage its technological leadership to promote the adoption of a more proliferation-resistant fusion infrastructure.
However, it is important to note another key difference from Megatons-to-Megawatts: because GtG leverages near-term tritium production capacities in concert with reserves, rather than repurposing stockpiled weapons-usable material for civilian use, such a program could affect the U.S. nuclear deterrent posture as well. The National Nuclear Security Administration (NNSA) Strategic Integrated Roadmap highlights the goal to “Demonstrate enhanced tritium production capability” for 2025, which is coded as “Nuclear Deterrent.” The anticipated excess production quantities noted above would correspond with this goal. Starting from this demonstrated capability, a GtG program would extend this production capacity into a longer-term effort directed toward a fusion energy future. Furthermore, in support of the long-term goal of nuclear disarmament, GtG would also provide a ready-made framework for repurposing valuable tritium from decommissioned warheads.
One way the United States demonstrates the credibility of its nuclear deterrent is through the Stockpile Stewardship and Management Plan (SSMP). Allies and adversaries alike must believe that the United States has sufficient tritium capability to replenish this critical and slowly decaying resource. An enhanced tritium production capability also has a supporting role to play in reassuring U.S. policymakers that key material design requirements are being sustainably met and that future nuclear weapon tests will be unnecessary. Even though GtG would be programmatically dedicated to the peaceful use of tritium, the technological mechanisms used to reach this goal would nonetheless be compatible with, and even complementary to, the existing nuclear defense posture.
Key facts highlighted in the 2024 Fusion Industry Association (FIA) global reports include: (i) tritium remains the key fuel source for most fusion technologies being developed; (ii) tritium self-sufficiency was seen as one of the major near-term challenges and, by a slim margin, the major challenge after 2030; and (iii) supply chain partners noted tritium was one of the top three constraints to scalability. The easiest reaction to achieve is deuterium–tritium (D–T) fusion. Other, more technologically challenging approaches to fusion energy rely on different reactions, such as deuterium–deuterium (D–D) and deuterium–Helium-3 (D–He-3) fusion. The Earth has a functionally limitless supply of deuterium; however, even though He-3 is radioactively stable, it slowly leaks from the atmosphere into space. Until humanity can mine the vast quantities of He-3 on the moon, one of the only terrestrial sources of this material is the tritium decay process. A GtG program would directly support an increase in tritium supply and indirectly support long-term He-3 reserves, since He-3 can be stockpiled. Even if fusion with He-3 proves viable, it will be necessary to produce the tritium first.
Once commercial fusion reactors begin operation, breeding tritium to replace burned fuel is a major concern because there is no alternative supply sufficient to replace shortfalls from even modest inefficiency. Operating a 1 GW fusion reactor for a year may require more than 55kg of tritium. Tritium self-sufficiency is nonnegotiable for a functional fusion industry. If technological development falters as companies strive toward a sustainable tritium breeding cycle, they may find themselves in the awkward position of needing tritium more than additional funding.
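The 55kg figure is consistent with a short back-of-envelope calculation from the physics of the D–T reaction, sketched below. It assumes 1 GW of sustained fusion power (not net electrical output) over a full year and that each fusion event consumes one triton; lower capacity factors or duty cycles would burn proportionally less.

```python
# Back-of-envelope: tritium burned by 1 GW of D-T fusion power over one year.
SECONDS_PER_YEAR = 3.156e7
ENERGY_PER_REACTION_J = 17.6e6 * 1.602e-19   # 17.6 MeV released per D-T fusion, in joules
TRITON_MASS_KG = 3.016 * 1.661e-27           # mass of one tritium nucleus

fusion_energy_j = 1e9 * SECONDS_PER_YEAR                 # 1 GW sustained for one year
reactions = fusion_energy_j / ENERGY_PER_REACTION_J      # ~1.1e28 fusion events
tritium_burned_kg = reactions * TRITON_MASS_KG

print(f"Tritium consumed: ~{tritium_burned_kg:.0f} kg")  # ~56 kg, consistent with the >55kg estimate
```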
Of the countries leading the way in private fusion ventures and public investment, the only one not closely allied with the U.S. is China, which is also the country most capable of leveraging military tritium production for fusion R&D. In stark contrast with the United States, there is no public information on China’s tritium production capacity or how much tritium it currently possesses. Since China is rapidly expanding its nuclear weapon stockpile, its material margins for repurposing tritium for peaceful use will be constrained. If a U.S. investment of tritium into fusion R&D accelerates the growth of domestic companies, then China may be forced to choose between advancing its nuclear weapons agenda and competing with the West for a fusion energy breakthrough.
The United States already has a significant lead in technological capabilities for future generations of fusion energy based on Inertial Confinement Fusion (ICF). The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) first demonstrated fusion ignition from ICF using tritium in 2022. Widely heralded as a breakthrough for the future of nuclear energy, the facility and its ICF tests also provide critical experimental support for the SSMP. To better position the United States to capitalize on these long-term investments in science and technology, fusion energy leadership should not be ceded to other nations.
Plan of Action
Recommendation 1. Name a White House “Gigatons-to-Gigawatts” czar to coordinate a long-term tritium strategy and interagency cooperation harmonizing national security and fusion energy leadership goals.
A Senior Advisor on the National Security Team of the White House Office of Science and Technology Policy (OSTP) serving as the White House czar for GtG would (i) guide and lead efforts, (ii) coordinate interagency partners, and (iii) facilitate private/public stakeholder forums. Key interagency partners include:
- The Nuclear Weapons Council (NWC)
- DOE National Nuclear Security Administration (NNSA) Office of Tritium and Domestic Uranium Enrichment
- DOE NNSA Tritium Modernization Program (NA-19)
- DOE NNSA Office of Nuclear Material Integration (ONMI) (NA-532)
- DOE Office of Science
- The Office of Fusion Energy Sciences (FES) at DOE Office of Science
- DOE Advanced Research Projects Agency – Energy (ARPA-E)
- State Bureau of International Security and Nonproliferation (ISN)
- TVA Tritium Production Program
- Nuclear Regulatory Commission (NRC) Office of Nuclear Material Safety and Safeguards (NMSS)
- Savannah River National Lab (SRNL)
- Los Alamos National Lab (LANL)
- Pacific Northwest National Lab (PNNL)
- Idaho National Lab (INL) Safety and Tritium Applied Research (STAR)
- Environmental Protection Agency (EPA) Office of Radiation and Indoor Air (ORIA)
- Fusion Energy Sciences Advisory Committee (FESAC)
Key private partners include:
- Savannah River Nuclear Solutions (SRNS) and the Savannah River Tritium Enterprise (SRTE) program
- Westinghouse Government Services (WGS) Columbia Fuel Fabrication Facility (CFFF)
- Fusion Industry Association (FIA)
A central task of the GtG czar would be to coordinate with the NWC to review Presidential Policy Directive 9 (PPD-9) and associated or superseding planning documents related to the assessment of tritium demand requirements, including (i) laboratory research, development, and surveillance and (ii) the presidentially mandated tritium reserve. These two components of the tritium requirement could potentially be expanded to address GtG needs. If deemed appropriate, the President of the United States could be advised to expand the presidentially mandated reserve. Otherwise, the former requirement could be expanded based on the optimal quantities needed to stand up a GtG program capability. A reference target would be the accumulation of ~10kg of tritium on projected timelines for commissioning full-scale FOAK fusion reactors.
The following recommendations could be coordinated by a GtG czar or done independently.
Recommendation 2. The Secretary of Energy should direct the Office of Science to evaluate the Milestone-Based Fusion Development Program for integrating GtG tritium production and supply targets with projected industry demands for commissioning fusion power plants.
The Milestone-Based Fusion Development Program has already provided awards of $46 million to 8 US companies. It is crucial to ensure that any tritium produced for a GtG program is not accumulated without a viable success path for FOAK fusion plant commissioning. Given the modest production capacities currently available at the WBN site, timelines of 5–10 years will be necessary to accumulate tritium. Each fuel cycle could allow for adjustments in production targets, but sufficient lead time will be required to anticipate and plan for necessary core changes and fuel-assembly production.
GtG tritium awards aligned with the Milestone-Based Fusion Development Program would also be more viable and attractive if costs were equitably shared between private awardees and the DOE. The U.S. Government produces tritium at WBN at a premium of ~$50,000/g whereas the market rate for tritium produced in Canada is closer to $30,000/g. A fusion company awarded tritium through the GtG program should be required to pay the prevailing market rate for tritium upon extraction at the Savannah River Site (SRS). This would allow a fusion company to benefit from increased tritium availability, while the DOE shoulders the cost differences of Tritium-Producing Burnable Absorber Rod (TPBAR) production methods. Additionally, this pay-as-you-go requirement will incentivize fusion energy companies to lay out realistic timeframes for FOAK reactor deployments.
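To illustrate how this cost-sharing could net out under the per-gram figures cited above and the ~10kg commissioning reference target from Recommendation 1, the sketch below splits the bill between an awardee paying the prevailing market rate and the DOE covering the TPBAR production premium. The prices and quantity come from this memo's own estimates; the split itself is a hypothetical program design, not established policy.

```python
US_PRODUCTION_COST_PER_GRAM = 50_000   # approximate WBN/TPBAR production cost cited above ($/g)
MARKET_RATE_PER_GRAM = 30_000          # approximate Canadian market rate cited above ($/g)
AWARD_GRAMS = 10_000                   # ~10 kg reference target for FOAK commissioning

awardee_pays = AWARD_GRAMS * MARKET_RATE_PER_GRAM
doe_covers = AWARD_GRAMS * (US_PRODUCTION_COST_PER_GRAM - MARKET_RATE_PER_GRAM)

print(f"Awardee pays at market rate:   ${awardee_pays / 1e6:,.0f}M")   # ~$300M
print(f"DOE covers production premium: ${doe_covers / 1e6:,.0f}M")     # ~$200M
```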
The Director of the Office of Science should also direct the FESAC to prepare a report on tritium demand scenarios that would apply to leading fusion technology development timelines and assess the necessary tritium breeding efficiencies needed to sustain fusion power plant operations. The FESAC should give special consideration to projecting possible mitigation and recovery strategies for tritium breeding shortfalls. The committee should also provide thresholds for FOAK fusion reactors’ short-term recoverability from tritium breeding shortfalls. Tritium quantities based on this FESAC report should be considered for future tritium hedges after these fusion reactors begin power operations.
Recommendation 3. The NNSA ONMI (NA-532) should coordinate an interagency review of the tritium supply chain infrastructure.
Raising tritium production targets beyond previously projected requirements would necessitate a review of the full supply chain, from TPBAR assembly at Westinghouse’s CFFF, through irradiation at TVA’s Watts Bar reactors, to extraction and processing through the SRTE program at SRS. Because this review naturally involves civilian reactors and the transport of nuclear materials, the NRC should also be consulted to ensure regulatory compliance is maintained. This review will provide realistic bounding limits on the quantities of tritium and the production timelines that could be designated for a GtG program. The outcome of this review will inform industry-facing efforts to better assess how additional tritium supplies could best support fusion energy R&D and pilot plant commissioning.
As part of this process, the NA-532 office should determine which existing tritium supply chain models are best suited for assessing commercial applications, including the LANL Tritium Supply and Demand Model and those developed internally by the NNSA. If no model is determined fit for purpose, then a new model should be developed to best capture the dynamics of commercial fusion R&D. In any case, existing models should form the basis for integrating military requirements and civilian markets to ensure a GtG program adequately accounts for both.
An added-value option for this recommendation would be to prepare an unclassified and publicly accessible version of the commercial tritium supply chain model. This would reinforce the transparency and public accountability already built into the production of tritium in the commercial power reactors at Watts Bar. Furthermore, such a resource would also help explain the rationale and intent behind the use of public funds to support fusion R&D and the commissioning of FOAK fusion reactors.
Recommendation 4. The Secretary of Energy should direct a review of DOE Technical Standards for addressing tritium-related radiological risks.
While the general scientific consensus is that low-level tritium exposure poses negligible human health and ecosystem risks, there are several unknowns that should be better understood before the advent of fusion energy releases unprecedented quantities of tritium into the environment. This adequacy review should include at least (i) a comprehensive analysis of risks from Organically Bound Tritium (OBT) and (ii) more precise quantification of the potential for damage to mitochondrial DNA and developing fetuses. These efforts would help ensure the responsible, consent-based rollout of tritium-intensive technologies and allow an informed public to better weigh the magnitude of the risks against the potential benefits.
Key DOE Technical Standards to include in this review:
- Derived Concentration Technical Standard (DOE-STD-1196-2022)
- Internal Dosimetry (DOE-STD-1121-2008 (Reaffirmed 2022))
- Nuclear Materials Control and Accountability (DOE-STD-1194-2019)
Recommendation 5. The Administrator of the Environmental Protection Agency (EPA) should direct the Office of Radiation and Indoor Air (ORIA) to assess the adequacy of radioactive dose calculations in the Federal Guidance Report on External Exposure to Radionuclides in Air, Water, and Soil (FGR 15) last issued in 2019.
This recommendation, along with Recommendation 4 above, will provide sufficient lead time to address any uncertainties and unknowns regarding the radiological risks posed by tritium. As with the DOE review, this adequacy assessment should include at least (i) a comprehensive analysis of risks from Organically Bound Tritium (OBT) and (ii) more precise quantification of the potential for damage to mitochondrial DNA and developing fetuses. FGR 15 currently calculates effective dose rates for “computational phantom” models of six different age groups, including newborns, that incorporate both male and female sex-specific tissues. However, effective dose rates and potential effects are not considered for developing fetuses. The uncertainty surrounding tritium’s radiological risks already prompts an extensive precautionary approach to potential exposures for declared pregnant workers; the potential for higher levels of tritium exposure among pregnant members of the public should likewise be taken into consideration when assessing the radiological risks of fusion energy.
Conclusion
With a strategically calibrated GtG program, the United States could remain the technology leader in fusion energy and potentially shorten the rollout timeline of a multi-unit fleet by several years. In the context of state-level technological competition and a multipolar nuclear security environment, years matter. A strategic GtG reserve will take years to plan and accumulate in order to ensure sufficient tritium is available at the right time.
The long-term utility of a GtG framework is not limited to the designation of new tritium production for peaceful use. Should nuclear-weapon states return to the negotiating table to reduce the number of nuclear weapons in the world, the United States would have a clear roadmap for repurposing the tritium from decommissioned weapons in support of fusion power. Previously, the United States held onto large reserves of this valuable and critical material for years while transitioning from military to civilian production. The years between 2025 and 2040 will provide more chances to put that material to productive use for fusion energy. Let us not waste this opportunity to ensure the U.S. remains at the vanguard of the fusion revolution.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
A U.S. Government Accountability Office (GAO) report from 2000 provided unclassified approximations of total life-cycle costs ranging from ~$34,000 to $57,000 per gram of tritium. With several program delays and at least one major capital investment (i.e., a 500,000-gallon Tritiated Water Storage Tank (TWST) system) costing ~$20 million, the actual life-cycle costs are likely higher. The cost of tritium produced in Canada is closer to $30,000 per gram, but, as noted above, only fixed and limited amounts of tritium can be made available through that production stream.
This is unlikely. The SSMP projects tritium needs far enough into the future that demand changes could allow for adjustments to production levels over the span of 1–2 fuel cycles (i.e., one and a half to three years). Barring a catastrophic loss of military tritium reserves or a significant nuclear accident at Watts Bar, there is unlikely to be a tritium supply emergency requiring an immediate response.
Historical tritium production amounts and capacities at SRS remain restricted data. However, due to NRC regulatory requirements for commercial reactors, this information cannot be protected for tritium production at Watts Bar. Since tritium production transparency has been the norm since 2003, the United States may further demonstrate nuclear stockpile credibility by openly producing material in excess of current military requirements.
Unobligated fuel demand would slightly increase. Unobligated fuel requirements are largely a sunk cost: regardless of how many TPBARs are being irradiated, the entire core will be composed of unobligated fuel. However, increased tritium production (i.e., irradiating more TPBARs) would require additional fresh fuel bundles per fuel cycle. The 2024 SSMP currently projects meeting Watts Bar’s unobligated fuel needs through 2044.
This would possibly require new license amendments for each reactor, but if the amounts were below the previously analyzed conditions, then a new Environmental Impact Statement (EIS) would not be required. The current license for each reactor allows for the irradiation of up to 2,496 TPBARs per fuel cycle per reactor. The EIS analysis is bounded at a maximum of 6,000 TPBARs combined per fuel cycle. The average yield of each TPBAR is 0.95g of tritium.
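Those licensing and EIS figures imply hard ceilings on per-cycle production, which the quick calculation below makes explicit (treating the 0.95g average yield per TPBAR as fixed); the ~4.7kg license-bounded result matches the maximum-licensed-limit figure cited earlier in this memo.

```python
EIS_MAX_TPBARS_PER_CYCLE = 6_000        # combined EIS bound for both Watts Bar reactors
LICENSE_MAX_TPBARS_PER_REACTOR = 2_496  # per reactor, per fuel cycle
AVG_YIELD_GRAMS = 0.95                  # average tritium yield per TPBAR

eis_bound_kg = EIS_MAX_TPBARS_PER_CYCLE * AVG_YIELD_GRAMS / 1000
license_bound_kg = 2 * LICENSE_MAX_TPBARS_PER_REACTOR * AVG_YIELD_GRAMS / 1000

print(f"EIS-bounded maximum:     {eis_bound_kg:.1f} kg per combined fuel cycle")      # ~5.7 kg
print(f"License-bounded maximum: {license_bound_kg:.1f} kg per combined fuel cycle")  # ~4.7 kg
```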
Fusion industry leaders have demonstrated confidence that existing and future supplies of civilian-use tritium, while modest, are sufficient to fuel the necessary near-term R&D. In particular, the planned refurbishments to aging Canadian CANDU reactors and the additional delays at ITER have propped open the tritium window for several more years until tritium breeding blanket technologies can mature. However, tritium supply chain bottlenecks could constrain industry momentum and/or advantage states capable of backstopping any shortages.
Policy Experiment Stations to Accelerate State and Local Government Innovation
The federal government transfers approximately $1.1 trillion every year to state and local governments. Yet most states and localities are not evaluating whether the programs deploying these funds are increasing community well-being. Similarly, achieving important national goals like increasing clean energy production and transmission often requires not only congressional but also state and local policy reform. Yet many states and localities are not implementing the evidence-based policy reforms necessary to achieve these goals.
State and local government innovation is a problem not only of politics but also of capacity. State and local governments generally lack the technical capacity to conduct rigorous evaluations of the efficacy of their programs, search for reliable evidence about programs evaluated in other contexts, and implement the evidence-based programs with the highest chances of improving outcomes in their jurisdictions. This lack of capacity severely constrains the ability of state and local governments to use federal funds effectively and to adopt more effective ways of delivering important public goods and services. To date, efforts to increase the use of evaluation evidence in federal agencies (including the passage of the Evidence Act) have not meaningfully supported the production and use of evidence by state and local governments.
Despite an emerging awareness of the importance of state and local government innovation capacity, there is a shortage of plausible strategies to build that capacity. In the words of journalist Ezra Klein, we spend “too much time and energy imagining the policies that a capable government could execute and not nearly enough time imagining how to make a government capable of executing them.”
Yet an emerging body of research is revealing that an effective strategy to build government innovation capacity is to partner government agencies with local universities on scientifically rigorous evaluations of the efficacy of their programs, curated syntheses of reliable evaluation evidence from other contexts, and implementation of evidence-based programs with the best chances of success. Leveraging these findings, along with recent evidence of the striking efficacy of the national network of university-based “Agriculture Experiment Stations” established by the Hatch Act of 1887, we propose a national network of university-based “Policy Experiment Stations” or policy innovation labs in each state, supported by continuing federal and state appropriations and tasked with accelerating state and local government innovation.
Challenge
Advocates of abundance have identified “failed public policy” as an increasingly significant barrier to economic growth and community flourishing. Of particular concern are state and local policies and programs, including those powered by federal funds, that do not effectively deliver critically important public goods and services like health, education, safety, clean air and water, and growth-oriented infrastructure.
Part of the challenge is that state and local governments lack capacity to conduct rigorous evaluations of the efficacy of their policies and programs. For example, the American Rescue Plan, the largest one-time federal investment in state and local governments in the last century, provided $350 billion in State and Local Fiscal Recovery Funds to state, territorial, local, and Tribal governments to accelerate post-pandemic economic recovery. Yet very few of those investments are being evaluated for efficacy. In a recent survey of state policymakers, 59% of those surveyed cited “lack of time for rigorous evaluations” as a key obstacle to innovation. State and local governments also typically lack the time, resources, and technical capacity to canvass evaluation evidence from other settings and assess whether a program proven to improve outcomes elsewhere might also improve outcomes locally. Finally, state and local governments often don’t adopt more effective programs even when they have rigorous evidence that these programs are more effective than the status quo, because implementing new programs disrupts existing workflows.
If state and local policymakers don’t know what works and what doesn’t, and/or aren’t able to overcome even relatively minor implementation challenges when they do know what works, they won’t be able to spend federal dollars more effectively, or more generally to deliver critical public goods and services.
Opportunity
A growing body of research on government innovation is documenting factors that reliably increase the likelihood that governments will implement evidence-based policy reform. First, government decision makers are more likely to adopt evidence-based policy reforms when they are grounded in local evidence and/or recommended by local researchers. Boston-based researchers sharing a Boston-based study showing that relaxing density restrictions reduces rents and house prices will do less to convince San Francisco decision makers than either a San Francisco-based study, or San Francisco-based researchers endorsing the evidence from Boston. Proximity matters for government innovation.
Second, government decision makers are more likely to adopt evidence-based policy reforms when they are engaged as partners in the research projects that produce the evidence of efficacy, helping to define the set of feasible policy alternatives and design new policy interventions. Research partnerships matter for government innovation.
Third, evidence-based policies are significantly more likely to be adopted when the policy innovation is part of an existing implementation infrastructure, or when agencies receive dedicated implementation support. This means that moving beyond incremental policy reforms will require that state and local governments receive more technical support in overcoming implementation challenges. Implementation matters for government innovation.
We know that the implementation of evidence-based policy reform produces returns for communities that have been estimated to be on the order of 17:1. Our partners in government have voiced their direct experience of these returns. In Puerto Rico, for example, decision makers in the Department of Education have attributed the success of evidence-based efforts to help students learn to the “constant communication and effective collaboration” with researchers who possessed a “strong understanding of the culture and social behavior of the government and people of Puerto Rico.” Carrie S. Cihak, the evidence and impact officer for King County, Washington, likewise observes,
“It is critical to understand whether the programs we’re implementing are actually making a difference in the communities we serve. Throughout my career in King County, I’ve worked with County teams and researchers on evaluations across multiple policy areas, including transportation access, housing stability, and climate change. Working in close partnership with researchers has guided our policymaking related to individual projects, identified the next set of questions for continual learning, and has enabled us to better apply existing knowledge from other contexts to our own. In this work, it is essential to have researchers who are committed to valuing local knowledge and experience–including that of the community and government staff–as a central part of their research, and who are committed to supporting us in getting better outcomes for our communities.”
The emerging body of evidence on the determinants of government innovation can help us define a plan of action that galvanizes the state and local government innovation necessary to accelerate regional economic growth and community flourishing.
Plan of Action
An evidence-based plan to increase state and local government innovation needs to facilitate and sustain durable partnerships between state and local governments and neighboring universities to produce scientifically rigorous policy evaluations, adapt evaluation evidence from other contexts, and develop effective implementation strategies. Over a century ago, the Hatch Act of 1887 created a remarkably effective and durable R&D infrastructure aimed at agricultural innovation, establishing university-based Agricultural Experiment Stations (AES) in each state tasked with developing, testing, and translating innovations designed to increase agricultural productivity.
Locating university-based AES in every state ensured the production and implementation of locally-relevant evidence by researchers working in partnership with local stakeholders. Federal oversight of the state AES by an Office of Experiment Stations in the US Department of Agriculture ensured that work was conducted with scientific rigor and that local evidence was shared across sites. Finally, providing stable annual federal appropriations for the AES, with required matching state appropriations, ensured the durability and financial sustainability of the R&D infrastructure. This infrastructure worked: agricultural productivity near the experiment stations increased by 6% after the stations were established.
Congress should develop new legislation to create and fund a network of state-based “Policy Experiment Stations.”
The 119th Congress, which convenes on January 3, 2025, can adapt the core elements of the proven-effective network of state-based Agricultural Experiment Stations to accelerate state and local government innovation. Mimicking the structure of 7 USC 14, federal grants to states would support university-based “Policy Experiment Stations” or policy innovation labs in each state, tasked with partnering with state and local governments on (1) scientifically rigorous evaluations of the efficacy of state and local policies and programs; (2) translations of evaluation evidence from other settings; and (3) overcoming implementation challenges.
As in 7 USC 14, grants to support state policy innovation labs would be overseen by a federal office charged with ensuring that work was conducted with scientific rigor and that local evidence was shared across sites. We see two potential paths for this oversight function, paths that in turn would influence legislative strategy.
Pathway 1: This oversight function could be located in the Office of Evaluation Sciences (OES) in the General Services Administration (GSA). In this case, the congressional committees overseeing GSA, namely the House Committee on Oversight and Accountability and the Senate Committee on Homeland Security and Governmental Affairs, would craft legislation providing for an appropriation to GSA to support a new OES grants program for university-based policy innovation labs in each state. The advantage of this structure is that OES is a highly respected locus of program and policy evaluation expertise.
Pathway 2: Oversight could instead be located in the Directorate of Technology, Innovation, and Partnerships in the National Science Foundation (NSF TIP). In this case, the House Committee on Science, Space, and Technology and the Senate Committee on Commerce, Science, and Transportation would craft legislation providing for a new grants program within NSF TIP to support university-based policy innovation labs in each state. The advantage of this structure is that NSF is a highly respected grant-making agency.
Either of these paths is feasible with bipartisan political will. Alternatively, there are unilateral steps that could be taken by the incoming administration to advance state and local government innovation. For example, the Office of Management and Budget (OMB) recently released updated Uniform Grants Guidance clarifying that federal grants may be used to support recipients’ evaluation costs, including “conducting evaluations, sharing evaluation results, and other personnel or materials costs related to the effective building and use of evidence and evaluation for program design, administration, or improvement.” The Uniform Grants Guidance also requires federal agencies to assess the performance of grant recipients, and further allows federal agencies to require that recipients use federal grant funds to conduct program evaluations. The incoming administration could further update the Uniform Grants Guidance to direct federal agencies to require that state and local government grant recipients set aside grant funds for impact evaluations of the efficacy of any programs supported by federal funds, and further clarify the allowability of subgrants to universities to support these impact evaluations.
Conclusion
Establishing a national network of university-based “Policy Experiment Stations” or policy innovation labs in each state, supported by continuing federal and state appropriations, is an evidence-based plan to facilitate abundance-oriented state and local government innovation. We already have impressive examples of what these policy labs might be able to accomplish. At MIT’s Abdul Latif Jameel Poverty Action Lab North America, the University of Chicago’s Crime Lab and Education Lab, the University of California’s California Policy Lab, and The People Lab at Harvard University, to name just a few, leading researchers partner with state and local governments on scientifically rigorous evaluations of the efficacy of public policies and programs, the translation of evidence from other settings, and overcoming implementation challenges, leading in several cases to evidence-based policy reform. Yet effective as these initiatives are, they are largely supported by philanthropic funds, an infeasible strategy for national scaling.
In recent years we’ve made massive investments in communities through federal grants to state and local governments. We’ve also initiated ambitious efforts at growth-oriented regulatory reform which require not only federal but also state and local action. Now it’s time to invest in building state and local capacity to deploy federal investments effectively and to galvanize regional economic growth. Emerging research findings about the determinants of government innovation, and about the efficacy of the R&D infrastructure for agricultural innovation established over a century ago, give us an evidence-based roadmap for state and local government innovation.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Accelerating Materials Science with AI and Robotics
Advances in materials science enable innumerable downstream innovations: steel enabled skyscrapers, and novel configurations of silicon enabled microelectronics. Yet progress in materials science has slowed in recent years. Fundamentally, this is because there is a vast universe of potential materials, and the only way to discover which among them are most useful is to experiment. Today, those experiments are largely conducted by hand. Innovations in artificial intelligence and robotics will allow us to accelerate the search process using foundation AI models for science research and to automate much of the experimentation with robotic, self-driving labs. This policy memo recommends that the Department of Energy (DOE) lead this effort because of its unique expertise in supercomputing and AI and its large network of National Labs.
Challenge and Opportunity
Take a look at your smartphone. How long does its battery last? How durable is its frame? How tough is its screen? How fast and efficient are the chips inside it?
Each of these questions implicates materials science in fundamental ways. The limits of our technological capabilities are defined by the limits of what we can build, and what we can build is defined by what materials we have at our disposal. The early eras of human history are named for materials: the Stone Age, the Bronze Age, the Iron Age. Even today, the cradle of American innovation is Silicon Valley, a reminder that even our digital era is enabled by finding innovative ways to assemble matter to accomplish novel things.
Materials science has been a driver of economic growth and innovation for decades. Improvements to silicon purification and processing—painstakingly worked on in labs for decades—fundamentally enabled silicon-based semiconductors, a $600 billion industry today that McKinsey recently projected would double in size by 2030. The entire digital economy, conservatively estimated by the Bureau of Economic Analysis (BEA) at $3.7 trillion in the U.S. alone, in turn, rests on semiconductors. Plastics, another profound materials science innovation, are estimated to have generated more than $500 billion in economic value in the U.S. last year. The quantitative benefits are staggering, but even qualitatively, it is impossible to imagine modern life without these materials.
However, present-day materials are beginning to show their age. We need better batteries to accelerate the transition to clean energy. We may be approaching the limits of traditional methods of manufacturing semiconductors in the next decade. We require exotic new forms of magnets to bring technologies like nuclear fusion to life. We need materials with better thermal properties to improve spacecraft.
Yet materials science and engineering—the disciplines of discovering and learning to use new materials—have slowed down in recent decades. The low-hanging fruit has been plucked, and the easy discoveries are old news. We’re approaching the limits of what our materials can do because we are also approaching the limits of what the traditional practice of materials science can do.
Today, materials science proceeds at much the same pace as it did half a century ago: manually, with small academic labs and graduate students formulating potential new combinations of elements, synthesizing those combinations, and studying their characteristics. Because there are more ways to configure matter than there are atoms in the universe, manually searching through the space of possible materials is an impossible task.
Fortunately, AI and robotics present an opportunity to automate that process. AI foundation models for physics and chemistry can be used to simulate potential materials with unprecedented speed and low cost compared to traditional ab initio methods. Robotic labs (also known as “self-driving labs”) can automate the manual process of performing experiments, allowing scientists to synthesize, validate, and characterize new materials twenty-four hours a day at dramatically lower costs. The experiments will generate valuable data for further refining the foundation models, resulting in a positive feedback loop. AI language models like OpenAI’s GPT-4 can write summaries of experimental results and even help ideate new experiments. The scientists and their grad students, freed from this manual and often tedious labor, can do what humans do best: think creatively and imaginatively.
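To make the feedback loop concrete, here is a minimal, purely illustrative sketch in Python; the model class, the lab-measurement stub, and all values are hypothetical mocks rather than any real DOE or vendor interface.

```python
"""Toy sketch of the AI + self-driving-lab feedback loop described above.
All classes and numbers are illustrative mocks, not real DOE or vendor APIs."""
import random


class MockFoundationModel:
    """Stands in for a physics/chemistry foundation model."""

    def __init__(self):
        self.training_data = []

    def score(self, composition):
        # Pretend to predict a figure of merit (e.g., stability) for a composition.
        return random.Random(composition).random()

    def sample_candidates(self, n):
        pool = [f"candidate-{i}" for i in range(100)]
        return sorted(pool, key=self.score, reverse=True)[:n]

    def update(self, results):
        # In practice: fine-tune on the new experimental data; here we just store it.
        self.training_data.extend(results)


def robotic_lab_measure(composition):
    """Stands in for automated synthesis plus characterization of one candidate."""
    return {"composition": composition, "measured_property": random.random()}


def closed_loop(model, rounds=3, batch=5):
    for _ in range(rounds):
        candidates = model.sample_candidates(batch)              # AI proposes
        results = [robotic_lab_measure(c) for c in candidates]   # lab tests (in parallel, in practice)
        model.update(results)                                    # results refine the model
    return model


if __name__ == "__main__":
    closed_loop(MockFoundationModel())
```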
Achieving this goal will require a coordinated effort, significant investment, and expertise at the frontiers of science and engineering. Because much of materials science is basic R&D—too far from commercialization to attract private investment—there is a unique opportunity for the federal government to lead the way. As with much scientific R&D, the economic benefits of new materials science discoveries may take time to emerge. One literature review estimated that it can take roughly 20 years for basic research to translate to economic growth. Research indicates that the returns—once they materialize—are significant. A study from the Federal Reserve Bank of Dallas suggests a return of 150-300% on federal R&D spending.
The best-positioned department within the federal government to coordinate this effort is the DOE, which has many of the key ingredients in place: a demonstrated track record of building and maintaining the supercomputing facilities required to make physics-based AI models, unparalleled scientific datasets with which to train those models collected over decades of work by national labs and other DOE facilities, and a skilled scientific and engineering workforce capable of bringing challenging projects to fruition.
Plan of Action
Achieving the goal of using AI and robotics to simulate potential materials with unprecedented speed and low cost, and of benefiting from the resulting discoveries, rests on five key pillars:
- Creating large physics and chemistry datasets for foundation model training (estimated cost: $100 million);
- Developing foundation AI models for materials science discovery, either independently or in collaboration with the private sector (estimated cost: $10-100 million, depending on the nature of the collaboration);
- Building 1-2 pilot self-driving labs (SDLs) aimed at establishing best practices, building a supply chain for robotics and other equipment, and validating the scientific merit of SDLs (estimated cost: $20-40 million);
- Making self-driving labs an official priority of the DOE’s preexisting FASST initiative (described below);
- Directing the DOE’s new Foundation for Energy Security and Innovation (FESI) to prioritize establishing fellowships and public-private partnerships to support items (1) and (2), both financially and with human capital.
The total cost of the proposal, then, is estimated at between $130 million and $240 million. The potential return on this investment, though, is far higher. Moderate improvements to battery materials could drive tens or hundreds of billions of dollars in value. Discovery of a “holy grail” material, such as a room-temperature, ambient-pressure superconductor, could create trillions of dollars in value.
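As a quick check on the arithmetic, the range quoted above follows from summing the low and high estimates of the three costed pillars (assuming, as the list implies, that the fourth and fifth pillars carry no separate price tag):

```python
# Back-of-the-envelope check of the cost range quoted above (all figures in $ millions).
pillars = {
    "datasets": (100, 100),
    "foundation models": (10, 100),
    "pilot self-driving labs": (20, 40),
    # Pillars 4 and 5 are policy directives; no new appropriation is assumed here.
}
low = sum(lo for lo, hi in pillars.values())
high = sum(hi for lo, hi in pillars.values())
print(f"Estimated total: ${low}-{high} million")  # -> $130-240 million
```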
Creating Materials Science Foundation Model Datasets
Before a large materials science foundation model can be trained, vast datasets must be assembled. DOE, through its large network of scientific facilities including particle colliders, observatories, supercomputers, and other experimental sites, collects enormous quantities of data–but this, unfortunately, is only the beginning. DOE’s data infrastructure is out-of-date and fragmented between different user facilities. Data access and retention policies make sharing and combining different datasets difficult or impossible.
All of these policy and infrastructural decisions were made far before training large-scale foundation models was a priority. They will have to be changed to capitalize on the newfound opportunity of AI. Existing DOE data will have to be reorganized into formats and within technical infrastructure suited to training foundation models. In some cases, data access and retention policies will need to be relaxed or otherwise modified.
In other cases, however, highly sensitive data will need to be integrated in more sophisticated ways. A 2023 DOE report, recognizing the problems with DOE data infrastructure, suggests developing federated learning capabilities–an active area of research in the broader machine learning community–which would allow for data to be used for training without being shared. This would, the report argues, “allow access and connections to the information through access control processes that are developed explicitly for multilevel privacy.”
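For readers unfamiliar with the technique, the sketch below shows the core idea of federated learning in a toy, single-round form: each site fits a model on data it never shares, and only the fitted parameters are pooled. The three synthetic “facilities” and the least-squares model are illustrative stand-ins, not the DOE’s actual design.

```python
"""Illustrative single-round federated-averaging sketch with synthetic data."""
import numpy as np


def local_fit(X, y):
    # Ordinary least squares fit on one site's private data.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w


def federated_average(site_weights, site_sizes):
    # Weight each site's parameters by how much data it contributed.
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
sites = []
for n in (50, 80, 120):                       # three facilities with private datasets
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

weights = [local_fit(X, y) for X, y in sites]            # raw data never leaves each site
global_w = federated_average(weights, [len(y) for _, y in sites])
print(global_w)  # close to true_w, without pooling the raw data
```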
This work will require deep collaboration between data scientists, machine learning scientists and engineers, and domain-specific scientists. It is, by far, the least glamorous part of the process–yet it is the necessary groundwork for all progress to follow.
Building AI Foundation Models for Science
Fundamentally, AI is a sophisticated form of statistics. Deep learning, the broad approach that has undergirded all advances in AI over the past decade, allows AI models to uncover deep patterns in extremely complex datasets, such as all the content on the internet, the genomes of millions of organisms, or the structures of thousands of proteins and other biomolecules. Models of this kind are sometimes loosely referred to as “foundation models.”
Foundation models for materials science can take many different forms, incorporating various aspects of physics, chemistry, and even—for the emerging field of biomaterials—biology. Broadly speaking, foundation models can help materials science in two ways: inverse design and property prediction. Inverse design allows scientists to input a given set of desired characteristics (toughness, brittleness, heat resistance, electrical conductivity, etc.) and receive a prediction for what material might be able to achieve those properties. Property prediction is the opposite flow of information, inputting a given material and receiving a prediction of what properties it will have in the real world.
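The toy illustration below shows the two directions of information flow; the materials, property names, and numbers are invented for illustration, and the lookup table stands in for what a trained foundation model would have learned.

```python
"""Toy illustration of property prediction vs. inverse design (all values invented)."""

# A tiny stand-in for what a trained foundation model would have learned.
KNOWN = {
    "material-A": {"conductivity": 0.9, "heat_resistance": 0.2},
    "material-B": {"conductivity": 0.3, "heat_resistance": 0.8},
    "material-C": {"conductivity": 0.7, "heat_resistance": 0.7},
}


def predict_properties(material):
    """Property prediction: material in, predicted properties out."""
    return KNOWN[material]


def inverse_design(target):
    """Inverse design: desired properties in, best-matching candidate out."""
    def mismatch(props):
        return sum((props[k] - v) ** 2 for k, v in target.items())
    return min(KNOWN, key=lambda m: mismatch(KNOWN[m]))


print(predict_properties("material-B"))
print(inverse_design({"conductivity": 0.8, "heat_resistance": 0.6}))  # -> material-C
```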
DOE has already proposed creating AI foundation models for materials science as part of its Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative. While this initiative contains numerous other AI-related science and technology objectives, supporting it would enable the creation of new foundation models, which can in turn be used to support the broader materials science work.
DOE’s long history of stewarding America’s national labs makes it the best-suited home for this proposal. DOE labs and other DOE sub-agencies have decades of data from particle accelerators, nuclear fusion reactors, and other specialized equipment rarely seen in other facilities. These labs have performed hundreds of thousands of experiments in physics and chemistry over their lifetimes, and over time, DOE has created standardized data collection practices. AI models are defined by the data that they are trained with, and DOE has some of the most comprehensive physics and chemistry datasets in the country—if not the world.
The foundation models created by DOE should be made available to scientists. The extent of that availability should be determined by the sensitivity of the data used to train the model and other potential risks associated with broad availability. If, for example, a model was created using purely internal or otherwise sensitive DOE datasets, it might have to be made available only to select audiences with usage monitored; otherwise, there is a risk of exfiltrating sensitive training data. If there are no such data security concerns, DOE could choose to fully open source the models, meaning their weights and code would be available to the general public. Regardless of how the models themselves are distributed, the fruits of all research enabled by both DOE foundation models and self-driving labs should be made available to the academic community and broader public.
Scaling Self-Driving Labs
Self-driving labs are largely automated facilities that allow robotic equipment to autonomously conduct scientific experiments with human supervision. They are well-suited to relatively simple, routine experiments—the exact kind involved in much of materials science. Recent advancements in robotics have been driven by a combination of cheaper hardware and enhanced AI models. While fully autonomous humanoid robots capable of automating arbitrary manual labor are likely years away, it is now possible to configure facilities to automate a broad range of scripted tasks.
Many experiments in materials science involve making iterative tweaks to variables within the same broad experimental design. For example, a grad student might tweak the ratios of the elements that constitute the material, or change the temperature at which the elements are combined. These are highly automatable tasks. Furthermore, by allowing multiple experiments to be conducted in parallel, self-driving labs allow scientists to rapidly accelerate the pace at which they conduct their work.
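A minimal sketch of the kind of scripted parameter sweep described here, assuming two experimental “knobs” (element ratio and synthesis temperature); the run_experiment stub stands in for commands that would be sent to real robotic equipment, and the specific values are arbitrary.

```python
"""Sketch of a scripted, parallel parameter sweep a self-driving lab could run."""
from itertools import product
from concurrent.futures import ThreadPoolExecutor

ratios = [(1, 1, 1), (2, 1, 1), (1, 2, 1)]   # candidate element ratios (illustrative)
temperatures_c = [600, 700, 800]             # synthesis temperatures (illustrative)


def run_experiment(ratio, temp_c):
    # In a real lab this would command robotic synthesis and characterization;
    # here it just records the requested conditions.
    return {"ratio": ratio, "temp_c": temp_c, "status": "queued"}


conditions = list(product(ratios, temperatures_c))   # 9 experiments from 2 knobs
with ThreadPoolExecutor(max_workers=4) as pool:      # the parallelism the lab provides
    results = list(pool.map(lambda c: run_experiment(*c), conditions))
print(len(results), "experiments dispatched")
```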
Creating a successful large-scale self-driving lab will require collaboration with private sector partners, particularly robot manufacturers and the creators of AI models for robotics. Fortunately, the United States has many such firms. Therefore, DOE should initiate a competitive bidding process for the robotic equipment that will be housed within its self-driving labs. Because DOE has experience in building lab facilities, it should directly oversee the construction of the self-driving lab itself.
The United States already has several small-scale self-driving labs, primarily led by investments at DOE National Labs. The small size of these projects, however, makes it difficult to achieve the economies of scale that are necessary for self-driving labs to become an enduring part of America’s scientific ecosystem.
AI creates additional opportunities to expand automated materials science. Frontier language and multi-modal models, such as OpenAI’s GPT-4o, Anthropic’s Claude 3.5, and Google’s Gemini family, have already been used to ideate scientific experiments, including directing a robotic lab in the fully autonomous synthesis of a known chemical compound. These models would not operate with full autonomy. Instead, scientists would direct the inquiry and the design of the experiment, with the models autonomously suggesting variables to tweak.
Modern frontier models have substantial knowledge in all fields of science and can hold much of the academic literature relevant to a specific niche of materials science within their context windows. This combination means that they have—when paired with a trained human—the scientific intuition to iteratively tweak an experimental design. They can also write the code necessary to direct the robots in the self-driving lab. Finally, they can write summaries of the experimental results—including the failures. This is crucial because, given the constraints on their time, scientists today often report only their successes in published writing. Yet failures are just as important to document publicly to avoid other scientists duplicating their efforts.
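A rough sketch of this human-directed, model-assisted loop follows; llm_suggest_tweak and summarize are hypothetical stand-ins for calls to a frontier model, and no real vendor API is shown.

```python
"""Sketch of a human-directed, LLM-assisted experiment loop (all stubs hypothetical)."""


def llm_suggest_tweak(design, results_so_far):
    # Stand-in: a real system would prompt a frontier model with the design and
    # prior results, then parse its suggested change to one variable.
    return {"temperature_c": design["temperature_c"] + 25}


def summarize(results):
    # Stand-in for an LLM-written summary that records failures as well as successes.
    failures = [r for r in results if not r["success"]]
    return f"{len(results)} runs, {len(failures)} failures documented"


design = {"temperature_c": 650, "ratio": (1, 1, 1)}   # chosen by the human scientist
results = []
for _ in range(3):
    results.append({"design": dict(design), "success": design["temperature_c"] >= 700})
    design.update(llm_suggest_tweak(design, results))  # model proposes, human approves
print(summarize(results))
```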
Once constructed, this self-driving lab infrastructure can be a resource made available as another DOE user facility to materials scientists across the country, much as DOE supercomputers are today. DOE already has a robust process and infrastructure in place to share in-demand resources among different scientists, again underscoring why the Department is well-positioned to lead this endeavor.
Conclusion
Materials science faces a grand challenge, yet an even grander opportunity. Room-temperature, ambient-pressure superconductors—permitted by the laws of physics but as-yet undiscovered—could transform consumer electronics, clean energy, transportation, and even space travel. New forms of magnets could enable a wide range of cutting-edge technologies, such as nuclear fusion reactors. High-performance ceramics could improve reusable rockets and hypersonic aircraft. The opportunities are limitless.
With a coordinated effort led by DOE, the federal government can demonstrate to Americans that scientific innovation and technological progress can still deliver profound improvements to daily life. It can pave the way for a new approach to science firmly rooted in modern technology, creating an example for other areas of science to follow. Perhaps most importantly, it can make Americans excited about the future—something that has been sorely lacking in American society in recent decades.
AI is a radically transformative technology. Contemplating that transformation in the abstract almost inevitably leads to anxiety and fear. There are legislative proposals, white papers, speeches, blog posts, and tweets about using AI to positive ends. Yet merely talking about positive uses of AI is insufficient: the technology is ready, and the opportunities are there. Now is the time to act.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Compared to “cloud labs” for biology and chemistry, the risks associated with self-driving labs for materials science are low. In a cloud lab equipped with nucleic acid synthesis machines, for example, genetic sequences need to be screened carefully to ensure that they are not dangerous pathogens—a nontrivial task. There are not analogous risks for most materials science applications.
However, given the dual-use nature of many novel materials, any self-driving lab would need to have strong cybersecurity and intellectual property protections. Scientists using self-driving lab facilities would need to be carefully screened by DOE—fortunately, this is an infrastructure DOE possesses already for determining access to its supercomputing facilities.
Not all materials involve easily repeatable, and hence automatable, experiments for synthesis and characterization. But many important classes of materials do, including:
- Thin films and coatings
- Photonic and optoelectronic materials such as perovskites (used for solar panels)
- Polymers and monomers
- Battery and energy storage materials
Over time, additional classes of materials can be added.
DOE can and should be creative and resourceful in finding additional resources beyond public funding for this project. Collaborations on both foundation AI models and scaling self-driving labs between DOE and private sector AI firms can be uniquely facilitated by DOE’s new Foundation for Energy Security and Innovation (FESI), a private foundation created by DOE to support scientific fellowships, public-private partnerships, and other key mission-related initiatives.
Some private firms have recently demonstrated the promise of this approach. In late 2023, Google DeepMind unveiled GNoME, a materials science model that identified hundreds of thousands of potentially stable new materials (though they still need to be experimentally validated). Microsoft’s MatterGen model pushed in a similar direction. Both models were developed in collaboration with DOE National Labs (Lawrence Berkeley in the case of DeepMind, and Pacific Northwest in the case of Microsoft).
Promoting American Resilience Through a Strategic Investment Fund
Critical minerals, robotics, advanced energy systems, quantum computing, biotechnology, shipbuilding, and space are some of the resources and technologies that will define the economic and security climate of the 21st century. However, the United States is at risk of losing its edge in these technologies of the future. For instance, China manufactures the vast majority of the world’s batteries, processes most of its critical minerals, and has successfully launched a quantum communications satellite. The implications are enormous: the U.S. relies on its qualitative technological edge to fuel productivity growth, improve living standards, and maintain the existing global order. Indeed, the Inflation Reduction Act (IRA) and CHIPS Act were largely reactive moves to shore up atrophied manufacturing capabilities in the American battery and semiconductor industries, requiring hundreds of billions in outlays to catch up. In an ideal world, critical industries would be sufficiently funded well in advance to avoid economically costly catch-up spending.
However, many of these technologies are characterized by long timelines, significant capital expenditures, and low and uncertain profit margins, presenting major challenges for private-sector investors who are required by their limited partners (capital providers such as pension funds, university endowments, and insurance companies) to underwrite to a certain risk-adjusted return threshold. This stands in contrast to technologies like artificial intelligence and pharmaceuticals: While both are also characterized by large upfront investments and lengthy research and development timelines, the financial payoffs are far clearer, incentivizing the private sector to play a leading role in commercialization. For technologies in economically and geopolitically vital industries such as lithium processing and chips, this issue is most acute in the “valley of death,” when companies require scale-up capital for early commercialization: the capital required is too large for traditional venture capital, yet too risky for traditional project finance.
The United States needs a strategic investment fund (SIF) to shepherd promising technologies in nationally vital sectors through the valley of death. An American SIF is not intended to provide subsidies, pick political winners or losers, or subvert the role of private capital markets. On the contrary, its role would be to “crowd in” capital by managing risk in ways that no private or philanthropic entity has the capacity to do. In doing so, an SIF would ensure that the U.S. maintains an edge in critical technologies, promoting economic dynamism and national security in an agile, cost-efficient manner.
Challenges
The Need for Private Investment
A handful of resources and technologies, some of which have yet to be fully characterized, have the potential to play an outsized role in the future economy. Most of these key technologies have meaningful national security implications.
Since ChatGPT’s release in November 2022, artificial intelligence (AI) has experienced a commercial renaissance that has captured the public’s imagination and huge sums of venture dollars, as evidenced by OpenAI’s October 2024 $6.5 billion round at a $150 billion pre-money valuation. However, AI is not the only critical resource or technology that will power the future economy, and many of those critical resources and technologies may struggle to attract the same level of private investment. Consider the following:
- To meet climate goals, the world needs to increase production of lithium by nearly 475%, rare earths by 100%, and nickel by 60% through 2035. For defense applications, rare earths are especially important; the construction of one F-35, for instance, uses 920 pounds of rare earth materials.
- The conflict in Ukraine has unequivocally demonstrated the value of low-cost drones on the battlefield. However, drones also have significant commercial applications, including safety and last-mile delivery. Reducing production and component costs could make a meaningful difference.
- Quantum technology has the potential to exponentially expand compute power, which can be used to simulate biological pathways, accelerate materials development, and process vast amounts of financial data. However, quantum technology can also be used to break existing encryption technologies and to safeguard communications. China launched its first quantum communications satellite in 2016.
Few sectors receive the consistent venture attention that software, most recently AI, has attracted over the last 18 months. However, this does not make other sectors unbackable or unimportant; on the contrary, technologies that increase mineral recovery yields or make drone engines cheaper should receive sufficient support to get to scale. While private-sector capital markets have supported the development of many important industries, they are not perfect and may miss important opportunities due to information asymmetries and externalities.
Overcoming the Valley of Death
Many strategically important technologies are characterized by high upfront costs and low or uncertain margins, which tends to dissuade investment by private-sector organizations at key inflection points, namely, the “valley of death.”
By their nature, innovative technologies are complex and highly uncertain. However, some factors make future economic value—and therefore financeability—more difficult to ascertain than others. For example, innovative battery technologies that enable long-term storage of energy generated from renewables would greatly improve the economics of utility-scale solar and wind projects. However, this requires production at scale in the face of potential competition from low-cost incumbents. In addition, there is the element of scientific risk itself, as well as the question of customer adoption and integration. There are many good reasons why technologies and companies that seem feasible, economical, and societally valuable do not succeed.
These dynamics result in lopsided investment allocations. In the early stages of innovation, venture capital is available to fund startups with the promise of outsized return driven partially by technological hype and partially by the opportunity to take large equity stakes in young companies. At the other end of the barbell, private equity and infrastructure capital are available to mature companies seeking an acquisition or project financing based on predictable cash flows and known technologies.
However, gaps appear in the middle as capital requirements increase (often by an order of magnitude) to support the transition to early commercialization. This phenomenon is called the “valley of death” as companies struggle to raise the capital they need to get to scale given the uncertainties they face.
Shortcomings of Federal Subsidies
While the federal government has provided loans and subsidies in the past, its programs remain highly reactive and require large amounts of funding.
Aside from asking investors to take on greater risk and lower returns, there are several tools in place to ameliorate the valley of death. The IRA is one such example: It appropriated some $370 billion for climate-related spending through a range of instruments, including tax subsidies for renewable energy production, low-cost loans through organizations such as the Department of Energy’s Loan Programs Office (LPO), and discretionary grants.
On the other hand, there are major issues with this approach. First, funding is spread out across many calls for funding that tend to be slow, opaque, and costly. Indeed, it is difficult to keep track of available resources, funding announcements, and key requirements—just try searching for a comprehensive, easy-to-understand list of opportunities.
More importantly, these funding mechanisms are simply expensive. The U.S. does not have the financial capacity to support an IRA or CHIPS Act for every industry, nor should it go down that route. While one could argue that these bills reflect the true cost of achieving the stated policy aims of energy transition or securing the semiconductor supply chain, it is also the case that both knowledge (engineering expertise) and capital (manufacturing facilities) underpin these technologies. Allowing these networks to atrophy created greater costs down the road, costs that could have been prevented by targeted investments at the right points of development.
The Future Is Dynamic
The future is not perfectly knowable, and new technological needs may arise that change priorities or solve previous problems. Therefore, agility and constant re-evaluation are essential.
Technological progress is not static. Take the concept of peak oil: For decades, many of the world’s most intelligent geologists and energy forecasters believed that the world would quickly run out of oil reserves as the easiest-to-extract resources were depleted. In reality, technological advances in chemistry, surveying, and drilling enabled hydraulic fracturing (fracking) and horizontal drilling, creating access to “unconventional reserves” that substantially increased fossil fuel supply.
Fracking greatly expanded fossil fuel production in the U.S., increasing resource supply, securing greater energy independence, and facilitating the transition from coal to natural gas, whose expansion has proved to be a helpful bridge towards renewable energy generation. This transition would not have been possible without a series of technological innovations—and highly motivated entrepreneurs—that arose to meet the challenge of energy costs.
To meet the challenges of tomorrow, policymakers need tools that provide them with flexible and targeted options as well as sufficient scale to make an impact on technologies that might need to get through the valley of death. However, they need to remain sufficiently agile so as not to distort well-functioning market forces. This balance is challenging to achieve and requires an organizational structure, authorizations, and funding mechanisms that are sufficiently nimble to adapt to changing technologies and markets.
Opportunity
Given these challenges, it seems unlikely that solutions that rely solely on the private sector will bridge the commercialization gap in a number of capital-intensive strategic industries. On the other hand, existing public-sector tools, such as grants and subsidies, are too costly to implement at scale for every possible externality and are generally retrospective rather than forward-looking. The government can be an impactful player in bridging the innovation gap, but it needs to do so cost-efficiently.
An SIF is a promising potential solution to the challenges posed above. By its nature, an SIF would have a public mission focused on strategic technologies crossing the valley of death by using targeted interventions and creative financing structures that crowd in private investors. This would enable the government to more sustainably fund innovation, maintain a light touch on private companies, and support key industries and technologies that will define the future global economic and security outlook.
Plan of Action
Recommendation 1. Shepherd technologies through the valley of death.
While the SIF’s investment managers are expected to make the best possible returns, this is secondary to the overarching public policy goal of ensuring that strategically and economically vital technologies have an opportunity to get to commercial scale.
The SIF is meant to crowd in capital such that we achieve broader societal gains—and eventually, market-rate returns—enabled by technologies that would not have survived without timely and well-structured funding. This creates tension between two competing goals: The SIF needs to act as if it intends to make market-rate returns, or else there is the potential for moral hazard and complacency. However, it also has to be willing to accept below-market returns, or even lose some of its principal, in the service of broader market and ecosystem development.
Thus, it needs to be made explicitly clear from the beginning that an SIF intends to achieve market-rate returns by catalyzing strategic industries but is not mandated to do so. One way to do this is to adopt a 501(c)(3) structure with a loose affiliation with a department or agency, similar to that of In-Q-Tel. Excess returns could either be recycled into the fund or distributed to taxpayers.
The SIF should adapt the practices, structures, and procedures of established private-sector funds. It should have a standing investment committee made up of senior stakeholders across various agencies and departments (expanded upon below). Its day-to-day operations should be conducted by professionals who provide a range of experiences, including investing, engineering and technology, and public policy across a spectrum of issue areas.
In addition, the SIF should develop clear underwriting criteria and outputs for each investment. These include, but are not limited to, identifying the broader market and investment thesis, projecting product penetration, and developing potential return scenarios based on different permutations of outcomes. More critically, each investment needs to create a compelling case for why the private sector cannot fund commercialization on its own and why public catalytic funding is essential.
Recommendation 2. The SIF should have a permanent authorization to support innovation under the Department of Commerce.
The SIF should be affiliated with the Department of Commerce but work closely with other departments and agencies, including the Department of Energy, Department of Treasury, Department of Defense, Department of Health and Human Services, National Science Foundation, and National Economic Council.
Strategic technologies do not fall neatly into one sector and cut across many customers. Siloing funding in different departments misses the opportunity to capture funding synergies and, more importantly, to develop priorities built through information sharing and consensus. Enter the Department of Commerce. In addition to administering the National Institute of Standards and Technology, Commerce has a strong history of working across agencies, as with the CHIPS Act.
Similar arguments can also be made for the Treasury, and it may even be possible to have Treasury and Commerce work together to manage an SIF. They would be responsible for bringing in subject matter experts (for example, from the Department of Energy or National Science Foundation) to provide specific inputs and arguments for why specific technologies need government-based commercialization funding and at what point such funding is appropriate, acting as an honest broker to allocate strategic capital.
To be clear: The SIF is not intended to supersede any existing funding programs (e.g., the Department of Energy’s Loan Programs Office or the National Institutes of Health’s ARPA-H) that provide fit-for-purpose funding to specific sectors. Rather, an SIF is intended to fill in the gaps and coordinate with existing programs while providing more creative financing structures than are typically available from government programs.
Recommendation 3. Create a clear innovation roadmap.
Every two years, the SIF should develop or update a roadmap of strategically important industries, working closely with private, nonprofit, and academic experts to define key technological and capability gaps that merit public sector investment.
The SIF’s leaders should be empowered to make decisions on areas to prioritize but have the ability to change and adapt as the economic environment evolves. Although there is a long list of industries that an SIF could potentially support, resources are not infinite, and a critical mass of investment is required to ensure adequate resourcing. One acute challenge is that the required critical mass is not perfectly known in advance and changes depending on the technology and sector. However, this is precisely what the strategic investment roadmap is supposed to solve for: It should provide an even-handed assessment of the likely capital requirements and of where the SIF is best suited to provide funding compared to other agencies or the private sector.
Moreover, given the ever-changing nature of technology, the SIF should frequently reassess its understanding of key use cases and their broader economic and strategic importance. Thus, after its initial development, the roadmap should be updated every two years to ensure that its takeaways and priorities remain relevant. This is no different from documents such as the National Security Strategy, which are updated every two to four years; in fact, the SIF’s planning documents should flow seamlessly into the National Security Strategy.
To provide a sufficiently broad set of perspectives, the government should draw on the expertise and insights of outside experts to develop its plan. Existing bodies, such as the President’s Council of Advisors on Science and Technology and the National Quantum Initiative, provide some of the consultative expertise required. However, the SIF should also stand up subject-matter-specific advisory bodies where a need arises (for example, on critical minerals and mining) and work internally to set specific investment areas and priorities.
Recommendation 4. Limit the SIF to financing.
The government should not be an outsized player in capital markets. As such, the SIF should receive no governance rights (e.g., voting or board seats) in the companies that it invests in.
Although the SIF aims to catalyze technological and ecosystem development, it should be careful not to dictate the future of specific companies. Thus, the SIF should avoid information rights beyond financial reporting. Typical board decks and stockholder updates include updates on customers, technologies, personnel matters, and other highly confidential and specific pieces of information that, if made public through government channels, would play a highly distortionary role in markets. Given that the SIF is primarily focused on supporting innovation through a particularly tricky stage to navigate, the SIF should receive the least amount of information possible to avoid disrupting markets.
Recommendation 5. Focus on providing first-loss capital.
First-loss capital should be the primary mechanism by which the SIF supports new technologies, providing greater incentives for private-sector funders to support early commercialization while providing a means for taxpayers to directly participate in the economic upside of SIF-supported technologies.
Consider the following stylized example to demonstrate a key issue in the valley of death. A promising clean technology company, such as a carbon-free cement or long-duration energy storage firm, is raising $100 million of capital for facility expansion and first commercial deployment. To date, the company has likely raised $30–50 million of venture capital to enable tech development, pilot the product, and grow the team’s engineering, R&D, and sales departments.
However, this company faces a fundraising dilemma. Its funding requirements are now too big for all but the largest venture capital firms, which may or may not want to invest in projects and companies like these. On the other hand, this hypothetical company is not mature enough for a private equity buyout, nor is it a good candidate for typical project-based debt, which requires several commercial proof points in order to provide sufficient risk reduction for an investor whose upside is relatively limited. Hence, the “valley of death.”
First-loss capital is an elegant solution to this issue: A prospective funder could commit to equal (pro rata) terms as other investors, except that this first-loss funder is willing to use its investment to make other investors whole (or at least partially offset losses) in the event that the project or company does not succeed. In this example, a first-loss funder would commit to $33.5 million of equity funding (roughly one-third of the company’s capital requirement). If the company succeeds, the first-loss funder makes the same returns as the other investors. However, if the company is unable to fully meet these obligations, the first-loss funder’s $33.5 million would be used to pay the other investors back (the other $66.5 million that was committed). This creates a floor on losses for the non-first-loss investors: Rather than being at risk of losing 100% of their principal, they are at risk of losing 50% of their principal.
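The arithmetic behind this stylized example, written out below; the total-loss scenario is assumed purely for illustration, and all figures are in millions of dollars.

```python
# Worked version of the stylized numbers above (all figures in $ millions).
total_raise = 100.0
first_loss = 33.5                              # SIF commitment on pro rata terms
other_investors = total_raise - first_loss     # 66.5 committed by private investors

# Total-loss scenario: the SIF's stake is used to partially repay other investors.
recovered = first_loss
loss = other_investors - recovered
print(f"Private investors lose {loss:.1f} of {other_investors:.1f} "
      f"({loss / other_investors:.0%} of principal instead of 100%)")
# -> roughly 50% of principal, the floor described above
```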
The creation of a first-loss layer has a meaningful impact on the risk-reward profile for non-first-loss investors, who now have a floor on returns (in the case above, half their investment). By expanding the acceptable potential loss ratio, growth equity capital (or another appropriate instrument, such as project finance) can fill the rest, thereby crowding in capital.
From a risk-adjusted returns standpoint, this is not a free lunch for the government or taxpayers. Rather, it is intended to be a capital-efficient way of supporting the private-sector ecosystem in developing strategically and economically vital technologies. In other words, it leverages the power of the private sector to solve externalities while providing just enough support to get them to the starting line in the first place.
Conclusion
Many of tomorrow’s strategically important technologies face critical funding challenges in the valley of death. Due to their capital intensity and uncertain outcomes, existing financing tools are largely falling short in the critical early commercialization phases. However, a nimble, properly funded SIF could bridge key gaps while allowing the private sector to do most of the heavy lifting. The SIF would require buy-in from many stakeholders and well-defined sources of funding, but these can be solved with the right mandates, structures, and pay-fors. Indeed, the stakes are too high, and the consequences too dire, to not get strategic innovation right in the 21st century.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Put simply, there needs to be an entity that is actually willing and able to absorb lower returns, or even lose some of its principal, in the service of building an ecosystem. Even if the “median” outcome is a market-rate return of capital, the risk-adjusted returns are in effect far lower because the probability of a zero outcome for first-loss providers is substantially nonzero. Moreover, it’s not clear exactly what the right probability estimate should be; therefore, it requires a leap of faith that no economically self-interested private market actor would be willing to take. While some quasi-social-sector organizations can play this role (for example, Bill Gates’s Breakthrough Energy Ventures for climate tech), their capacity is finite, and there is no guarantee that such a vehicle will appear for every sector of interest. Therefore, a publicly funded SIF is essential to bridging the valley of death.
No, the SIF would not always have to use first-loss structures. However, it is the most differentiated structure that is available to the U.S. government; otherwise, a private-sector player is likely able—and better positioned—to provide funding.
The SIF should be able to use the full range of instruments, including project finance, corporate debt, convertible loans, and equity capital, and all combinations thereof. The instrument of choice should be up to the judgment of the applicant and the SIF investment team. The choice of instrument is distinct from the first-loss feature: Regardless of the financial instrument used, the SIF’s investment would be used to buffer other investors against potential losses.
The target return rate should be commensurate with that of the instrument used. For example, mezzanine debt should target 13–18% IRR, while equity investments should aim for 20–25% IRR. However, because of the increased risk of capital loss to the SIF given its first loss position, the effective blended return should be expected to be lower.
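A rough, single-period illustration of why the blended return sits below the headline target follows; the 20% target, the 60% success probability, and the full-loss assumption are arbitrary inputs chosen for illustration, not figures from this memo.

```python
# Illustrative expected-return arithmetic for a first-loss equity position.
# This is a rough single-period approximation that treats the IRR target as a
# one-period return; all inputs are assumptions, not figures from the memo.
target_irr = 0.20        # headline target for an equity instrument
p_success = 0.6          # assumed probability the company reaches scale
loss_if_failure = 1.0    # first-loss position: assume full principal loss on failure

expected_return = p_success * target_irr - (1 - p_success) * loss_if_failure
print(f"Expected blended return: {expected_return:.0%}")  # well below the 20% target
```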
The SIF should be prepared to lose capital on each individual investment and, as a blended portfolio, to have negative returns. While it should underwrite such that it will achieve market-rate returns if successful in crowding in other capital that improves the commercial prospects of technologies and companies in the valley of death, the SIF has a public goal of ecosystem development for strategic domains. Therefore, lower-than-market-rate returns, and even some principal degradation, are acceptable but should be avoided as much as possible through the prudence of the investment committee.
By and large, the necessary public protections already exist through export controls and CFIUS review, which requires regulatory approval for foreign ownership stakes with voting rights above 25% in companies holding critical technologies. The SIF can also enact controls around information rights (e.g., customer lists, revenue, product roadmaps) such that it has a veto over which parties can receive such information. However, given its catalytic mission, the SIF does not need board seats or representation and should focus on ensuring that critical technologies and assets are properly protected.
In most private investment firms, the investment committee is made up of the most senior individuals in the fund. These individuals can cross asset classes, sectors of expertise, and even functional backgrounds. The committee thus represents a breadth of expertise and experience that, when brought together, enables intellectual honesty and the application of collective wisdom and judgment to the opportunity at hand.
Similarly, the SIF’s investment committee could include the head of the fund and representatives from various departments and agencies in alignment with its strategic priorities. The exact size of the investment committee should be defined by these priorities, but approval should be driven by consensus, and unanimity (or near unanimity) should be expected for investments that are approved.
Given the fluid nature of investment opportunities, the committee should be convened whenever needed to evaluate a potential opportunity. However, given the generally long process times for investments discussed above (6–12 months), the investment committee will typically have been briefed multiple times before a formal decision is made.
Check sizes can be flexible to the needs of the investment opportunity. However, as an initial guiding principle, first-loss capital should likely make up 20–35% of capital invested so as to require private-sector investors to have meaningful skin in the game. Depending on the fundraise size, this could imply investments of $25 million to $100 million.
Target funding amounts should be set over multiyear timeframes, but the annual appropriations process implies that there will likely be a set cap in any given year. In order to meet the needs of the market, there should be mechanisms that enable emergency draws, up to a cap (e.g., 10% of the annual target funding amount, which will need to be “paid for” by reducing future outlays).
An economically efficient way to fund a government program in support of a positive externality is a Pigouvian tax on negative externalities (such as carbon). However, carbon taxes are as politically unappealing as they are economically sensible and would need to be packaged with other policy goals that could attract support for such legislation. Notwithstanding the questionable economic wisdom of tariffs in general, some 56% of voters support a 10% tax on all imports and 60% tariffs on China. Rather than deploying tariffs harmfully, policymakers could put them to more productive use. One such proposal is a carbon import tariff that taxes imports based on the carbon emitted in the production and transportation of goods into the U.S.
The U.S. would not be a first mover: in fact, the European Union has already implemented a similar mechanism called the Carbon Border Adjustment Mechanism (CBAM), which is focused on heavy industry, including cement, iron and steel, aluminum, fertilizers, electricity, and hydrogen, with chemicals and polymers potentially to be included after 2026. At full rollout in 2030, the CBAM is expected to generate roughly €10–15 billion of tax revenue. Tax receipts of a similar size could be used to fund an SIF or, if Congress authorizes an upfront amount, could be used to nullify the incremental deficit over time.
The EU’s CBAM phased in its reporting requirements over several years. Through July 2024, companies were allowed to use default amounts per unit of production without an explanation as to why actual data was not used. Until January 1, 2026, companies can make estimates for up to 20% of goods; thereafter, the CBAM requires reporting of actual quantities and embedded greenhouse gas emissions.
The U.S. could use a similar phase-in, although given the challenges of carbon reporting, it could allow companies to use the lower of actual, verified emissions or per-unit estimates. Under a carbon innovation fee regime, exporters and countries could apply for exemptions on a case-by-case basis to the Department of Commerce, which could approve them in line with other goals (e.g., economic development in a region).
The SIF could also be funded by repurposing other funding streams and elevating their strategic importance. Potential candidates include the Small Business Innovation Research (SBIR) program and the State Small Business Credit Initiative (SSBCI), which could play a bigger role if moved under the SIF umbrella. For example, the SBIR program, whose latest reporting data is as of FY2019, awarded $3.3 billion in funding that year and $54.6 billion over its lifespan. Moreover, the SSBCI, a $10 billion fund that already provides loan guarantees and other instruments similar to those described above, could be used to support technologies that fall within the purview of the SIF.
Congress could also assess reallocating dollars towards an SIF from spending reforms that are likely inevitable given the country’s fiscal position. In 2023, the Congressional Budget Office (CBO) published a report highlighting potential solutions for reducing the budget deficit. Some potential solutions, like establishing caps on Medicaid federal spending, while fiscally promising, seem unlikely to pass in the near future. However, others are more palatable, especially those that eliminate loopholes or ask higher-income individuals to pay their fair share.
For instance, applying Social Security payroll taxes to earnings above the $250,000 threshold has the potential to raise up to $1.2 trillion over 10 years; while this could be calibrated, an SIF would take only a small fraction of the taxes raised. In addition, the CBO found that federal matching funds for Medicaid frequently flow back to healthcare providers in the form of higher reimbursement rates; eliminating what are effectively kickbacks could reduce the deficit by up to $525 billion over 10 years.