Establishing White House Initiative for STEM Educational Excellence

Our national security and competitive edge rely on science and technological innovation. Now more than ever, every child deserves access to a well-rounded, high-quality education that builds the critical thinking and problem-solving skills needed to access science and technology jobs and contribute to solving global challenges. That education must include science, technology, engineering, and mathematics (STEM); for the purposes of this memo, STEM also encompasses computer science, data science, AI, and other emerging technology fields. STEM education and workforce development must be at the forefront of the next administration's agenda.

The next administration's Department of Education (ED) has an incredible opportunity to help our nation's youth, America's current and future workforce, succeed and thrive. Students, families, and communities want and need more STEM learning experiences to realize the American Dream, yet too often they cannot access them.

In the FY25 President's Budget, ED called for four full-time employees to focus on STEM in the Office of the Deputy Secretary, yet the outgoing Administration failed to support this imperative. We hope that the new Administration funds and staffs it.

Challenge and Opportunity

Now more than ever, our economy and national defense call for every child to have access to a well-rounded, high-quality education that sets them up for success and provides them with the critical thinking and problem-solving skills that will enable them to access economic opportunities and contribute to solving global challenges. A well-rounded education must include science, technology, engineering, and mathematics (STEM), and especially STEM learning experiences, both in and out of school, that build technical skills through hands-on, problem- and project-based learning.

The Invest in America package of bills (CHIPS + Science, the Bipartisan Infrastructure Law, and the Inflation Reduction Act) has created decades of employment opportunities that may go unfilled in some regions of the nation unless we significantly invest in providing a strong, well-rounded STEM education to every child.

The future workforce is not the only reason that ED must prioritize STEM teaching as part of their agenda. Kids and families are voting with their feet. Chronic absenteeism, defined as missing 10 or more days of school, has more than doubled since pre-pandemic rates. We must modernize STEM learning opportunities and ensure they are rigorous, relevant and aligned to what kids and families want.

Most teens report math or science as their favorite subject in school. Seventy-five percent of Gen Z youth are interested in STEM occupations. Two-thirds of parents think computer science should be required learning in schools. According to the Afterschool Alliance, more than 7 in 10 parents (72 percent) report that STEM and computer science learning opportunities were important in their selection of an afterschool program, up 19 percentage points from 2014 (53 percent).

Simply put, students want more STEM opportunities and families want more STEM opportunities for their children.

Yet we know that despite students' interest in STEM and natural proclivity toward problem solving, too many students lack access to STEM learning experiences both in and out of school. Strategic industries ranging from aerospace to communications and agriculture to energy clamor for, and compete unproductively over, talented new employees. The federal government owes it to them to take every action to meet their employment needs, prominently including casting a wider net for talent across the nation's entire young population.

For example, NAEP results consistently show that students of color, students eligible for free and reduced-price lunch, students with disabilities, and English language learners are not well served by our current system. On the 2018 NAEP Technology and Engineering Literacy (TEL) Assessment, 13% of 8th grade students with disabilities scored at or above proficient, compared to 53% of students without a disability. Fifty-nine percent of 8th grade White students scored at or above proficient, compared to 23% of Black students, 31% of Hispanic students, and 29% of American Indian/Alaska Native students. On the same assessment, 30% of students eligible for free or reduced-price lunch scored at or above proficient, compared to 60% of students not eligible. These gaps also play out in math and science: just 6% of Black 12th graders, 9% of Hispanic 12th graders, 13% of American Indian/Alaska Native 12th graders, 7% of 12th graders with disabilities, and 1% of English Learners leave high school proficient in science. The reality in math is just as stark, with only 8% of Black 12th graders, 11% of Hispanic 12th graders, 9% of American Indian/Alaska Native 12th graders, 7% of 12th graders with disabilities, and 3% of English Learners finishing high school proficient in mathematics. The United States can ill afford to half-heartedly serve the educational needs of so many of our students in this era of great demand. Serving them is a profound responsibility of the federal government.

While progress is being made to provide more students with high-quality STEM learning during out-of-school time, we know that access is unequal. Children from lower-income families are often the ones missing out on these engaging and enriching opportunities. It is estimated that 25 million children would like to attend an afterschool program but cannot access one, let alone a STEM-focused program.

We must change this reality quickly. STEM education must be an urgent priority for the federal government. Fortunately, the federal government has built significant infrastructure to better align federal resources behind this issue: the Federal Coordination on STEM (FC-STEM) effort aligns agencies to support the implementation of key STEM priorities.

While STEM has been prioritized across federal agencies, it has not been a consistent priority at ED. ED should be leading. The Department must establish a structure that persists across administrations and can deploy the Department's financial resources, technical assistance, and other tools to help states, districts, and their partners increase access, participation, and success in STEM learning both in and out of school.


Plan of Action

There are two logical paths forward to ensuring STEM is a priority at ED, both of which require establishing dedicated STEM capacity at ED.

First, the new administration could sign an inaugural executive order, similar to this example but modified for STEM, that establishes a new White House Initiative for STEM Education and Workforce (WHISEW) to stand alongside other White House initiatives and elevate STEM across the Department. This initiative would establish a STEM team at ED and could also name a list of advisors so that ED can benefit from the expertise of non-government organizations.

Alternatively, a new Congress could appropriate the necessary funds to ensure adequate staffing and direct ED to establish the STEM team as requested in the FY25 President's Budget.

Given the ever-changing nature of STEM education and workforce needs, the STEM structure at ED should be a lean and nimble hub of talent that can staff up or down depending on high-priority issue areas such as math, data science, computational thinking, AI, and other emergent technologies.

Whatever structure is established, in the next administration the team should focus on the following four priorities:

Regardless of pathway, the cost to the Department is estimated to be equivalent to four full-time employees, one of whom would be a political appointee (the Executive Director) and three of whom would be GS-15 civil servants. This staff could be bolstered by STEM field leaders serving through fellowships, reimbursed by ED or funded through partner institutions. The total cost of this investment is estimated at ~$2.5M annually.

Conclusion

A relatively modest investment (~$2.5M annually) has the potential to impact generations of children, families and their communities by increasing access, participation and success in STEM learning experiences both in and out-of-school. The time is now to establish a permanent and consistent focus on STEM education and workforce at the U.S. Department of Education.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
How much will this proposal cost?

It is estimated that to support a small team (3 FTEs plus Fellows) it would cost approximately $5M annually. This cost would cover salary, benefits, travel, technology needs and also a modest events and programming budget. 

Why should ED play a larger role in STEM Education?

The US Department of Education’s mission is to “promote student achievement and preparation for global competitiveness by fostering educational excellence and ensuring equal access.” STEM education is critical for supporting students’ global competitiveness.  As outlined above, STEM education is not equally accessible to all students. The Department has a critical role to play in supporting STEM education and closing persistent access gaps in STEM.

Why a White House Initiative versus staffing a team or office within the Office of the Deputy Secretary or Office of the Undersecretary?

STEM education cuts across PreK-12 and higher education priorities. Existing White House Initiatives have experience coordinating efforts across the Department and across student learning experiences from cradle to career. Standing up a new White House Initiative would enable a more holistic and crosscutting view of STEM at the Department, and it would support further coordination among the other White House Initiatives. STEM is a priority in the governing documents of many current White House Initiatives, and a White House STEM Initiative with the same reporting structure would support collaboration and coherence.

How could STEM E3 be sustained across administrations?

One of the critical structural elements of STEM E3 is that the Executive Director of the Initiative is a politically appointed role, enabling each administration to select someone who aligns with its priorities and campaign promises. There should be at least one career staff member to provide continuity and sustainability across administrations. The flexible capacity of Fellows or IPAs allows the team to bring in expertise aligned to the priorities of each administration.

Modernizing AI Analysis in Education Contexts

The 2022 release of ChatGPT and subsequent foundation models sparked a generative AI (GenAI) explosion in American society, driving rapid adoption of AI-powered tools in schools, colleges, and universities nationwide. Education technology was one of the first applications used to develop and test ChatGPT in a real-world context. A recent national survey indicated that nearly 50% of teachers, students, and parents use GenAI chatbots in school, and over 66% of parents and teachers believe that GenAI chatbots can help students learn more and faster. While this innovation is exciting and holds tremendous promise to personalize education, educators, families, and researchers are concerned that AI-powered solutions may not be equally useful, accurate, and effective for all students, in particular students from minoritized populations. Bias may be addressed as the technology matures; however, to ensure that students are not harmed as these tools become more widespread, it is critical for the Department of Education to provide guidance that helps education decision-makers evaluate AI solutions during procurement, supports EdTech developers in detecting and mitigating bias in their applications, and develops new fairness methods so that these solutions serve the students with the most to gain from our educational systems. Creating this guidance will require the Department of Education to declare this issue a priority and to resource an independent organization with the expertise needed to deliver these services.

Challenge and Opportunity

Known Bias and Potential Harm

There are many examples of AI-based systems introducing more bias into an already-biased system. One example with widely varying results for different student groups is the use of GenAI tools to detect AI-generated text as a form of plagiarism. Liang et al. found that several GPT-based plagiarism checkers frequently identified the writing of students for whom English is not their first language as AI-generated, even though their work was written before ChatGPT was available. The same errors did not occur with text written by native English speakers. However, in a publication by Jiang (2024), no bias against non-native English speakers was found in distinguishing human-authored essays from ChatGPT-generated essays written in response to analytical writing prompts from the GRE, an example of how thoughtful AI tool design and representative sampling in the training set can achieve fairer outcomes and mitigate bias.
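The audit pattern behind findings like Liang et al.'s can be sketched as a comparison of false-positive rates across groups. The detector outputs and group data below are entirely illustrative, not drawn from the cited studies:

```python
# Hypothetical audit of an AI-text detector for bias against non-native
# English speakers. All data are invented for illustration; a real audit
# would use actual detector predictions on essays known to be human-written.

def false_positive_rate(labels, predictions):
    """Fraction of human-written essays (label 0) flagged as AI-generated (1)."""
    human = [p for l, p in zip(labels, predictions) if l == 0]
    return sum(human) / len(human) if human else 0.0

# 0 = human-written, 1 = AI-generated; every essay below is human-written.
native_labels    = [0] * 10
native_preds     = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # detector flags 1 of 10
nonnative_labels = [0] * 10
nonnative_preds  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # detector flags 6 of 10

fpr_native = false_positive_rate(native_labels, native_preds)
fpr_nonnative = false_positive_rate(nonnative_labels, nonnative_preds)

# A large gap in false-positive rates is the disparity pattern reported
# for non-native English speakers.
print(f"native FPR: {fpr_native:.2f}, non-native FPR: {fpr_nonnative:.2f}")
print(f"FPR gap: {fpr_nonnative - fpr_native:.2f}")
```

The point of the sketch is that the check requires only labeled essays and detector outputs per group, which is exactly the kind of evaluation data a procurement-stage audit could demand from a vendor.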

Beyond bias, researchers have raised additional concerns about the overall efficacy of these tools for all students; understanding differing results for subpopulations and potential instances of bias is a critical part of deciding whether teachers should use these tools in classrooms. For AI-based tools to be usable in high-stakes educational contexts such as testing, detecting and mitigating bias is critical, particularly when the consequences of being wrong are so high, as for students from minoritized populations who may not have the resources to recover from an error (e.g., failing a course or being prevented from graduating).

Another example of algorithmic bias, from before the widespread emergence of GenAI, illustrates the potential harms: the Wisconsin Dropout Early Warning System. This AI-based tool was designed to flag students who may be at risk of dropping out of school; however, an analysis of its predictions found that the system disproportionately flagged African American and Hispanic students as likely to drop out when most of these students were not at risk of dropping out. When teachers learn that one of their students is at risk, this may change how they approach that student, which can cause further negative treatment and consequences, creating a self-fulfilling prophecy and denying that student the educational opportunities and confidence they deserve. These are only two of many examples of the consequences of using systems with underlying bias, and they demonstrate how critical it is to conduct fairness analysis before these systems are used with actual students.

Existing Guidance on Fair AI & Standards for Education Technology Applications

Guidance for Education Technology Applications

Given the harms that algorithmic bias can cause in educational settings, there is an opportunity to provide national guidelines and best practices that help educators avoid them. The Department of Education is already responsible for protecting student privacy and provides guidelines via the Every Student Succeeds Act (ESSA) Evidence Levels to evaluate the quality of EdTech solution evidence. The Office of Educational Technology, through the support of a private non-profit organization (Digital Promise), has developed guidance documents for teachers and administrators, and another for education technology developers (U.S. Department of Education, 2023, 2024). In particular, “Designing for Education with Artificial Intelligence” includes guidance for EdTech developers, including an entire section called “Advancing Equity and Protecting Civil Rights” that describes algorithmic bias and suggests that “Developers should proactively and continuously test AI products or services in education to mitigate the risk of algorithmic discrimination” (p. 28). While this is a good overall guideline, the document is critically insufficient to help developers actually conduct these tests.

Similarly, the National Institute of Standards and Technology has released a publication on identifying and managing bias in AI. While this publication highlights some areas of the development process and several fairness metrics, it does not provide specific guidelines for using these metrics, nor is it exhaustive. Finally, demonstrating the interest of industry partners, the EDSAFE AI Alliance, a philanthropically funded alliance representing a diverse group of educational technology companies, has created guidance in the form of the 2024 SAFE (Safety, Accountability, Fairness, and Efficacy) Framework. Within the Fairness section of the framework, the authors highlight the importance of using fair training data, monitoring for bias, and ensuring the accessibility of any AI-based tool. But again, this framework does not provide specific actions that education administrators, teachers, or EdTech developers can take to ensure these tools are fair and not biased against specific populations. The risk to these populations, and the existing efforts, demonstrate the need for further work to develop new approaches that can be used in the field.

Fairness in Education Measurement

As AI becomes increasingly used in education, the field of educational measurement has begun creating a set of analytic approaches for finding algorithmic bias, many of which are based on existing approaches to uncovering bias in educational testing. One common tool is Differential Item Functioning (DIF), which checks that test questions are fair for all students regardless of their background. For example, it ensures that native English speakers and students learning English have an equal chance to succeed on a question if they have the same level of knowledge. When differences are found, this indicates that a student's performance on that question is not based on their knowledge of the content.
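One classical DIF procedure, the Mantel-Haenszel common odds ratio, can be sketched as follows. Operational testing programs use more elaborate versions with significance tests and effect-size classifications; the counts below are invented for illustration:

```python
# Minimal Mantel-Haenszel DIF sketch for a single test item. Examinees are
# stratified by total test score; within each stratum we compare how often
# the reference and focal groups answer the item correctly. A common odds
# ratio far from 1.0 suggests the item functions differently for the two
# groups even at matched ability levels.

def mantel_haenszel_odds_ratio(strata):
    """strata: list of 2x2 counts (ref_correct, ref_wrong, foc_correct, foc_wrong),
    one tuple per total-score stratum. Returns the MH common odds ratio."""
    num = 0.0
    den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den else float("inf")

# Illustrative counts at three total-score levels:
# (reference correct, reference incorrect, focal correct, focal incorrect)
strata = [
    (30, 10, 29, 11),   # low scorers
    (45, 5, 44, 6),     # middle scorers
    (50, 2, 49, 3),     # high scorers
]
odds = mantel_haenszel_odds_ratio(strata)
print(f"MH common odds ratio: {odds:.2f}")  # values far from 1.0 flag DIF
```

Matching on total score is what separates DIF from a raw pass-rate comparison: it asks whether equally knowledgeable students fare equally well, which is the same question the memo argues should be asked of AI scoring systems.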

While DIF checks have been used for several decades as a best practice in standardized testing, a comparable process in the use of AI for assessment purposes does not yet exist. There also is little historical precedent indicating that for-profit educational companies will self-govern and self-regulate without a larger set of guidelines and expectations from a governing body, such as the federal government. 

We are at a critical juncture as school districts begin adopting AI tools with minimal guidance or guardrails, and all signs point to an increase of AI in education. The US Department of Education has an opportunity to take a proactive approach to ensuring AI fairness through strategic programs of support for school leadership, developers in educational technology, and experts in the field. It is important for the larger federal government to support all educational stakeholders under a common vision for AI fairness while the field is still at the relative beginning of being adopted for educational use. 

Plan of Action 

To address this situation, the Department of Education’s Office of the Chief Data Officer should lead development of a national resource that provides direct technical assistance to school leadership, supports software developers and vendors of AI tools in creating quality tech, and invests resources to create solutions that can be used by both school leaders and application developers. This office is already responsible for data management and asset policies, and provides resources on grants and artificial intelligence for the field. The implementation of these resources would likely be carried out via grants to external actors with sufficient technical expertise, given the rapid pace of innovation in the private and academic research sectors. Leading the effort from this office ensures that these advances are answering the most important questions and can integrate them into policy standards and requirements for education solutions. Congress should allocate additional funding to the Department of Education to support the development of a technical assistance program for school districts, establish new grants for fairness evaluation tools that span the full development lifecycle, and pursue an R&D agenda for AI fairness in education. While it is hard to provide an exact estimate, similar existing programs currently cost the Department of Education between $4 and $30 million a year. 

Action 1. The Department of Education Should Provide Independent Support for School Leadership Through a Fair AI Technical Assistance Center (FAIR-AI-TAC) 

School administrators are hearing about the promise and concerns of AI solutions in the popular press, from parents, and from students. They are also being bombarded by education technology providers with new applications of AI within existing tools and through new solutions. 

These busy school leaders do not have time to learn the details of AI and bias analysis, nor do they have the technical background required to conduct deep technical evaluations of fairness within AI applications. Leaders are forced to either reject these innovations or implement them and expose their students to significant potential risk with the promise of improved learning. This is not an acceptable status quo.  

To address these issues, the Department of Education should create an AI Technical Assistance Center (the Center) that is tasked with providing direct guidance to state and local education leaders who want to incorporate AI tools fairly and effectively. The Center should be staffed by a team of professionals with expertise in data science, data safety, ethics, education, and AI system evaluation. Additionally, the Center should operate independently of AI tool vendors to maintain objectivity.

There is precedent for this type of technical support. The U.S. Department of Education’s Privacy Technical Assistance Center (PTAC) provides guidance related to data privacy and security procedures and processes to meet FERPA guidelines; they operate a help desk via phone or email, develop training materials for broad use, and provide targeted training and technical assistance for leaders. A similar kind of center could be stood up to support leaders in education who need support evaluating proposed policy or procurement decisions.  

This Center should provide a structured consulting service offering varying levels of expertise based on individual stakeholders' needs and on the potential impact on learners of the system or tool being evaluated; this should range from basic AI literacy to active support in choosing technological solutions for educational purposes. The Center should partner with external organizations to develop a certification system for high-quality AI educational tools that have passed a series of fairness checks. Creating a fairness certification (operationalized by third-party evaluators) would make it much easier for school leaders to recognize and adopt fair AI solutions that meet student needs.

Action 2. The Department of Education Should Provide Expert Services, Data, and Grants for EdTech Developers 

There are many educational technology developers with AI-powered innovations. Even when well-intentioned, some of these tools do not achieve their desired impacts or may be unintentionally unsafe due to a lack of processes and tests for fairness and safety.

Educational Technology developers generally operate under significant constraints when incorporating AI models into their tools and applications. Student data is often highly detailed and deeply personal, potentially containing financial, disability, and educational status information that is currently protected by FERPA, which makes it unavailable for use in AI model training or testing. 

Developers need safe, legal, and high-quality datasets that they can use to test for bias, as well as appropriate bias evaluation tools. There are several promising examples of these types of applications and new approaches to data security, such as the recently awarded NSF SafeInsights project, which allows analysis without disclosing the underlying data. In addition, philanthropically funded organizations such as the Allen Institute for AI have released LLM evaluation tools that could be adapted and provided to education technology developers for testing. A vetted set of evaluation tools, along with more detailed technical resources and instructions for how to use them, would encourage developers to incorporate bias evaluations early and often. Currently, there are very few market incentives or existing requirements that push developers to invest the necessary time or resources into this type of fairness analysis. Thus, the government has a key role to play here.

The Department of Education should also fund a new grant program that tasks grantees with developing a robust and independently validated third-party evaluation system that checks for fairness violations and biases throughout the model development process from pre-processing of data, to the actual AI use, to testing after AI results are created. This approach would support developers in ensuring that the tools they are publishing meet an agreed-upon minimum threshold for safe and fair use and could provide additional justification for the adoption of AI tools by school administrators.

Action 3. The Department of Education Should Develop Better Fairness R&D Tools with Researchers 

There is still no consensus on best practices for how to ensure that AI tools are fair. As AI capabilities evolve, the field needs an ongoing vetted set of analyses and approaches that will ensure that any tools being used in an educational context are safe and fair for use with no unintended consequences.

The Department of Education should lead the creation of a working group or task force composed of subject matter experts from education, educational technology, educational measurement, and the larger AI field to identify the state of the art in existing fairness approaches for education technology and assessment applications, with a focus on modernized conceptions of identity. This proposed task force would be an inter-organizational group including representatives from several federal offices, such as the Office of Educational Technology and the Chief Data Office, as well as prominent experts from industry and academia. An initial convening could be held alongside leading national conferences that already attract thousands of attendees conducting cutting-edge education research (such as the American Educational Research Association and the National Council on Measurement in Education).

The working group's mandate should include creating a set of recommendations for federal funding to advance research on evaluating AI educational tools for fairness and efficacy. This research agenda would likely span multiple agencies, including NIST, the Institute of Education Sciences of the U.S. Department of Education, and the National Science Foundation. There are existing models for funding early-stage research and development with applied approaches, including the IES "Accelerate, Transform, Scale" programs, which integrate learning sciences theory with efforts to scale it through applied education technology programs, and generative AI research centers that have the existing infrastructure and mandates to conduct this type of applied research.

Additionally, the working group should recommend the selection of a specialized group of researchers who would contribute ongoing research into new empirically-based approaches to AI fairness that would continue to be used by the larger field. This innovative work might look like developing new datasets that deliberately look for instances of bias and stereotypes, such as the CrowS-Pairs dataset. It may build on current cutting edge research into the specific contributions of variables and elements of LLM models that directly contribute to biased AI scores, such as the work being done by the AI company Anthropic. It may compare different foundation LLMs and demonstrate specific areas of bias within their output. It may also look like a collaborative effort between organizations, such as the development of the RSM-Tool, which looks for biased scoring. Finally, it may be an improved auditing tool for any portion of the model development pipeline. In general, the field does not yet have a set of universally agreed upon actionable tools and approaches that can be used across contexts and applications; this research team would help create these for the field.

Finally, the working group should recommend policies and standards that would incentivize vendors and developers working on AI education tools to adopt fairness evaluations and share their results.

Conclusion

As AI-based tools continue to be used for educational purposes, there is an urgent need for new approaches to evaluating these solutions for fairness that include modern conceptions of student belonging and identity. This effort should be led by the Department of Education, through the Office of the Chief Data Officer, given the technical nature of the services and the relationship with sensitive data sources. While the Chief Data Officer should provide direction and leadership, partnering with external organizations through federal grant processes would provide the capacity needed to fulfill the mandate described in this memo. As we move into an age of widespread AI adoption, AI tools for education will be increasingly used in classrooms and homes. It is therefore imperative that robust fairness approaches are deployed before a new tool is used, both to protect our students and to protect developers and administrators from potential litigation, loss of reputation, and other negative outcomes.


Frequently Asked Questions
What are some examples of what is currently being done to ensure fairness in AI applications for educational purposes?

When AI is used to grade student work, fairness is evaluated by comparing the scores assigned by AI to those assigned by human graders across different demographic groups. This is often done using statistical metrics, such as the standardized mean difference (SMD), to detect any additional bias introduced by the AI. A common benchmark for SMD is 0.15; values above this threshold suggest potential machine bias relative to human scores. However, there is a need for more guidance on how to address cases where SMD values exceed this threshold.


In addition to SMD, other metrics like exact agreement, exact + adjacent agreement, correlation, and Quadratic Weighted Kappa are often used to assess the consistency and alignment between human and AI-generated scores. While these methods provide valuable insights, further research is needed to ensure these metrics are robust, resistant to manipulation, and appropriately tailored to specific use cases, data types, and varying levels of importance.
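A minimal sketch of these agreement metrics follows, assuming integer score scales. The quadratic weighted kappa here uses the standard confusion-matrix formulation; the example labels are hypothetical.

```python
import numpy as np

def exact_agreement(human, machine):
    human, machine = np.asarray(human), np.asarray(machine)
    return float(np.mean(human == machine))

def adjacent_agreement(human, machine):
    # Exact + adjacent: scores that match within one point
    human, machine = np.asarray(human), np.asarray(machine)
    return float(np.mean(np.abs(human - machine) <= 1))

def quadratic_weighted_kappa(human, machine, n_classes):
    # 1 = perfect agreement, 0 = chance level, < 0 = worse than chance
    h = np.asarray(human, dtype=int)
    m = np.asarray(machine, dtype=int)
    observed = np.zeros((n_classes, n_classes))
    np.add.at(observed, (h, m), 1)          # confusion matrix of score pairs
    expected = np.outer(np.bincount(h, minlength=n_classes),
                        np.bincount(m, minlength=n_classes)) / len(h)
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

Because the quadratic weights penalize large disagreements more heavily than small ones, this metric is better suited to ordinal score scales than raw exact agreement.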

What are some concerns about using AI in education for students with diverse and overlapping identities?

Existing approaches to demographic post hoc analysis of fairness assume that there are two discrete populations that can be compared: for example, students from African-American families vs. those not from African-American families, students from an English language learner family background vs. those who are not, and other known family characteristics. In practice, however, people do not experience these discrete identities. Since at least the 1980s, contemporary sociological theories have emphasized that a person’s identity is contextual, hybrid, and fluid. One current approach to identity that integrates concerns of equity and has been applied to AI is “intersectional identity” theory. This approach has begun to yield promising new methods that bring contemporary conceptions of identity into automated evaluations of AI fairness. Measuring all interactions between demographic variables yields subgroups too small to analyze reliably; these interactions can instead be prioritized using theory, design principles, or more advanced statistical techniques (e.g., dimensional data reduction).
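The small-sample problem is easy to quantify: crossing even a few demographic attributes multiplies the number of subgroups, shrinking the data available for each fairness comparison. The attribute names and counts below are hypothetical.

```python
from math import prod

# Hypothetical demographic attributes and their number of levels
levels = {"race_ethnicity": 5, "gender": 3, "english_learner": 2, "income_band": 4}

n_students = 2000
n_cells = prod(levels.values())        # intersectional subgroups: 5*3*2*4 = 120
avg_per_cell = n_students / n_cells    # average students per subgroup

print(n_cells, round(avg_per_cell, 1))  # → 120 16.7
```

With only ~17 students per cell on average (and many cells far emptier), per-subgroup statistics like SMD become unreliable, which is why prioritization or dimensionality reduction is needed.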

Elevate and Strengthen the Presidential Management Fellows Program

Founded in 1977, the Presidential Management Fellows (PMF) program is intended to be “the Federal Government’s premier leadership development program for advanced degree holders across all academic disciplines” with a mission “to recruit and develop a cadre of future government leaders from all segments of society.” The challenges facing our country require a robust pipeline of talented and representative rising leaders across federal agencies. The PMF program has historically been a leading source of such talent. 

The next Administration should leverage this storied program to reinvigorate recruitment for a small, highly-skilled management corps of upwardly-mobile public servants and ensure that the PMF program retains its role as the government’s premier pipeline for early-career talent. It should do so by committing to placing all PMF Finalists in federal jobs (rather than only half, as has been common in recent years), creating new incentives for agencies to engage, and enhancing user experience for all PMF stakeholders. 

Challenge and Opportunity

Bearing the Presidential Seal, the Presidential Management Fellows (PMF) Program is the Federal Government’s premier leadership development program for advanced degree holders across all academic disciplines. Appropriately for a program created in the President’s name, the application process for the PMF program is rigorous and competitive. Following a resume and transcript review, two assessments, and a structured interview, the Office of Personnel Management (OPM) selects and announces PMF Finalists. 

Selection as a Finalist is only the first step in a PMF applicant’s journey to a federal position. After they are announced, PMF Finalists have 12 months to find an agency posting by completing a second round of applications to specific positions that agencies have designated as eligible for PMFs. OPM reports that “over the past ten years, on average, 50% of Finalists obtain appointments as Fellows.” Most Finalists who are placed are not appointed until late in the eligibility period: halfway through the 2024 eligibility window, only 85 of 825 finalists (10%) had been appointed to positions in agencies.

For applicants and universities, this reality can be dispiriting and damage the reputation of the program, especially for those not placed. A yearlong waiting period that ends without a job offer for about half of Finalists belies the magnitude of the accomplishment of rising to the top of such a competitive pool of candidates eager to serve their country. Additionally, Finalists who are not placed in a timely manner are likelier to pursue job opportunities outside of federal service. At a moment when the federal government faces an extraordinary talent crisis, with an aging workforce and large-scale retirements, the PMF program must better serve its purpose as a trusted source of high-level, early-career talent.

The current program design also affects the experience of agency leaders—such as hiring managers and Chief Human Capital Officers (CHCOs)—as they consider hiring PMFs. When agencies hire a PMF for a 2-year placement, they cover the candidate’s salary plus an $8,000 fee to OPM’s PMF program office to support its operations. Agencies consider hiring PMF Finalists with the knowledge that the PMF has the option to complete a 6-month rotational assignment outside of their hiring unit. These factors may create the impression that hiring a PMF is “costlier” than other staffing options.

Despite these challenges, the reasons for agencies to invest in the PMF program remain numerous:

The PMF is still correctly understood as the government’s premier onramp program for early career managerial talent. With some thoughtful realignment, it can sustain and strengthen this role and improve experience for all its core stakeholders.  

Plan of Action

The next Administration should take a direct hand in supporting the PMF Program. As the President’s appointee overseeing the program, the OPM Director should begin by publicly setting an ambitious placement percentage goal and then driving the below reforms to advance that goal. 

Recommendation 1. Increase the Finalist placement rate by reducing the Finalist pool.

The status quo reveals misalignment between the pool of PMF Finalists and demand for PMFs across government. This may be in part due to the scale of demand, but is also a consequence of PMF candidates and finalists with ever-broader skill sets, which makes placement more challenging and complex. Along with the 50% placement rates, the existing imbalance between finalists and placements is reflected in the decision to contract the finalist pool from 1100 in 2022 to 850 in 2023 and 825 in 2024. The next Administration should adjust the size of the Finalist pool further to ensure a near-100% placement rate and double down on its focus on general managerial talent to simplify disciplinary matching. Initially, this might mean shrinking the pool from the 825 advanced in 2024 to 500 or even fewer. 

The core principle is simple: PMF Finalists should be a valuable resource for which agencies compete. There should be (modestly) fewer Finalists than realistic agency demand, not more. Critically, this change would not aim to reduce the number of PMFs serving in government. Rather, it seeks to sustain the current numbers while dramatically reducing the number of Finalists not placed and creating a healthier set of incentives for all parties.

When the program can reliably boast high placement rates, then the Federal government can strategize on ways to meaningfully increase the pool of Fellows and use the program to zero in on priority hard-to-hire disciplines outside of general managerial talent.

Recommendation 2. Attach a financial incentive to hiring and retaining a PMF while improving accountability. 

To underscore the singular value of PMFs and their role in the hiring ecosystem, the next Administration should attach a financial incentive to hiring a PMF. 

Because of the $8,000 placement fee, PMFs are seen as a costlier route than other sources of talent. A financial incentive to hire PMFs would reverse this dynamic. The next Administration might implement a large incentive of $50,000 per Fellow, half of which would be granted when a Fellow is placed and the other half to be granted when the Fellow accepts a permanent full-time job offer in the Federal government. This split payment would signal an investment in Fellows as the future leaders of the federal government. 

Assuming an initial cohort of 400 placed Fellows at $50,000 each, OPM would require $20 million plus operating costs for the PMF program office. To secure funds, the Administration could seek appropriations, repurpose funds through normal budget channels, or pursue an agency pass-the-hat model like the financing of the Federal Executive Board and Hiring Experience program offices. 
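The arithmetic behind the $20 million figure can be sketched as follows. The 400-Fellow cohort and $50,000 incentive come from this memo; the conversion rate is a placeholder assumption.

```python
def incentive_budget(placed, conversion_rate, incentive=50_000):
    # Half of the incentive is paid when a Fellow is placed, half when the
    # Fellow accepts a permanent full-time federal offer.
    at_placement = placed * incentive / 2
    at_conversion = placed * conversion_rate * incentive / 2
    return at_placement + at_conversion

# Upper bound: every placed Fellow eventually converts to a permanent role
print(incentive_budget(400, conversion_rate=1.0))  # → 20000000.0
```

Because the second payment is contingent on conversion, actual outlays would fall below $20 million in any year where some Fellows leave before accepting permanent offers.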

To parallel this incentive, the Administration should also implement accountability measures to ensure agencies more accurately project their PMF needs by assigning a cost to failing to place some minimum proportion (perhaps 70%) of the Finalists projected in a given cycle. This would avoid leaving too many Finalists unplaced. Agencies that fail to meet the threshold should have reduced or delayed access to the PMF pool in subsequent years.

Recommendation 3. Build a Stronger Support Ecosystem 

In support of these implementation changes, the next Administration should pursue a series of actions to elevate the program and strengthen the PMF ecosystem. 

Even if the Administration pursues the above recommendations, some Finalists would remain unpaired. The PMF program office should embrace the role of talent concierge for a smaller, more manageable cohort of yet-unpaired Finalists, leveraging relationships across the government (including with PMF alumni and the Presidential Management Alumni Association (PMAA)) and OPM’s position as the government’s strategic talent lead to encourage agencies to consider specific PMF Finalists in a bespoke way. The Federal government should also consider ways to privilege applications from unplaced Finalists who meet the criteria for a specific posting.

To strengthen key PMF partnerships in agencies, the Administration should elevate the role of PMF Coordinators beyond “other duties as assigned” to a GS-14 “PMF Director.” With new incentives to encourage placement and consistent strategic orientation from agency partners, agencies will be in a better position to project their placement needs by volume and role and hire PMF Finalists who meet them. PMF Coordinators would have explicit performance measures that reflect ownership over the success of the program.

The Administration must commit and sustain senior-level engagement—in the White House and at the senior levels of OMB, OPM, and in senior agency roles including Deputy Secretaries, Assistant Secretaries for Management, and Chief Human Capital Officers—to drive forward these changes. It must seize key leverage points throughout the budget and strategic management cycle, including OPM’s Human Capital Operating Plan process, OMB’s Strategic Reviews process, and the Cross-Agency Priority Goal setting rhythms. And it must sustain focus, recognizing that these new design elements may not succeed in their first cycle, and should provide support for experimentation and innovation.

Current PMF Program Compared to Proposed Reform

Size of Finalist Pool
  Status quo: 800-1100
  Proposed change: 400-500

Placement Rate
  Status quo: ~50%
  Proposed change: Target 100%, achieve 80-90%

Total Placements
  Status quo: 400-550
  Proposed change: 320-450

Number of Unplaced Finalists
  Status quo: 400-550
  Proposed change: <100

Financial Model
  Status quo: Agencies carry salary and benefits and pay a premium of $8,000 to OPM in cost recovery to fund the program office.
  Proposed change: Each Fellow carries a financial incentive encouraging speedy placement; the program office and incentive are funded centrally.

Experience for Finalists
  Status quo: Frustrating waits are typical; many hundreds of potential public servants are left unplaced; the experience of being a Finalist does not always reflect the magnitude of the accomplishment.
  Proposed change: Finalists are a valuable, scarce commodity; they have more potential matches with agencies and experience shorter waits.

Experience for Agencies
  Status quo: A large pool of Finalists is difficult to navigate; agencies harbor concerns about the quality of Fellows waiting for placement; there is little urgency to act; PMFs are seen as one talent pool among many; program coordination is often an “other duty as assigned.”
  Proposed change: A smaller pool is easier to navigate; the finalist pool is of even higher quality; there is significant urgency to act to capture the financial incentive and meet talent needs; the role PMFs play in talent strategy is clearly understood; coordination and needs forecasting reside in a higher-graded, strategically oriented role.

Experience for Program Office
  Status quo: The cost-recovery model creates significant uncertainty in budgeting and operations planning; selections are difficult to make due to inconsistent agency need forecasting.
  Proposed change: The program office manages access to a valuable asset; with less “selling,” staff focus on bespoke pairing for a smaller number of unpaired applicants and on shaping each year’s finalist pool to reflect improved needs forecasts.

Conclusion

For decades, the PMF program has consistently delivered top-tier talent to the federal government. The past few years, however, have revealed a need for reform to improve the experience of PMF hopefuls and the agencies that stand to benefit from their skills. With a smaller Finalist pool, healthier incentives, and a more supportive ecosystem, agencies would compete for a subsidized pool of high-quality talent available at lower cost than alternative routes, and Fellows who clear the rigorous selection process would have far stronger assurance of a placement. If these reforms are successfully implemented, esteem for the government’s premier onramp for rising managerial talent will rise, reinforcing the perception that the Federal government is a leading and prestigious employer of our nation’s rising leaders.


Frequently Asked Questions
What is the role of the PMF rotation?

The PMF program is a 2-year placement with an optional 6-month rotation in another office within the appointing agency or another agency. The rotation is an important and longstanding design element of a program that aims to build a rising cohort of managerial talent with a broad purview. Because the current program requires that agencies cover a PMF’s full salary and benefits and pay OPM a placement fee, the 6-month rotation (a quarter of the fellowship) may act as a barrier to embracing PMF talent. Adding a significant subsidy would balance this concern.

How does shrinking the size of the Finalist pool enhance the program?

In the current program, OPM uses a rule of thumb to set the number of Finalists at approximately 80% of anticipated demand to minimize the number of unplaced Finalists. This prudent approach is reflected in shifting Finalist numbers in recent years: from 1100 in 2022 to 850 in 2023 and 825 in 2024. Despite these adjustments, placement rates have unfortunately remained near 50%. Agencies are failing to follow through on their projected demand for PMFs, which has unfortunate consequences for Finalists and presents management challenges for the PMF program office.


This reform proposal would take a large step by reducing the Finalist pool to well below stated demand (500 or fewer) and focusing on general managerial talent to simplify the pairing process. This would be, fundamentally, a temporary reset to raise placement rates and improve the experience of candidates, agencies, and the program management team. As placement rhythms strengthen along the lines described above, there is every reason for the program to grow.

Is a subsidy for PMF Finalists going to cost the government more money?

The subsidy proposed for placing a PMF candidate would not require a net increase in federal expenditures. In the status quo, all costs of the PMF program are borne by the government: agencies pay salaries and benefits, and pay a fee to OPM at the point of appointment. This proposal would surface and centralize these costs and create an agency incentive through the subsidy to hire PMFs, either by “recouping” funds collected from agencies through a pass-the-hat revolving fund or “capitalizing” on a central investment from another source. In either case, it would ensure that PMF Finalists are a scarce asset to be competed for, as the program was envisioned, and that the PMF program office manages thoughtful access to this asset for the whole government, rather than needing to be “selling” to recover operational costs.

A Quantitative Imaging Infrastructure to Revolutionize AI-Enabled Precision Medicine

Medical imaging, a non-invasive method to detect and characterize disease, stands at a crossroads. With the explosive growth of artificial intelligence (AI), medical imaging offers extraordinary potential for precision medicine yet lacks adequate quality standards to safely and effectively fulfill the promise of AI. Now is the time to create a quantitative imaging (QI) infrastructure to drive the development of precise, data-driven solutions that enhance patient care, reduce costs, and unlock the full potential of AI in modern medicine.

Medical imaging plays a major role in healthcare delivery and is an essential tool in diagnosing numerous health issues and diseases (e.g., oncology, neurology, cardiology, hepatology, nephrology, pulmonary, and musculoskeletal). In 2023, there were more than 607 million imaging procedures in the United States and, per a 2021 study, $66 billion (8.9% of the U.S. healthcare budget) is spent on imaging.  

Despite the importance and widespread use of medical imaging modalities such as magnetic resonance imaging (MRI), X-ray, ultrasound, and computed tomography (CT), imaging is rarely standardized or quantitative. This leads to unnecessary costs from repeat scans needed to achieve adequate image quality, and to unharmonized and uncalibrated imaging datasets that are often unsuitable for AI/machine learning (ML) applications. In the nascent yet rapidly expanding world of AI in medical imaging, a well-defined standards and metrology framework is required to establish robust imaging datasets for true precision medicine, thereby improving patient outcomes and reducing spiraling healthcare costs.

Challenge and Opportunity 

The U.S. spends more on healthcare than any other high-income country yet performs worse on measures of health and healthcare. Research has demonstrated that medical imaging could help save money for the health system, with every $1 spent on inpatient imaging resulting in approximately $3 of total savings in healthcare delivered. However, to generate healthcare savings and improve outcomes, rigorous quality assurance (QA)/quality control (QC) standards are required for true QI and data integrity.

Today, medical imaging suffers from two shortcomings that inhibit AI: 

Both introduce variability that impacts assessments, reduces the generalizability of, and confidence in, imaging test results, and compromises the data quality required for AI applications.

The growing field of QI, however, provides accurate and precise (repeatable and reproducible) quantitative-image-based metrics that are consistent across different imaging devices and over time. This benefits patients (fewer scans, biopsies), doctors, researchers, insurers, and hospitals and enables safe, viable development and use of AI/ML tools.  

Quantitative imaging metrology and standards are required as a foundation for clinically relevant and useful QI. A change from “this might be a stage 3 tumor” to “this is a stage 3 tumor” will affect how oncologists can treat a patient. Quantitative imaging also has the potential to remove the need for an invasive biopsy and, in some cases, provide valuable and objective information before even the most expert radiologist’s qualitative assessment. This can mean the difference between taking a nonresponding patient off a toxic chemotherapeutic agent or recognizing a strong positive treatment response before a traditional assessment. 

Plan of Action 

The incoming administration should develop and fund a Quantitative Imaging Infrastructure to provide medical imaging with a foundation of rigorous QA/QC methodologies, metrology, and standards—all essential for AI applications.

Coordinated leadership is essential to achieve such standardization. Numerous medical, radiological, and standards organizations support and recognize the power of QI and the need for rigorous QA/QC and metrology standards (see FAQs). Currently, no single U.S. organization has the oversight capabilities, breadth, mandate, or funding to effectively implement and regulate QI or a standards and metrology framework.

As set forth below, earlier successful approaches to quality and standards in other realms offer inspiration and guidance for medical imaging and this proposal:

Recommendation 1. Create a Medical Metrology Center of Excellence for Quantitative Imaging. 

Establishing a QI infrastructure would transform all medical imaging modalities and clinical applications. We recommend forming an autonomous organization, possibly appended to existing infrastructure, with the mandate and responsibility to develop and operationally support the implementation of quantitative QA/QC methodologies for medical imaging in the age of AI. Specifically, this fully integrated QI Metrology Center of Excellence would need federal funding to:

Once implemented, the Center could focus on self-sustaining approaches such as testing and services provided for a fee to users.

Similar programs and efforts have resulted in funding (public and private) ranging from $90 million (e.g., Pathogen Genomics Centers of Excellence Network) to $150 million (e.g., Biology and Machine Learning – Broad Institute). Importantly, implementing a QI Center of Excellence would augment and complement federal funding currently being awarded through ARPA-H and the Cancer Moonshot, as neither have an overarching imaging framework for intercomparability between projects.  

While this list is by no means exhaustive, any organization would need input and buy-in from:

International organizations also have relevant programs, guidance, and insight, including:

Recommendation 2. Implement legislation and/or regulation providing incentives for standardizing all medical imaging. 

The variability of current standard-of-care medical imaging (whether acquired across different sites or over a period of time) creates different “appearances.” This variability can result in different diagnoses or treatment response measurements, even though the underlying pathology for a given patient is unchanged. Real-world examples abound, such as one study that found 10 MRI studies over three weeks resulted in 10 different reports. This heterogeneity of imaging data can lead to a variable assessment by a radiologist (inter-reader variability), AI interpretation (“garbage-in-garbage-out”), or treatment recommendations from clinicians. Efforts are underway to develop “vendor-neutral sequences” for MRI and other methods (such as quantitative ground truth references, metrological standards, etc.) to improve data quality and ensure intercomparable results across vendors and over time. 

To do so, however, requires coordination among all original equipment manufacturers (OEMs) or legislation to incentivize standards. The 1992 Mammography Quality Standards Act (MQSA) provides an analogous roadmap. MQSA implemented rigorous standards for mammography; similar legislation focused on quality assurance of quantitative imaging, reducing or eliminating machine bias, and improved standards would reduce the need for repeat scans and improve datasets. 

In addition, regulatory initiatives could also advance quantitative imaging. For example, in 2022, the Food and Drug Administration (FDA) issued Technical Performance Assessment of Quantitative Imaging in Radiological Device Premarket Submissions, recognizing the importance of ground truth references with respect to quantitative imaging algorithms. A mandate requiring the use of ground truth reference standards would change standard practice and be a significant step to improving quantitative imaging algorithms.

Recommendation 3. Ensure a funded QA component for federally funded research using medical imaging. 

All federal medical research grant or contract awards should contain QA funds and require rigorous QA methodologies. The quality system aspects of such grants would fit the scope of the project; for example, a multiyear, multisite project would have a different scope than single-site, short-term work.

NIH spends the majority of its $48 billion budget on medical research. Projects include multiyear, multisite studies with imaging components. While NIH does have guidelines on research and grant funding (e.g., Guidance: Rigor and Reproducibility in Grant Applications), this guidance falls short in multisite, multiyear projects where clinical scanning is a component of the study.  

To the extent NIH-funded programs fail to include ground truth references where clinical imaging is used, the resulting data cannot be accurately compared over time or across sites. Lack of standardization and failure to require rigorous and reproducible methods compromises the long-term use and applicability of the funded research. 

By contrast, implementation of rigorous standards for QA/QC, standardization, etc. improves research in terms of reproducibility, repeatability, and ultimate outcomes. Further, confidence in imaging datasets enables the use of existing and qualified research in future NIH-funded work and/or imaging dataset repositories that are being leveraged for AI research and development, such as the Medical Imaging and Data Resource Center (MIDRC). (See also: Open Access Medical Imaging Repositories.)

Recommendation 4. Implement a Clinical Standardization Program (CSP) for quantitative imaging. 

While not focused on medical imaging, the CDC’s CSPs have been incredibly successful and “improve the accuracy and reliability of laboratory tests for key chronic biomarkers, such as those for diabetes, cancer, and kidney, bone, heart, and thyroid disease.” By way of example, the CSP for Lipids Standardization has “resulted in an estimated benefit of $338M at a cost of $1.7M.” Given the breadth of use of medical imaging, implementing such a program for QI would have even greater benefits.  

Although many people think of the images derived from clinical imaging scans as “pictures,” the pixel and voxel numbers that make up those images contain meaningful biological information. The objective biological information that is extracted by QI is conceptually the same as the biological information that is extracted from tissue or fluids by laboratory assay techniques. Thus, quantitative imaging biomarkers can be understood to be “imaging assays.” 
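To make the "imaging assay" idea concrete, consider the apparent diffusion coefficient (ADC), a standard quantitative MRI biomarker computed per voxel from two diffusion-weighted acquisitions via the mono-exponential model S_b = S_0 * exp(-b * ADC). The signal values below are synthetic, for illustration only.

```python
import numpy as np

def adc_map(s0, sb, b=1000.0):
    # Apparent diffusion coefficient from the mono-exponential model:
    #   S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S_0 / S_b) / b
    # with b in s/mm^2 and ADC in mm^2/s
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    return np.log(s0 / sb) / b

# Synthetic 2x2 "images": baseline signal and signal at b = 1000 s/mm^2
s0 = np.array([[1000.0, 900.0], [1100.0, 950.0]])
sb = s0 * np.exp(-1000.0 * 0.001)   # simulate a uniform ADC of 1e-3 mm^2/s
print(adc_map(s0, sb))              # each voxel recovers ~0.001 mm^2/s
```

The per-voxel number, not the picture, is the assay result, which is why its accuracy depends on calibrated acquisition in the same way a laboratory assay depends on calibrated reagents and instruments.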

The QA/QC standards that have been developed for laboratory assays can and should be adapted to quantitative imaging.  (See also regulations, history, and standards of the Clinical Laboratory Improvement Amendment (CLIA) ensuring quality laboratory testing.)

Recommendation 5. Implement an accreditation program and reimbursement code for quantitative imaging starting with qMRI.

The American College of Radiology currently provides basic accreditation for clinical imaging scanners and concomitant QA for MRI. These requirements, however, have been in place for nearly two decades and do not address many newer quantitative aspects (e.g., relaxometry and the apparent diffusion coefficient, ADC), nor do they account for the impact of image variability on effective AI use. Several new Current Procedural Terminology (CPT) codes focused on quantitative imaging have recently been adopted. An expansion of reimbursement codes for quantitative imaging could drive more widespread clinical adoption.

QI is analogous to the quantitative blood, serum and tissue assays done in clinical laboratories, subject to CLIA, one of the most impactful programs for improving the accuracy and reliability of laboratory assays. This CMS-administered mandatory accreditation program promulgates quality standards for all laboratory testing to ensure the accuracy, reliability, and timeliness of patient test results, regardless of where the test was performed. 

Conclusion

These five proposals provide a range of actionable opportunities to modernize the approach to medical imaging to fit the age of AI, data integrity, and precision patient health. A comprehensive, metrology-based quantitative imaging infrastructure will transform medical imaging through:

With robust metrological underpinnings and a funded infrastructure, the medical community will have confidence in QI data, unlocking powerful health insights that until now were only imaginable.


Frequently Asked Questions
Is scanner variability and lack of standardization really an issue?

Yes. Using MRI as an example, numerous articles, papers, and publications acknowledge that qMRI scanner output can vary between manufacturers, over time, and after software or hardware maintenance or upgrades.

What is in-vivo imaging metrology, and why is it the future?

With in-vivo metrology, measurements are performed on the “body of living subjects (human or animal) without taking the sample out of the living subject (biopsy).” True in-vivo metrology will enable the diagnosis or understanding of tissue state before a radiologist’s visual inspection. Such measurement capabilities are objective, in contrast to the subjective, qualitative interpretation by a human observer. In-vivo metrology will enhance and support the practice of radiology in addition to reducing unnecessary procedures and associated costs.

What are the essential aspects of QI?

Current digital imaging modalities provide the ability to measure a variety of biological and physical quantities with accuracy and reliability, e.g., tissue characterization, physical dimensions, temperature, body mass components, etc. However, consensus standards and corresponding certification or accreditation programs are essential to bring the benefits of these objective QI parameters to patient care. The CSP follows this paradigm as does the earlier CLIA, both of which have been instrumental in improving the accuracy and consistency of laboratory assays. This proposal aims to bring the same rigor to immediately improve the quality, safety and effectiveness of medical imaging in clinical care and to advance the input data needed to create, as well as safely and responsibly use, robust imaging AI tools for the benefit of all patients.

What are “phantoms,” or ground truth references, and why are they important?

Phantoms are specialized test objects used as ground truth references for quantitative imaging and analysis. NIST plays a central role in measuring and testing solutions for phantoms. Phantoms are used in ultrasound, CT, MRI, and other imaging modalities for routine QA/QC and machine testing. They are key to harmonizing and standardizing data and to improving the data quality needed for AI applications.
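A routine phantom QA check can be as simple as comparing each scan's measurement against the phantom's certified value. The 5% tolerance and the weekly T1 readings below are hypothetical, chosen only to illustrate the pattern.

```python
def drift_flags(measurements, reference, tolerance=0.05):
    # Flag readings that deviate from the certified reference by more than
    # the tolerance, signalling possible scanner drift or miscalibration.
    return [abs(m - reference) / reference > tolerance for m in measurements]

# Hypothetical weekly T1 readings (ms) of a phantom certified at 1000 ms
weekly_t1 = [998.0, 1003.0, 995.0, 1062.0, 1001.0]
print(drift_flags(weekly_t1, reference=1000.0))  # only the 1062 ms scan is flagged
```

In practice a site would log these checks longitudinally, so a flagged week triggers recalibration before patient data are acquired on a drifting scanner.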

What do you mean by “precision medicine”? Don’t we already have it?

Precision medicine is a popular term with many definitions and approaches spanning genetics, oncology, pharmacogenetics, etc. (See, e.g., NCI, FDA, NIH, National Human Genome Research Institute.) Generally, precision (or personalized) medicine focuses on the idea that treatment can be individualized (rather than generalized). While there have been exciting advances in personalized medicine (such as gene testing), the variability of medical imaging is a major limitation in realizing the full potential of precision medicine. Recognizing that medical imaging is a fundamental measurement tool from diagnosis through measurement of treatment response and toxicity assessment, this proposal aims to transition medical imaging practices to quantitative imaging to enable the realization of precision medicine and timely personalized approaches to patient care.

How does standardized imaging data and QI help radiology and support healthcare practitioners?

Radiologists need accurate and reliable data to make informed decisions. Improving standardization and advancing QI metrology will support radiologists by improving data quality. Data quality becomes even more essential where radiologists rely on AI platforms, as the outputs of AI models depend on sound acquisition methods and accurate quantitative datasets.


Standardized data also helps patients by reducing the need for repeat scans, which saves time, money, and unnecessary radiation (for ionizing methods).

Does quantitative imaging improve accessibility to healthcare?

Yes! Using MRI as an example, qMRI can advance and support efforts to make MRI more accessible. Historically, MRI systems cost millions of dollars and are located in high-resource hospital settings. Numerous healthcare and policy organizations are working to create “accessible” MRI systems, including portable systems at lower field strengths and systems designed to address organ-specific diseases. New low-field systems can reach patient populations historically absent from high-resource hospital settings. However, robust and reliable quantitative data are needed to ensure data collected in rural or nonhospital settings, or in low- and middle-income countries, can be objectively compared to data from high-resource hospital settings.


Further, accessibility can be limited by a lack of local expertise. AI could help fill the gap. However, a QI infrastructure is needed for safe and responsible use of AI tools, ensuring adequate quality of the input imaging data.

What is a specific example of the benefits of standardization?

The I-SPY 2 Clinical Breast Trials provide a prime example of the need for rigorous QA and scanner standardization. The I-SPY 2 trial is a novel approach to breast cancer treatment that closely monitors treatment response to neoadjuvant therapy. If there is no immediate/early response, the patient is switched to a different drug. MR imaging is acquired at various points during the treatment to determine the initial tumor size and functional characteristics and then to measure any tumor shrinkage/response over the course of treatment. One quantitative MRI tumor characteristic that has shown promise for evaluation of treatment response and is being evaluated in the trial is the apparent diffusion coefficient (ADC), a measure of tissue water mobility calculated from diffusion-weighted imaging. It is essential for the trial that MR results can be compared over time as well as across sites. To truly know whether a patient is responding, the radiologist must have confidence that any change in the MR reading or measurement is due to a physiological change and not due to a scanner change such as drift, gradient failure, or software upgrade.


For the I-SPY 2 trial, breast MRI phantoms and a standardized imaging protocol are used to test and harmonize scanner performance and evaluate measurement bias over time and across sites. This approach then provides clear data/information on image quality and quantitative measurement (e.g., ADC) for both the trial (comparing data from all sites is possible) as well as for the individual imaging sites.

What are the benefits of a metrological and standards-based framework for medical imaging in the age of AI?

Nonstandardized imaging results in variation that requires orders of magnitude more data to train an algorithm. More importantly, without reliable and standardized datasets, AI algorithms drift, resulting in degradation of both protocols and performance. Creating and supporting a standards-based framework for medical imaging will mitigate these issues and lead to:



  • An integrated and coordinated system for establishing QIBs, screening, and treatment planning.

  • Cost savings: Standardizing data and implementing quantitative imaging results in superior datasets for clinical use or as part of large datasets for AI applications. Clinical Standardization Programs have focused on standardizing tests and have been shown to save “millions in health care costs.”

  • Better health outcomes: Standardization reduces reader error and enables new AI applications to support current radiology practices.

  • Support for radiologists’ diagnoses.

  • Fewer incorrect diagnoses (false positives and false negatives).

  • Elimination of millions of unnecessary invasive biopsies.

  • Fewer repeat scans.

  • Robust and reliable datasets for AI applications (e.g., preventing model collapse).


Such a framework benefits federal organizations such as the National Institutes of Health, Centers for Medicare and Medicaid Services, and Veterans Affairs as well as the private and nonprofit sectors (insurers, hospital systems, pharmaceutical, imaging software, and AI companies). The ultimate beneficiary, however, is the patient, who will receive an objective, reliable quantitative measure of their health—relevant for a point-in-time assessment as well as longitudinal follow-up.

Who is likely to push back on this proposal, and how can that hurdle be overcome?

Possible pushback on such a program may come from: (1) radiologists who are unfamiliar with the power of quantitative imaging for precision health and/or with the benefits of clean datasets for AI applications; or (2) manufacturers (OEMs) who seek to differentiate their output and are focused on customers more interested in qualitative practice.


Radiology practices: Radiology practices’ main objective is to provide the most accurate diagnosis possible in the least amount of time, as cost-effectively as possible. Standardization and calibration are generally perceived as adding time and cost; in practice, however, these perceptions are often unfounded, and it is the variability in imaging that consumes time and creates challenges. The existing standard of care relies on qualitative assessments of medical images.


While qualitative assessment is excellent for understanding a patient’s health at a single point in time (though even then subtle abnormalities can be missed), longitudinal monitoring is impossible without robust metrological standards for reproducibility and quantitative assessment of tissue health. While a move from qualitative to quantitative imaging may require additional education, understanding, and time, such an infrastructure will provide radiologists with improved capabilities and an opportunity to supplement and augment the existing standard of care.


Further, AI is undeniably being incorporated into numerous radiology applications, which will require accurate and reliable datasets. As such, it will be important to work with radiology practices to demonstrate that a move to standardization will, ultimately, reduce time and increase the ability to accurately diagnose patients.


OEMs: Imaging device manufacturers work diligently to improve their outputs. To the extent differentiation is seen as a business advantage, a move toward vendor-neutral and scanner-agnostic metrics may initially be met with resistance. However, all OEMs are investing resources to improve AI applications and patient health. All benefit from input data that is standard and robust and provides enough transparency to ensure FAIR data principles (findability, accessibility, interoperability, and reusability).


OEMs have plenty of areas for differentiation, including improving the patient experience and shortening scan times. We believe OEMs, as part of their move to embrace AI, will find a clear metrology and standards-based framework a positive for their own business and the field as a whole.

What is the first step to get this proposal off the ground? Could there be a pilot project?

The first step is to convene a meeting of leaders in the field within three months to establish priorities and timelines for successful implementation and adoption of a Center of Excellence. Any Center must be well funded, with experienced leadership, and will need support from and collaboration across the relevant agencies and organizations.


There are numerous potential pilots. The key is to identify an actionable study where results could be achieved within a reasonable time. For example, a pilot study to demonstrate the importance of quantitative MRI and sound datasets for AI could be implemented in the Veterans Affairs hospital system. This study could focus on quantifying the benefits of standardizing and implementing quantitative diffusion MRI, an “imaging biopsy” modality, and could mirror advances and knowledge from the existing I-SPY 2 clinical breast trials.

Why have similar efforts failed in the past? How will your proposal avoid those pitfalls?

The timing is right for three reasons: (1) quantitative imaging is doable; (2) AI is upon us; and (3) there is a desire and need to reduce healthcare costs and improve patient outcomes.


There is widespread agreement that QI methodologies have enormous potential benefits, and many government agencies and industry organizations have acknowledged this. Unfortunately, there has been no unifying entity with sufficient resources and professional leadership to coordinate and focus these efforts; many such efforts have been organized and run by volunteers. Finally, some previously funded efforts to support quantitative imaging (e.g., QIN and QIBA) have recently lost dedicated funding.


With rapid advances in technology, including the promise of AI, there is new and shared motivation across communities to revise our approach to data generation and collection at-large—focused on standardization, precision, and transparency. By leveraging the existing widespread support, along with dedicated resources for implementation and enforcement, this proposal will drive the necessary change.

Is there an effort or need for an international component?

Yes. Human health has no geographical boundaries, so a global approach to quantitative imaging would benefit all. QI is being studied, implemented, and adopted globally.


However, as is the case in the U.S., while standards have been proposed, there is no international body to govern the implementation, coordination, and maturation of this process. The initiatives put forth here could provide a roadmap for global collaboration (ever-more important with AI) and standards that would speed up development and implementation both in the U.S. and abroad.

Work-based Learning for All: Aligning K-12 Education and the Workplace for both Students and Teachers

The incoming presidential administration of 2025 should champion a policy position calling for strengthening the connection between K-12 schools and community workplaces. Such connections yield a number of benefits, including modernized curricula, more meaningful lessons, more motivated students, greater college and career readiness, more qualified applicants for local jobs, more vibrant communities, and a stronger nation. The gains associated with education-workplace partnerships are certainly not exclusive to STEM disciplines, but given the high demand for talent in STEM business and industry, the imperative may be greatest in science and mathematics and the applied domains of engineering and technology.

The rationale for a policy priority around K-12 and workplace partnerships centers on waning public confidence in the ability of schools to prepare tomorrow’s workforce. A perceived disconnect between what gets taught and what learners need in order to thrive on the job threatens individual livelihoods, family and community stability, and national competitiveness in an ever-more rapidly evolving global economy. Bridges are needed that unite education and workplaces, putting students and their teachers to work beyond the classroom. A new administration should:

  1. Expand externships for teachers in community workplaces. The best way to help every student to explore and to be inspired about career horizons is to prepare and inspire their teachers to represent to them the opportunities that await. Externships in community workplaces sharpen teachers’ content knowledge and skills and equip them to portray the exciting careers that await students. The existing Research Experiences for Teachers (RET) federal infrastructure can be adapted for supporting externships. 
  2. Deploy Competency-Based Education (CBE) at scale. America’s prevailing school model inhibits the expansion of experiential, or Work-Based Learning (WBL), in workplaces. The school day is a regimented sequence of seat-time tallies toward a seven-period stack of classes, yielding little if any time to immerse learners in relevant experiences at workplaces. Or as one advocacy organization phrased it, “Today’s high school transcript is a record of time and activity, but not a very good measure of knowledge, skills, and dispositions. It doesn’t capture experiences or work products that provide evidence of growth and accomplishment.” An internet search of Work-based Learning nets over 3 billion hits. It’s one of the hottest topics in education. But those hits reveal a weakness in the WBL “movement”: it is almost entirely focused on career and technical education, a branch of general education serving about one-fourth of all students. Going forward, core area teachers and classes must take part. To do so, mathematics, science, and other required and college preparatory courses need flexibility from seat time and content delivery. When teachers, schools, and districts adopt Competency-Based Education, the other 75% of learners gain the time to earn credits by acquiring the knowledge and skills of a subject area while doing, making, and working. Models exist for doing so.

Concerted federal policy promoting the connection between K-12 schools and community workplaces sends a strong, bipartisan message to both education and employer sectors of the nation that the myriad advantages to learners, employers, and communities of cross-sector collaboration will now be the norm, not the exception. Moreover, it requires no new or novel and untested programmatic priorities – they are already at play in forward-thinking communities. Teacher externships dot the American landscape and will fit neatly into a new RET mold (coupling Research Experiences for Teachers with Regional Externships for Teachers as menu options). Competency-Based Education, with guidelines for Work-Based Learning, is already on paper in most U.S. states. Now is prime time to expand these life-changing educational reforms for all young Americans. 

Such expansions would fit neatly into existing federal structures; federal agencies have long supported competency-based education (U.S. Department of Education), Work-based Learning (U.S. Department of Labor), and teacher externships (U.S. Department of Energy and National Science Foundation). The current national landscape of teacher externships, while promising, is fraught with inconsistency and low participation: presently there are thousands of local teacher-externship models of wide variation in duration and rigor operated by school districts, local business organizations, higher education institutions, and regional education groups. Federal research-based guidelines and example-setting are desperately needed to standardize high-quality experiences. Federal guidance and promotion could also help expand those experiences from the present low capacity (estimating 10 teachers per year in 5,000 local programs equates to 50,000 teacher-externs annually, while there are over 3 million K-12 educators nationwide, meaning 60 years to reach all practitioners) to greater volume through more workplace and educator involvement.
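The capacity estimate in that parenthetical can be verified with quick arithmetic. The sketch below simply reproduces the memo's own rough figures (program counts and per-program participation are estimates, not official data):

```python
# Rough capacity check for the current teacher-externship landscape.
# All inputs are the memo's own rough estimates, not official data.
local_programs = 5_000        # estimated local teacher-externship programs
teachers_per_program = 10     # estimated teacher-externs per program per year
k12_educators = 3_000_000     # approximate K-12 educators nationwide

annual_externs = local_programs * teachers_per_program
years_to_reach_all = k12_educators / annual_externs

print(f"{annual_externs:,} teacher-externs per year")        # 50,000 teacher-externs per year
print(f"{years_to_reach_all:.0f} years to reach every educator once")  # 60 years to reach every educator once
```

At that rate, even ignoring educator turnover, cycling the full K-12 workforce through an externship once would take six decades, which is the gap the proposed federal expansion aims to close.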

Similarly, the national portrait for competency-based education leading to work-based learning presents a golden opportunity to usher in educational transformation. At present, many schools and districts implement CBE to limited degrees in specific courses (typically Career and Technology Education, or CTE) for certain students (non-college-bound). The potential for far greater impact across courses and the entire student spectrum awaits federal guidance and support.

Challenge and Opportunity  

Urgency for Action

Thousands of businesses in towns and cities across the United States use science, mathematics and technology to engineer global goods while struggling to find and employ local talent. Thousands of schools across the U.S. teach science, mathematics, engineering and technology yet struggle to inspire their students toward local career opportunities. These two seemingly parallel universes overlap like the acetate pages of an anatomy textbook—muscle over bone—while largely failing to unite for mutual benefit. Iowa, for example, is home to 4,273 global manufacturers depending on 263,870 employees to move product out the door. Pella Window, John Deere, Vermeer, Diamond-Vogel, Collins Aerospace, Winnebago, Tyson and others scramble to fill roughly 15,000 STEM job openings (p. 61) at any given time. The good news is that 75% of the state’s high school graduates profess interest (p. 29) in STEM careers. The bad news is that just 37% of graduates (p. 30) intend to live and work in Iowa. That is, unless they’ve enjoyed a work-based learning experience and/or had a teacher who had spent a summer in industry. The Iowa experience parallels that of many rural and urban regions across the country: students whose teacher externed find more relevance in STEM classes applied to local jobs, and students who enjoy work-based learning are more likely to pursue careers locally after graduation. In combination, these two programs serve up a culture of connectedness between the world of work and the world of education, generating a win-win outcome for educators, employers, families, communities, and most importantly, for students.

Opportunity for Impact

Immersing students and their teachers in workplace experiences is not a new idea. Career and technology education (CTE) has been a driving force for WBL for over 100 years. More recently, federal policy during the Obama administration re-shaped the blueprint for Perkins reauthorization by encouraging models that “better focus on real world experiences” (p. 3). And under the Trump administration the federal STEM education strategic plan called for a new and renewed emphasis on “…education-employer partnerships leading to work-based learning…” (p. 4). The key word here is “new”, and it’s not being emphasized enough: the status quo remains centered on CTE when it comes to teachers and students connecting with the work world, leaving out nearly three-quarters of all students. High school internships, for example, are completed by only about two percent of U.S. students, and CTE programs are completed by approximately 22 percent of white students but 18 percent of Black and 16 percent of Hispanic students. The national standards upon which states and districts base their mathematics and science curricula, including the Common Core and the Next Generation Science Standards, are not much help. They urge applied classroom problem-solving but fail to promote WBL for students or teachers. Today, the vast majority of K-12 student WBL opportunities—internships, apprenticeships, job shadows, collaborative projects, etc., take place through the CTE wing of schools. Likewise, most teacher-externship programs engage CTE educators almost exclusively. 

The potent WBL tools of career-technical education, transposed over to core subject area students and teachers, can invigorate mathematics, science, and computing classes, too.

Impact Opportunity for Externships

As one former extern put it, “If you send one kid on an internship, it affects that one kid. If you send a teacher, the impact reaches their 200 students!” Especially for today’s rapidly growing and economically vital career sectors, including Health Science, Information Technology, Biotech, Manufacturing, Agriculture, Data Analytics, Food, and Natural Resources, teacher externships can fuel the talent pipeline. Iowa has been conducting just such an experiment for a decade, making this type of professional development available to core discipline teachers. Surveyed teacher-externs agreed or strongly agreed that the experience affected the way they taught and their understanding of 21st century [transportable] skills through math and science, and that more students expressed an interest in STEM careers as a result of their having participated in the externship (p. 12). Nearly all participating teachers (93%) described the externship as “more valuable than any other PD in which they had ever taken part” (p. 13).

Specific impacts on teachers included the following: 

Specific impacts on their students include the following: 

Beyond the direct effects upon students and their teachers, externships in local workplaces leave lasting relationships that manifest year after year in tours, projects, mentorships, equipment support, summer jobs, etc. Teacher testimonials speak to the lasting effects. 

Impact Opportunity for CBE and WBL

Although rarely implemented, Competency-Based Education is now allowed in every U.S. state. Broadly defined, CBE is an education model in which students demonstrate mastery of the concepts and skills of a subject to advance and graduate, rather than logging a set number of seat-time hours and passing tests. Students move at an individualized pace, concepts are accrued at variable rates and sequences, teachers operate as facilitators, and the work is more often project-based—much of it occurring outside classroom walls. CBE solves the top inhibitor to Work-Based Learning for non-CTE, core content areas of study including science, mathematics, and computing: it frees up time.

Utah, Washington, and Wyoming are considered leaders in the CBE arena for crafting policy guidelines sufficient for a few schools to pilot the model. In Washington, 28 school districts are collaborating through the Mastery-Based Learning Collaborative (MBLC) to establish at least one CBE school in each district.

Another trailblazer in CBE, North Dakota, was recently recognized by the Education Commission of the States for legislating a series of changes to school rules that remove barriers to CBE and WBL: (a) a competency-based student graduation pathway and allowance for outside work to count for course credit; (b) level state support per student whether credits are earned inside or outside the classroom; and (c) scholarships that honor demonstrated competency equally to the standard credits-and-grades criterion.

Finally, a school that typifies the power of CBE across subject areas, supported by the influential XQ Institute, is a metropolitan magnet model called Iowa BIG in Cedar Rapids. Enrollees choose local projects in partnership with an industry partner. Projects, like real life, are necessarily transdisciplinary. And project outcomes (i.e., mastery) determine grades. Outcomes include:

Yet, for all its impact and promise, Iowa BIG, like many CBE pilots, struggles to broaden its offerings (currently limited to English, social studies, and business credits) and its enrollment (roughly 100 students out of a grade 11-12 regional population more than ten times that size). As discussed in the next section, CBE programs can be significantly constrained by local, state, and federal policies (or the lack thereof).

Challenges Limiting Impact

The limited exposure of American K-12 students to teachers who have enjoyed an externship, or to Competency-Based Education leading to Work-Based Learning, testifies to the multiple layers of challenge to be navigated. At the local district level, school schedules and the lack of communication across school–business boundaries are chief inhibitors to WBL, while educator professional development and crediting/graduation rules suppress CBE. At the state level, the inhibitors are systemic: funding of and priority needs for educator professional development, the lack of a coherent and unifying profile of a graduate, standardized assessments, and graduation requirements all retard forward movement on experiential partnerships. Logically, federal challenges have enormous influence on state and local conditions: the paucity of research and development on innovative instructional and assessment practices, inadequate communication of existing resources to drive WBL and other national education imperatives, insufficient support for the establishment of state and regional intermediary structures to drive local innovation, and non-complementary funding programs that, if coordinated, could significantly advance K-12–workplace alignment.

The pace of progress at the local school level is ultimately most strongly influenced by federal policy priority. The policy is well established in the federal STEM education strategic plan Charting a Course for Success: America’s Strategy for STEM Education, a report by the Committee on STEM Education of the National Science and Technology Council, under Pathway 1: Develop and Enrich Strategic Partnerships (p. 9). The plan was developed through, and embraced for, its bipartisan approach. Refocusing on its fulfillment will make the United States a stronger and more prosperous nation.

Plan of Action

The federal government’s leadership is paramount in driving policy toward education-workplace alignment. Its roles range from investment to asset allocation to communication, applying to both teacher externships and CBE leading to WBL.

(1) Congress should legislate that all federal agencies involved in STEM education outreach (those represented on the Committee on STEM Education [Co-STEM] and on the Subcommittee on Federal Coordination in STEM Education [FC-STEM]) establish teacher-externship programs at their facilities as capacity and security permit. The FC-STEM should designate an Inter-agency Working Group on Teacher-Externships [IWG-TE] charged with developing a standard protocol consistent with evidence-based practice (e.g., minimum four-week, maximum eight-week summer immersion; authentic work experience applying the knowledge and skills of the teaching discipline; close mentorship and supervision; production of a translational teaching product such as a lesson, unit, or career exploratory component; compensation commensurate with qualifications; awareness and promotion activities; etc.). The IWG-TE will provide an annual report of externship activity across agencies to the FC-STEM and Co-STEM.

(2) Within two years of enactment, all agencies participating in teacher externships shall develop and implement an expansion of the externships model to localities nationwide through a grant program by which eligible LEAs, AEAs, and SEAs may compete for funding to administer local teacher-externship programs in partnership with local employers (industry, nonprofit, public sector, etc.) pertinent to the mission and scope of the respective agency. For example, EPA may fund externs in state natural resource offices, and NASA may fund externs in aerospace industry facilities. The IWG-TE will include progress and participation in the grant program as part of their annual report.

(3) The IWG-TE shall design and administer an assessment instrument for components (1) and (2) that details participation rates by agency, demographics of participants, impact on participants’ teaching, and evidence of impact on the students of participants related to interest in and capability for high-demand career pursuit. An external expert in teacher-externships administration may be contracted for guidance in the establishment of the externships program and its assessment. 

As to funding, the agencies charged with implementation are those already conducting outreach, so it could be that initially no new dollars accompany the mandate. For the second component (grants), however, new funding would be needed: a budget line request in 2027 would seek $10 million, to be distributed proportionally to agencies based on numbers of externs (determined by the Office of Science and Technology Policy in close consultation with FC-STEM), such that a goal of 1,500 total externs is supported nationwide at an estimated cost of $6,000 each, plus administrative costs. In summary:

Teacher Externships

Competency-based Education leading to Work-Based Learning

Recommendations supporting both innovations

Conclusion 

Teachers prepared to connect what happens between 8:00 am and 3:00 pm to real life beyond school walls reflect the future of education. Learners whose classrooms expand to workplaces hold our best hopes as tomorrow’s innovators. Studying forces and vectors at the amusement park makes physics come alive. Embryo care at the local hatchery enlivens biology lessons. Pricing insurance against actuarial tables adds up in algebra. Crime lab forensics gives chemistry a courtroom. Designing video games that use AI to up the action puts a byte in computer study. And all such experiences fuel passions and ignite dreams for STEM study and careers. Let America put learners and their teachers to work beyond classrooms to bridge the chasm between classrooms and careers. This federal policy priority will be a win-win for learners, their families and communities, employers, and the nation.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Pursuing A Missile Pre-Launch Notification Agreement with China as a Risk Reduction Measure

While attempts at dialogue and military-to-military communication with China regarding its growing nuclear arsenal have increased, the United States has so far been unable to establish permanent lines of communication on nuclear weapons issues, let alone reach a substantive bilateral arms control agreement with China. Given the simmering tensions between the United States and China, lack of communication can be dangerous. Miscommunication or miscalculation between the two nuclear powers – especially during a crisis – could lead to escalation and increased risk of nuclear weapons use. 

In an effort to prevent this, the next U.S. presidential administration should pursue a Missile Pre-Launch Notification Agreement with China. The agreement should include a commitment by each party to notify the other ahead of all strategic ballistic missile launches. Similar agreements currently exist between the United States and Russia and between China and Russia. One between the United States and China would be a significant confidence-building measure for reducing the risk of nuclear weapons use and establishing a foundation for future arms control negotiations.

Challenge and Opportunity

Between states with fragile relations, missile launches may be seen as provocative. In the absence of proper communication, a surprise missile test launch in the heat of a tense crisis could trigger overreaction and escalate tensions. Early warning systems are designed to detect incoming missiles, but experts estimate that the US early warning system would have just two minutes to determine whether an attack is real or serious enough to advise the president on a possible nuclear counterattack. For example, when the Soviet Union test-launched four submarine-launched ballistic missiles (SLBMs) in 1980, the US early warning system projected that one of the missiles appeared to be headed toward the United States, resulting in an emergency threat assessment conference of US officials. 

Establishing regular communications is increasingly important as China grows its nuclear arsenal of quick-launching ballistic missiles, with the Pentagon estimating that China’s arsenal may reach 1,000 warheads by 2030. This is creating increasing concern about China’s intentions for how it might use nuclear weapons. In reaction, some US officials are signaling that it may be necessary for the United States to field new nuclear weapons systems or increase the number of deployed warheads. Defense hawks even advocate curtailing diplomatic communication with China, arguing that talks would allow China leverage and insight into US nuclear thinking.

With tensions and aggressive rhetoric on the rise, the next administration needs to prioritize and reaffirm the necessity of regular communication with China on military and nuclear weapons issues to reduce the risk of misunderstandings and conflict and mitigate the chance of accidental escalation and miscalculation.

The opportunity for negotiating an agreement with China exists despite heightened tensions. Although still inadequate, military-to-military communications between China and the United States have improved since a breakdown in 2022 following Speaker Nancy Pelosi’s visit to Taiwan, to which China responded with military exercises, missile tests, and sanctions on the island.

On November 6, 2023, Chinese Director-General of the Department of Arms Control Sun Xiaobo and US Assistant Secretary of State for Arms Control, Deterrence, and Stability Mallory Stewart discussed nonproliferation and nuclear transparency during the first US-China arms control talk in five years. Days later, Presidents Biden and Xi decided to resume military-to-military conversations and encouraged a follow-up arms control talk. A high-level China-US defense policy talk at the Pentagon in early January 2024 followed this summit. Most recently, Presidents Biden and Xi agreed in Lima, Peru that humans, not artificial intelligence, should have control over the decision to launch nuclear weapons. These meetings show promising signs of improved dialogue, but the United States’ continual emphasis on China as a competitor and China’s recent cancelation of arms control talks with the United States over Taiwan continue to undermine progress.

Policy Models

A Missile Pre-Launch Notification Agreement between China and the United States should include a commitment to provide at least 24 hours of advance notice of all strategic ballistic missile tests, including the planned launch and impact locations. The agreement would build on historical models of risk reduction measures between other states. For example, at the 1988 Moscow Summit, the United States and the Soviet Union signed the Agreement on Notifications of Launches of Ballistic Missiles to notify each other of the planned date, launch area, and area of impact no less than 24 hours in advance of any intercontinental ballistic missile (ICBM) or submarine-launched ballistic missile (SLBM) launch. These notifications were communicated through established Nuclear Risk Reduction Centers. The Strategic Arms Reduction Treaty (START), signed in 1991, followed up on the notification agreement by requiring more information, such as telemetry broadcast frequencies, in addition to the planned launch date and the launch and reentry areas. 

The two countries expanded on this agreement through the Memorandum of Agreement on the Establishment of a Joint Center for the Exchange of Data from Early Warning Systems and Notifications of Missile Launches (also known as JDEC MOA) and the Memorandum of Understanding on Notifications of Missile Launches (PLNS MOU). The purpose of these agreements, signed in 2000, was to prevent a nuclear attack based on a false early warning system notification, and the agreements were carried forward into the New START treaty that entered into force in 2011.

While Russia has suspended its participation in the New START treaty and increased its threatening rhetoric around the potential use of nuclear weapons in its war in Ukraine, the Russian Foreign Ministry said that Russia would continue to provide notification of ballistic missile launches to the United States. This demonstrates the value of communication amid tensions and conventional conflict to prevent misunderstanding. 

In 2009, Russia and China signed a pre-launch notification agreement, marking China’s first bilateral arms control agreement. This agreement was extended in 2020 for another 10 years and covers launches of ballistic missiles with ranges over 2,000 km that are in the direction of the other country. The United States and China have no such arrangement. However, China did notify the United States, Australia, New Zealand, and the Japanese Coast Guard 24 hours before an ICBM launch into the Pacific Ocean on September 25, 2024. This launch appeared to be China’s first test into the Pacific in more than four decades, and the gesture of notifying the United States beforehand was, according to a Pentagon spokesperson, “a step in the right direction to reducing the risks of misperception and miscalculation.” With this notification, the groundwork and precedent for dialogue on a missile pre-launch notification agreement have been laid.

Plan of Action

Create and present a draft agreement

The next administration should direct the State Department Bureau of Arms Control, Deterrence, and Stability to draft a proposal for a missile pre-launch notification agreement requiring mutual pre-launch notifications for missile launches with ranges of 2,000 km or more, as well as the sharing of launch and impact locations.

The US Assistant Secretary of State for Arms Control, Deterrence, and Stability should present the draft proposal to China’s Director-General of the Department of Arms Control of the Foreign Ministry.

Invite President Xi Jinping to participate in talks

The administration should propose a neutral site in the Asian-Pacific region, possibly in Hanoi, Vietnam, for a meeting between the US president and President Xi Jinping to emphasize the shared goal of trade security and discuss a missile test launch agreement. The meeting should include other high-level military commanders, including the Chairman of the Joint Chiefs of Staff and Secretary of Defense, as well as their relevant Chinese counterparts. 

Continue notifying China of all US missile test launches

The next administration should follow the precedent set by China in September 2024 by voluntarily providing advance notification of all ballistic missile test launches, as was done ahead of the November 2024 Minuteman launch, even in the absence of a negotiated agreement and even if the practice remains unilateral going forward. Such action would improve the prospects for reaching a negotiated agreement by demonstrating good faith and a commitment to conflict mitigation.  

Raise the topic of missile launch notifications in P5 meetings

China currently holds the rotating Chair of the P5, which could be a useful forum for considering new proposals for risk reduction measures among all nuclear states. After direct engagement with China on an agreement, China may have an interest in working with the United States to lead a multilateral agreement, as China would gain more control over the language, international recognition for nuclear risk reduction, and improved security amid global nuclear modernization.

The next administration should direct the Special Representative of the President for Nuclear Nonproliferation, under the Bureau of International Security and Nonproliferation, to raise the topic of missile launch notifications and a potential launch notification agreement during the P5 process meeting ahead of the 2025 Nonproliferation Treaty (NPT) preparatory conference.

In order to work constructively with China on reducing the risk of nuclear use, a pre-launch notification agreement should, for now, be decoupled from any other arms control measures that would propose limiting China’s nuclear weapons stockpile or any launch capabilities. While comprehensive arms control may be an ultimate goal, linking the two at the outset would complicate talks significantly and likely prevent an agreement from coming to fruition; the United States should start with small steps to foster trust between the two nations and deepen regular military-to-military communication. 

Pursuing and negotiating a Missile Pre-Launch Notification Agreement with China will emphasize common objectives and help prevent escalation by miscommunication.


Unlocking The Future Of Work by Updating Federal Job Classifications

The Standard Occupational Classification (SOC) system contains critical statistical information about occupations, employment levels, trends, pay and benefits, demographic characteristics, and more. The system allows users – including leaders at federal agencies – to collect, analyze, and disseminate data on employment trends, wages, and workforce demographics, and it enables consistent analysis of the labor market. However, the rapid evolution of the job market, particularly in the tech sector, is outpacing updates to the SOC system. This misalignment poses challenges for economic measurement and development. The Office of Personnel Management (OPM) and the White House Office of Management and Budget (OMB) should lead a comprehensive effort to update SOC codes through research, collaboration with industry experts, pilot programs, and regulatory adjustments. By acting now, the Administration can create clear career pathways for workers and better equip federal agencies with critical workforce insights to optimize national investments.

Challenge and Opportunity

Outdated SOC classifications hinder efficient workforce planning, as traditional classifications do not reflect emerging tech roles and the energy innovation sector. Accurate SOC codes are necessary to enhance job growth analysis and create an efficient hiring pipeline that meets the demands of a fast-evolving job market. OMB is currently updating the SOC manual and aims to complete the update by 2028. This is an opportunity to modernize classifications and include new roles that drive economic growth and support workforce development. Newer and emerging roles such as Renewable Energy Technicians, Large Language Model Engineers, Blockchain Developers, and Sustainability Engineers are either absent or not sufficiently detailed within the current SOC system. These emerging positions involve specialized skills like developing AI algorithms, creating decentralized applications, or designing immersive virtual environments, which go beyond the scope of traditional software development or IT security. 

Clear job classifications will allow for the efficient tracking of new, in-demand roles in emerging tech sectors, aligning with recent large federal investments, such as the CHIPS Act and the Infrastructure Investment and Jobs Act (IIJA), which aim to strengthen American industries. Updates to the SOC system will boost local economies by helping communities develop effective workforce training programs tailored to new job trends. They will provide clarity on required skills and competencies, making it easier for employers to develop accurate job descriptions and hire efficiently. Updates will give workers access to clear job descriptions and career pathways, allowing them to pursue opportunities and training in emerging fields like renewable energy and AI. SOC updates will ensure that national workforce strategies are data-driven and aligned with economic and industrial goals, and that policymakers and researchers have accurate measurements of economic impacts and employment trends. 

Plan of Action

To modernize the SOC system and better reflect emerging tech roles, a dual-track plan involving comprehensive research, collaboration with key stakeholders, pilot programs, interagency awareness efforts, and regulatory updates is needed. The Bureau of Labor Statistics (BLS), specifically the SOC policy committee, should lead this work in partnership with the Office of Personnel Management (OPM) and the Office of Management and Budget (OMB). Key partners will include the Department of Energy (DOE), the Department of Labor (DOL), industry experts, academic institutions, and nonprofit organizations focused on workforce development.

Recommendation 1. Update the SOC System. 

The BLS, along with OPM and OMB, should begin a comprehensive update process, with a focus on defining new roles in the market. They should collaborate with industry experts, run pilot programs with federal and state agencies, and partner with academic institutions on research to ensure classifications accurately reflect the responsibilities and qualifications of modern roles. 

Recommendation 2. Reinstate Green Job Programs/Develop Frameworks.

OPM and OMB should work to immediately establish classifications for emerging tech occupations. They should establish guidelines that facilitate the inclusion of emerging job categories in federal and state employment databases. Concurrently, they should advocate for the reinstatement and sustainable funding of job programs impacted by sequestration. These actions align with broader federal priorities on technological innovation and will require ongoing collaboration with Congress for budget approval. For example, before the work was stopped, BLS had $8 million per year for its initiative measuring “green collar” jobs. 

Recommendation 3. Pilot Programs and Interagency Awareness Efforts.

To validate the proposed changes, the BLS can implement pilot programs in collaboration with the broader DOL and selected state workforce agencies. These pilots will test the practical application of updated SOC codes, gather data on their effectiveness, and increase awareness of the SOC system’s role. The total estimated budget for implementing these actions is comparable to that of a rulemaking process, which can vary from $500,000 to upwards of $10 million over two years. The costs of the updates could be offset by reallocating unspent funds from a previous year’s budget allocation for workforce training and readiness programs or as part of an appropriation from Congress that restores program measurement funding. 

Conclusion

Modernizing the SOC system to reflect new and emerging occupations is essential for efficient workforce planning, economic growth, and national policy implementation. This update will provide local communities, employers, workers, and federal agencies with accurate data, ensuring efficient use of federal resources and alignment with the Administration’s economic priorities. By prioritizing these updates, the Administration can enhance job tracking, workforce strategies, and data accuracy, supporting investments that drive economic competitiveness.


Frequently Asked Questions
How will updating SOC codes help American workers compete globally?

Modernized SOC codes will ensure that American workers are trained and prepared for cutting-edge roles in technology and green sectors, helping the U.S. maintain its competitive edge in the global economy.

Why update SOCs for emerging roles when federal hiring doesn’t mandate SOC use?

While SOC codes are not required for federal hiring, they play a crucial role in tracking labor trends, planning workforce programs, and informing grant requirements. Accurate job data from updated SOCs will enhance federal and private sector collaboration, helping to shape initiatives that drive economic growth and efficiency.

What steps will be taken to ensure the updated SOC system supports sustainable job creation?

The proposed updates include advocating for the reinstatement and sustainable funding of job programs impacted by sequestration. Additionally, the updates will encourage the development of certification and training programs aligned with the new SOC classifications, supporting workforce readiness and career advancement in emerging sectors. These steps will contribute to sustainable job creation and economic growth.

Polar Infrastructure and Science for National Security: A Federal Agenda to Promote Glacier Resilience and Strengthen American Competitiveness

Polar regions – both the Arctic and Antarctic – are an important venue for strategic competition and loom as emerging national security challenges. As recognized during the first Trump Administration, ceding U.S. leadership at the poles threatens our future and emboldens our adversaries. The recent actions that the People’s Republic of China (PRC) and Russia have taken in the Arctic undermine regional stability as both nations aim to take economic advantage of newly available resources, such as oil, invest in research with dual military-civil applications, and take on an increasingly dominant role in regional governance. 

The Antarctic is the next security frontier. U.S. leadership in the Antarctic is eroding as U.S. investments dwindle and nations, including the PRC, establish new outposts and operations there. Simultaneously, polar change threatens to upend U.S. coastal communities and global security as ice-melt and glacier collapse could lead to catastrophic sea level rise, fueling extreme property loss, conflict, and mass migration. Glacier resilience, defined as the capacity of glacier systems to withstand and adapt to climate-driven stressors while maintaining their critical functions, is essential to mitigating these risks. Despite a longstanding treaty, the United States and our strategic partners have woefully underinvested in the development of tools, technologies, models, and monitoring infrastructure to inform glacial management, enable solutions that mitigate risks, and shape U.S. security and foreign policy. 

Building on the prior Trump Administration’s plans for additional polar security icebreakers to protect national interests in the Arctic and Antarctic regions, Congress and the incoming Trump Administration should work together to reinforce the U.S. position in the regions, recognizing the role Antarctica in particular may have in a changing global order and its significance for sea-level rise.

We propose a Polar/Antarctic strategy for the incoming Trump Administration to enhance U.S. national security, promote American leadership, deter our adversaries, and prevent disastrous ice sheet collapse. This strategy involves research and development of engineering methods to slow the loss of glaciers and rates of sea-level rise by reducing the forces that drive glacier change and the sensitivity of glaciers to those forces. Consistent with and reinforcing the Antarctic Treaty System, this plan would focus investment across four areas: world-leading infrastructure for glacial monitoring and resilience research; innovation in glacial resilience technologies; research on glacial dynamics and monitoring; and policies that preserve U.S. national security interests.

Challenge and Opportunity 

The threat of sea-level rise is often seen as manageable, with increases of centimeters or inches. However, projections indicate that the collapse of the Thwaites Glacier and West Antarctic Ice Sheet could result in “doomsday scenarios” characterized by sea-level rise of as much as 10 feet worldwide. The probabilities of these occurrences have increased recently. If these possibilities became reality, rising seas would inundate major U.S. coastal regions and cities that are home to 12 million people and trillions of dollars of property and infrastructure. Globally, hundreds of millions of people would be at risk, fueling mass migration, refugee crises, and security challenges that threaten U.S. interests. Protecting Thwaites and the Antarctic Ice Sheet from collapse is crucial for a manageable future, making glacial resilience essential in any domestic or international security strategy.  

There are many ideas about how to slow glacial collapse and protect the ice to hold back sea level rise; however, this research and technology development receives almost no federal funding. We must take this threat seriously and dramatically ramp up our infrastructure at the poles to monitor glaciers and demonstrate new technologies to protect the West Antarctic Ice Sheet.

While the current Antarctic treaty prohibits military activity in the region, it allows scientific research and other activities that could have military applications. At the same time, U.S. polar research infrastructure and funding are woefully insufficient to support the necessary innovation and operations required to address the sea-level rise challenge and maintain American leadership. Federal science funding agencies, including the National Science Foundation (NSF), National Oceanic and Atmospheric Administration (NOAA), and National Aeronautics and Space Administration (NASA), play a critical role in supporting research in the Antarctic and on glacial processes. While these efforts have yielded some tools and understanding of glacial dynamics, there is no comprehensive, sustained approach to learn about and monitor changes to the ice sheets over time or to develop and test new strategies for glacial resilience. As a result, U.S. scientific infrastructure in the Antarctic has been largely neglected. The data produced by prior funded Antarctic studies have been insufficient to build an authoritative projection model of sea-level rise, a necessity for informing Antarctic management and the adaptation measures required by decision makers, coastal communities, and other stakeholders. 

A glacial resilience initiative that leverages space-based commercial and governmental satellite systems, long-duration unmanned aerial radar capabilities, and other observational capabilities would revitalize American leadership in polar regions at a critical time, as the PRC and other adversaries increase their polar presence – particularly in the Antarctic.

Plan of Action 

To strengthen glacial resilience and U.S. polar security, the next Administration should launch a comprehensive initiative to build critical world-leading infrastructure, promote innovation in glacial resilience technologies, enhance research on glacial dynamics and monitoring, and pursue policies that preserve U.S. national security interests. The recommendations below address each of these areas.

Develop and maintain world-leading critical infrastructure for glacial monitoring and resilience research and innovation.

NSF and the Air Force currently maintain operations for the U.S. in the Antarctic, but these facilities are in such a deplorable state that NSF has recently canceled all new field research and indefinitely delayed high-priority experiments slated to be built at the South Pole. As the primary physical presence for the U.S. government, this infrastructure must be upgraded so that NSF can support scaled research and monitoring efforts. 

Expand glacial monitoring capabilities, utilizing space, air, and on-ice methods through NASA, NOAA, DOD, and NSF.

This effort should maximally leverage existing commercial and governmental space-based assets and deploy other air-based, long-duration unmanned aerial capabilities. The next administration should also create national glacier models to provide detailed and timely information about glacier dynamics and sea-level rise to inform coastal planning and glacial resilience field efforts.

Pilot development and demonstration of glacier resilience technologies.

There is currently extremely limited investment in technology development to enhance glacier resilience. Agencies such as NSF and the Defense Advanced Research Projects Agency (DARPA) should support innovation and grand challenges to spur development of new ideas and technologies. The PRC is already investing in this kind of research, and the United States and our strategic partners are far behind in ensuring that we are the ones to develop the technology and set the standards for its use. 

Support a robust research program to improve understanding of glacier dynamics.

To address critical gaps and develop a coordinated, sustained approach to glacier research, the U.S. must invest in basic science to better understand ice sheet dynamics and destabilization. Investments should include field research as well as artificial intelligence (AI), modeling, and forecasting capabilities through NSF, NASA, DOD, and NOAA. These efforts rely on the infrastructure discussed above and will be used to better develop future infrastructure, creating a cycle of innovation that supports the U.S. operational presence and leadership and giving us a comparative advantage over our adversaries.  

Protect national security interests and maintain American leadership by promoting glacial resilience in international contexts.

There is a major void in international polar discussions about the importance of glacial resilience and extremely limited attention to developing technologies that would prevent ice sheet collapse and catastrophic sea level rise. The next administration should play a leadership role in advancing global investment, ensuring that our allies contribute to this effort and that the U.S. does not bear its costs alone. International research collaboration with our strategic allies will prevent the PRC and other competitors from expanding their influence and from surpassing the United States as the leader in Antarctic and polar research and innovation.

Support a new legislative package focused on advancing critical Antarctic research.

The Arctic Research and Policy Act of 1982 provides “for a comprehensive national policy dealing with national research needs and objectives in the Arctic.” Modeled on that act, a new legislative package could establish a comparable national policy for Antarctic research.

This legislation would elevate Antarctic research as a crucial part of a national security strategy and ensure the United States is prepared to confront the risks and consequences of Antarctic ice sheet collapse.

Conclusion 

The U.S. faces an important moment to address polar challenges that threaten both national security and global stability. As adversaries like the PRC and Russia expand their presence and influence in the Arctic and Antarctic, the U.S. must reclaim leadership. Glacial resilience is a strategic imperative, given the catastrophic risks of sea-level rise and its impacts on coastal communities, migration, and security. By prioritizing investment in polar infrastructure, advancing cutting-edge technologies to mitigate glacial collapse, and strengthening international collaboration, the U.S. can lead a global effort to safeguard polar regions. A robust, coordinated strategy will bolster American interests, deter adversaries, and build resilience against one of the most pressing challenges we face today.


Frequently Asked Questions
How much will this proposal cost? Why is it worth the investment?

We estimate a budget of $100 million annually for the full approach, including investments in observational technologies, modeling efforts, and infrastructure improvements. This number includes funding for critical satellite programs, field research campaigns, and enhanced data modeling. The investment supports national security by addressing one of the most pressing threats to U.S. stability – sea-level rise. Accelerating glacier melt and the resulting sea-level rise could displace millions of people, destabilize coastal economies, and threaten critical infrastructure, including military bases and ports. This work enhances our nation’s ability to forecast and prepare for these threats, as well as our ability to mitigate glacial melt in ways that safeguard lives, property, and national interests.

What justifies forecasting and mitigating the risk of catastrophic sea-level rise vs. other possible options?

This course of action prioritizes early investment in observational technology, predictive modeling, and infrastructure development because these elements form the foundation of any meaningful response to the threat of catastrophic sea-level rise. The policy aligns with national security priorities by focusing on capabilities that enable accurate forecasting and risk assessment. Delaying implementation risks missing critical warning signs of glacial destabilization and undermines the nation’s preparedness. The recommended approach emphasizes proactive investment, which is far less expensive than responding to catastrophic sea-level rise.

How does this proposal enhance U.S. national security?

This proposal addresses the risks posed by catastrophic sea-level rise, which threaten critical infrastructure, economic stability, and the global geopolitical order. Specifically:

  • Many U.S. military installations, including naval bases and strategic ports, are located in coastal areas or on low-lying islands vulnerable to sea-level rise. Improved forecasting will allow DOD to proactively adapt to sea-level rise.

  • Sudden and severe sea-level rise could force millions of people to migrate, creating humanitarian crises and destabilizing regions critical to U.S. interests. Early warning and mitigation strategies could reduce the likelihood of mass displacement and conflict.

  • The Arctic and Antarctic are becoming areas of increased geopolitical competition. This proposal is an opportunity for the U.S. to strengthen global influence while maintaining strategic advantages in these regions.

Why focus on glaciers specifically, rather than other climate-related risks?

Glaciers, particularly the Thwaites Glacier in West Antarctica, represent one of the most immediate and uncontrollable contributors to sea-level rise. Destabilized marine ice sheets are capable of causing rapid sea-level rise, threatening millions of coastal residents and vital infrastructure. Unlike other areas of climate science, the dynamics of glacial flow and melt are poorly understood and underfunded. With targeted investments, we can significantly improve our ability to monitor, model, and mitigate glacial contributions to sea-level rise.

What lessons can we learn from past initiatives addressing climate threats?

  • Initiatives like hurricane forecasting and flood mitigation have demonstrated that early investments in forecasting technologies save billions in recovery costs and reduce loss of life.

  • Programs such as NASA’s Earth Observing System and NOAA’s disaster resilience initiatives show that partnerships between federal agencies, academia, and the private sector drive innovation and amplify impact.

  • Delays in addressing risks like wildfires and droughts have highlighted the high cost of inaction, underscoring the need to move quickly and decisively in tackling sea-level rise threats.

Micro-ARPAs: Enhancing Scientific Innovation Through Small Grant Programs

The National Science Foundation (NSF) has long supported innovative scientific research through grant programs. Among these, the EAGER (Early-concept Grants for Exploratory Research) and RAPID (Rapid Response Research) grants are crucial in fostering early-stage questions and ideas. This memo proposes expanding and improving these programs by addressing their current limitations and leveraging the successful aspects of their predecessor program, the Small Grants for Exploratory Research (SGER) program, and other innovative funding models like the Defense Advanced Research Projects Agency (DARPA).

Current Challenges and Opportunities

The landscape of scientific funding has always been a balancing act between supporting established research and nurturing new ideas. Over the years, the NSF has played a pivotal role in maintaining this balance through various grant programs. One way they support new ideas is through small, fast grants. The SGER program, active from 1990 to 2006, provided nearly 5,000 grants, with an average size of about $54,000. This program laid the groundwork for the current EAGER and RAPID grants, which took SGER’s place and were designed to support exploratory and urgent research, respectively. Using the historical data, researchers analyzed the effectiveness of the SGER program and found it wildly effective, with “transformative research results tied to more than 10% of projects.” The paper also found that the program was underutilized by NSF program officers, leaving open questions about how such an effective and relatively inexpensive mechanism was being overlooked.
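The scale of these figures is easy to check. The following back-of-envelope calculation, using only the numbers cited above, sketches what SGER cost in total and what a greater-than-10% transformative rate implies:

```python
# Back-of-envelope check of the SGER figures cited above.
# All inputs come from the text; the rest is simple arithmetic.
sger_grants = 5_000          # grants awarded, 1990-2006
avg_award = 54_000           # average award size, USD
transformative_rate = 0.10   # ">10% of projects" per the cited analysis

total_spend = sger_grants * avg_award
transformative_projects = sger_grants * transformative_rate

print(f"Total SGER spend over 16 years: ${total_spend / 1e6:.0f}M")
print(f"Implied transformative projects: >{transformative_projects:.0f}")
```

In other words, roughly $270 million spread over 16 years appears to have seeded more than 500 transformative projects — a remarkable return for a program that was nonetheless underused.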

Did the NSF learn anything from the paper? Probably not enough, according to the data.

In 2013, the year the SGER paper was published, roughly 2% of total NSF grant funding went towards EAGER and RAPID grants (which translated to more than 4% of the total NSF-funded projects that year). Except for a spike in RAPID grants in 2020 in response to the COVID-19 pandemic, there has been a steady decline in the volume, amount, and percentage of EAGER and RAPID grants over the ensuing decade. Over the past few years, EAGER and RAPID have barely exceeded 1% of the award budget. Despite the proven effectiveness of these funding mechanisms and their relative affordability, the rate of small, fast grantmaking has stagnated over the past decade.

There is a pressing need to support more high-risk, high-reward research through more flexible and efficient funding mechanisms. Increasing the small, fast grant capacity of the national research programs is an obvious place to improve, given the results of the SGER study and the fact that small grants are easier on the budget.

The current EAGER and RAPID grant programs, while effective, face administrative and cultural challenges that limit their scalability and impact. The reasons for their underuse remain poorly understood, but anecdotal insights from NSF program officers offer clues. The most plausible explanation is also the simplest: It’s difficult to prioritize small grants while juggling larger ones that carry higher stakes and greater visibility. While deeper, formal studies could further pinpoint the barriers, the lack of such research should not hinder the pursuit of bold, alternative strategies—especially when small grant programs offer a rare blend of impact and affordability.

Drawing inspiration from the ARPA model, which empowers program managers with funding discretion and contracting authority, there is an opportunity to revolutionize how small grants are administered. The ARPA approach, characterized by high degrees of autonomy and focus on high-risk, high-reward projects, has already inspired successful initiatives beyond its initial form in the Department of Defense (DARPA), like ARPA-E for energy and ARPA-H for health. A similar “Micro-ARPA” approach — in which dedicated, empowered personnel manage these funds — could be transformative for ensuring that small grant programs within NSF reach their full potential. 

Plan of Action

To enhance the volume, impact, and efficiency of small, fast grant programs, we propose the following:

  1. Establish a Micro-ARPA program with dedicated funding for small, flexible grants: The NSF should allocate 50% of the typical yearly funding for EAGER/RAPID grants — roughly $50–100 million per year — to a separate dedicated fund. This fund would use the existing EAGER/RAPID mechanisms for disbursing awards but be implemented through a programmatically distinct Micro-ARPA model that empowers dedicated project managers with more discretion and reduces the inherent tension between use of these streamlined mechanisms and traditional applications.
    1. By allocating approximately 50% of the current spend to this fund and using the existing EAGER/RAPID mechanisms within it, this fund would be unlikely to pull resources from other programs. It would instead set a floor for the use of these flexible frameworks while continuing to allow for their use in the traditional program-level manner when desired.
  2. Establish a Micro-ARPA program manager (PM) role: As compared to the current model, in which the allocation of EAGER/RAPID grants is a small subset of broader NSF program director responsibilities, Micro-ARPA PMs (who could be lovingly nicknamed “Micro-Managers”) should be hired or assigned within each directorate to manage the dedicated Micro-ARPA budgets. Allocating these small, fast grants should be their only job in the directorate, though it can and should be a part-time position per the needs of the directorate.
    1. Given the diversity of awards and domains that this officer may consider, they should be empowered to seek the advice of program-specific staff within their directorate as well as external reviewers when they see fit, but should not be required to make funding decisions in alignment with programmatic feedback. 
    2. Applications to the Micro-ARPA PM role should be competitive and open to scientists and researchers at all career levels. Based on our experience managing these programs at the Experiment Foundation, there is every reason to suspect that early-career researchers, community-based researchers, or other innovators from nontraditional backgrounds could be as good as, or better than, experienced program officers. Given the relatively low cost of the program, the NSF should open this role to a wide variety of participants to learn and study the outcomes.
  3. Evaluate: The agency should work with academic partners to design and implement clear metrics—similar to those used in the paper that evaluated the SGER program—to assess the programs’ decision-making and impacts. Findings should be regularly compiled and circulated to PMs to facilitate rapid learning and improvement. Based on evaluation of this program, and comparison to the existing approach to allocating EAGER/RAPID grants, relative funding quantities between the two can be reallocated to maximize scientific and social impact. 
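To get a feel for the scale of the proposed fund, the sketch below converts the $50–100 million annual figure from the plan above into rough award counts. The assumed average award size ($150,000) and the directorate count (8) are illustrative assumptions, not figures from this memo or NSF policy:

```python
# Rough sizing of the proposed Micro-ARPA fund.
# The $50-100M/year range comes from the memo; the award size and
# directorate count below are illustrative assumptions only.
annual_fund_low, annual_fund_high = 50e6, 100e6  # from the memo
assumed_avg_award = 150_000   # assumption: between SGER's ~$54k average
                              # and larger EAGER/RAPID-scale awards
assumed_directorates = 8      # assumption: roughly NSF's directorate count

awards_low = annual_fund_low / assumed_avg_award
awards_high = annual_fund_high / assumed_avg_award

print(f"Awards per year: ~{awards_low:.0f}-{awards_high:.0f}")
print(f"Awards per directorate PM: ~{awards_low / assumed_directorates:.0f}"
      f"-{awards_high / assumed_directorates:.0f}")
```

Under these assumptions, each Micro-ARPA PM would handle on the order of a few dozen awards per year — a workload consistent with the part-time, dedicated role described above.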

Benefits

The proposed enhancements to the small grant programs will yield several key benefits:

  1. Increased innovation: By funding more early-stage, high-risk projects, we can accelerate scientific breakthroughs and technological advancements, addressing global challenges more effectively.
  2. Support for early-career scientists: Expanded grant opportunities will empower more early-career researchers to pursue innovative ideas, fostering a new generation of scientific leaders.
  3. Experience opportunity for program managers: Running Micro-ARPAs will provide an opportunity for new and emerging program manager talent to train and develop their skills with relatively smaller amounts of money.
  4. Platform for metascience research: The high volume of new Micro-ARPA PMs will create an opportunity to study the effective characteristics of program managers and translate them into insights for larger ARPA programs.
  5. Administrative efficiency: A streamlined, decentralized approach will reduce the administrative burden on both applicants and program officers, making the grant process more agile and responsive. Speedier grants could also help the NSF meet its stated dwell-time goal of processing 75% of proposals within six months, which it has failed to do consistently in recent years.

Conclusion

Small, fast grant programs are vital to supporting transformative research. By adopting a more flexible, decentralized model, we can significantly enhance their impact. The proposed changes will foster a more dynamic and innovative scientific ecosystem, ultimately driving progress and addressing urgent global challenges.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Do small grants really matter?

Absolutely. The research supports it, but the stories bring it to life. Ask any scientist about the first grant they received for their own work, and you’ll often hear about a small, pivotal award that changed everything. These grants may not make headlines, but they ignite careers, foster innovation, and open doors to discovery.

Can this be done with reallocating existing budget and under existing authority?

Almost certainly within the existing budget. As for authority, it’s theoretically possible but politically fraught. NSF program officers already have the discretion to use RAPID and EAGER grants as they see fit, so in principle, a program officer could be directed to use only those mechanisms. That mandate would essentially transform their role into a Micro-ARPA program manager. The real challenge lies in the culture and practice of grant-making. There’s a reason that DARPA operates independently from the rest of the military branches’ research and development infrastructure.

Why would dedicated staffing and a Micro-ARPA program structure overcome administrative challenges?

In a word: focus. Program officers juggle large, complex grants that demand significant time and resources. Small grants, though impactful, can get lost in the shuffle. By dedicating staff to exclusively manage these smaller, fast grants, we create the conditions to test an important hypothesis: that administrative burden and competing priorities, not lack of interest, are the primary barriers to scaling small grant programs. It’s about clearing the runway so these grants can truly take off.

Why not just set goals for greater usage of EAGER and RAPID?

Encouraging greater use of EAGER and RAPID is a good start, but it’s not enough. We need to think bigger, trying alternative structures and dedicated programs that push the boundaries of what’s possible. Incremental change can help, but bold experiments are what transform systems.

Removing Arbitrary Deployment Quotas for Nuclear Force Posture

Every year since Fiscal Year 2017, Congress has passed an amendment to the National Defense Authorization Act (NDAA) that prohibits reducing the quantity of deployed intercontinental ballistic missiles (ICBMs) below 400. This amendment inhibits progress on adapting the U.S. ICBM force to meet the demands of the new geostrategic environment and restricts military planners to a force structure based on status quo rather than strategic requirements. Congress should ensure that no amendments dictating the size of the ICBM force are included in future NDAAs; this will allow the size of the ICBM force to be determined by strategic military requirements, rather than arbitrary quotas set by Congress. 

Challenge and Opportunity

Congressional offices that represent the districts where ICBMs are located work together on a bipartisan basis to advocate for the indefinite sustainment of their ICBM bases. This group of lawmakers, known as the “Senate ICBM Coalition,” consists of senators from the three ICBM host states – Wyoming, Montana, and North Dakota – plus Utah, where ICBM sustainment and replacement activities are headquartered at Hill Air Force Base. Occasionally, senators from Louisiana – the home state of Air Force Global Strike Command – have also participated in the Coalition’s activities.

Over the past two decades, the members of the coalition have played an outsized role in dictating U.S. nuclear force posture for primarily parochial reasons – occasionally even overriding the guidelines set by U.S. military leaders – in order to prevent any significant ICBM force reductions from taking place. 

In 2006, for example, this congressional coalition successfully reduced the mandated life expectancy for the Minuteman III ICBM from 2040 to 2030, thus accelerating the deployment of a costly new ICBM by effectively shortening the ICBM’s modernization timeline by a decade. As U.S. Air Force historian David N. Spires describes in On Alert: An Operational History of the United States Intercontinental Ballistic Missile Program, 1945-2011, “Although Air Force leaders had asserted that incremental upgrades, as prescribed in the analysis of land-based strategic deterrent alternatives, could extend the Minuteman’s life span to 2040, the congressionally mandated target year of 2030 became the new standard.”

In another notable example, during the Fiscal Year 2014 NDAA negotiations, senators from the ICBM coalition inserted amendments into the bill that explicitly blocked the Obama administration from conducting the environmental assessment that would be legally necessary in order to reduce the number of ICBM silos. In a subsequent statement, coalition members specifically boasted about how they had overruled the Pentagon on the ICBM issue: “the Defense Department tried to find a way around the Hoeven-Tester language, but pressure from the coalition forced the department to back off.” 

By inserting these types of amendments into successive NDAAs, the ICBM coalition has been highly successful in preventing the Department of Defense from fully determining its own nuclear force posture. 

The force posture of the United States’ ICBMs, however, is not – and has never been – sacred or immutable. The current force level of 400 deployed ICBMs is not a magic number; the number of deployed U.S. ICBMs has shifted dramatically since the end of the Cold War, and it could be reduced even further for a variety of reasons, including those related to national security, financial obligations, the United States’ modernization capacity, or a good faith effort to reduce deployed U.S. nuclear forces.

When the Bush administration deactivated the “Odd Squad” at Malmstrom Air Force Base in the mid-2000s, for example – bringing the ICBM force down from 500 to 450 – the main driver was economics, not security: the 564th Missile Squadron used completely different and more expensive communications and launch control systems from the rest of the Minuteman III force. (See: David N. Spires, On Alert: An Operational History of the United States Intercontinental Ballistic Missile Program, 1945-2011, 2nd ed., p. 185.)

By legislating an arbitrary quota for the number of ICBMs that the United States must deploy at all times, Congress is leaving successive presidential administrations and Departments of Defense hamstrung with regard to shaping future force posture. 

Plan of Action

In order to ensure that the Department of Defense is no longer held to arbitrary force posture requirements that have little basis in military strategy, Congress should ensure that no amendments dictating the size of the ICBM force are included in future NDAAs. If such amendments are included, however, they should be based on strategic needs established by presidential and Defense Department guidance documents. 

Conclusion

The stakes of inaction on this front are significant, particularly from a cost perspective, as the maintenance of this arbitrary 400-ICBM quota has served to heavily bias procurement outcomes towards significantly more expensive options. For example, in part due to this arbitrary 400-ICBM quota, the Pentagon’s procurement process for the next-generation ICBM yielded a preference for producing a brand-new missile – the Sentinel – rather than life-extending the current Minuteman III, deploying a smaller number, and cannibalizing the retired missiles for parts that would facilitate the life-extension process. 

While this adapted life-extension could likely have been accomplished at a fraction of the cost of building a completely new missile, the Sentinel acquisition program is now approximately 81 percent over budget and more than two years behind schedule relative to Pentagon estimates from 2020. This overrun constituted a “critical” breach of the Nunn-McCurdy Act. 

To that end, it is imperative that Congress take action to ensure that ICBM force posture is shaped by security requirements, rather than parochial and arbitrary metrics that limit the financial and military flexibility of both the Pentagon and the President. 


Slow Aging, Extend Healthy Life: New incentives to lower the late-life disease burden through the discovery, validation, and approval of biomarkers and surrogate endpoints

The world is aging. Today, roughly two thirds of deaths worldwide result from age-related conditions. Biological aging imposes significant socio-economic costs, increasing health expenses, reducing productivity, and straining social systems. Between 2010 and 2030, Medicare spending is projected to nearly double – to $1.2 trillion per year. Yet the costly diseases of aging can be therapeutically targeted before they become late-stage conditions like Alzheimer’s. Slowing aging could alleviate these burdens, reducing the load on unpaid caregivers, lowering medical costs and mortality rates, and enhancing productivity. But a number of market failures and misaligned incentives stand in the way of extending the healthy lifespan of aging populations worldwide. New solutions are needed to target diseases before they become life-threatening or debilitating, moving from retroactive sick care toward preventative healthcare.  

The new administration should establish a comprehensive framework to incentivize the discovery, validation, and regulatory approval of biomarkers as surrogate endpoints to accelerate clinical trials and increase the availability of health-extending drugs. Reliable biomarkers or surrogate endpoints could meaningfully reduce clinical trial durations, and enable new classes of therapeutics for non-disease conditions (e.g., biological aging). An example is how LDL (a surrogate marker of heart health) helped enable the development of lipid-lowering drugs. The current lack of validated surrogate endpoints for major late-life conditions is a critical bottleneck in clinical research. Because companies do not capture the majority of the benefit from the (expensive) validation of biomarkers, the private sector under-invests in biomarker and surrogate endpoint validation. This leads to countless lives lost and to trillions of public dollars spent on age-related conditions that could be prevented by better-aligned incentives.  It should be an R&D priority for the new administration to fund the collection and validation of biomarkers and surrogate endpoints, then gain regulatory approval for them. As we explain below, the existing FNIH Biomarkers Consortium does not fill this role.

Currently, companies are understandably hesitant to invest in validation without clear rewards or regulatory pathways. The proposed framework would encourage private companies and laboratories to contribute their biomarker data to a shared repository. This repository would expedite regulatory approval, moving away from the current product-by-product assessment that discourages data sharing and collaboration. Establishing a broader pathway within the FDA for standardized biomarker approval would allow validated biomarkers to be recognized for use across multiple products, reducing the existing incentives to safeguard data while increasing the supply of validated biomarkers and surrogate endpoints. Importantly, this would accelerate the development of drugs which holistically extend the healthspan of aging populations in the U.S. by preventing instead of treating late-stage conditions. (Statins similarly helped prevent millions of heart attacks.)

Key players such as the FDA, NIH, ARPA-H, and BARDA should collaborate to establish a streamlined pathway for the collection and validation of biomarkers and surrogate endpoints, allowing these to be recognized for use across multiple products. This initiative aligns with the administration’s priorities of accelerating medical innovation and improving public health with the potential to add trillions of dollars in economic value by making treatments and preventatives available sooner. This memo outlines a framework applicable to various diseases and conditions, using biological aging as a case study where the validation of predictive and responsive biomarkers may be vital for significant breakthroughs. Other critical areas include Alzheimer’s disease and amyotrophic lateral sclerosis (ALS), where the lack of validated surrogate endpoints significantly hinders the development of life-saving and life-improving therapies. By addressing these bottlenecks, we can unlock new avenues for medical advancements that will profoundly improve public health and mitigate the fast-growing, nearly trillion-dollar Medicare spend on late-life conditions.

Challenge and Opportunity

By 2029, the United States will spend roughly $3 trillion yearly – half its federal budget – on adults aged 65 and older. A good portion of these funds will go toward Medicare-related expenses that could be prevented. Yet the process of bringing preventative drugs to market is lengthy, costly, and currently lacking in commercial incentives. Even for therapeutics that target late-stage diseases, drug development often takes 10+ years, and cost estimates range from $300 million to $2.8 billion. This extensive duration and expense are due, in part, to the reliance on traditional clinical endpoints, which require long-term observation and longitudinal data collection. The burden of chronic diseases is growing, and better biomarkers and surrogate endpoints are needed to accelerate the development of therapeutics that prevent non-communicable diseases and age-related decline. Chronological age, for instance, is a commonly used but inadequate surrogate marker for biological age. This means that, to date, clinical trials of therapeutics designed to improve the biology of aging take decades to validate, rather than years. As a result, pharmaceutical companies find more short-term rewards in treating late-stage diseases, since developing drugs that reduce overall age-related decline requires longer trials and currently uncertain endpoints.

The validation of reliable biomarkers and surrogate endpoints offers a promising solution to this challenge. Biological measures often correlate with and predict clinical outcomes, and can therefore provide early indications of whether a treatment is effective. If sufficiently predictive, biomarkers can serve as surrogate clinical endpoints, potentially reducing the duration and cost of clinical trials. Validated biomarkers must accurately predict clinical outcomes and be accepted by regulatory authorities, yet the validation process is underfunded because individual companies lack sufficient commercial incentive to share biomarkers that would then serve as a public good. (From a purely financial standpoint, companies are better off targeting diseases with known endpoints.) 
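To illustrate why shortened trials matter financially, the following deliberately crude model spreads the cited $300 million–$2.8 billion development cost evenly over a 10-year timeline and asks what a hypothetical three-year reduction from a validated surrogate endpoint would save. Every input other than the cited cost range and timeline is an assumption for illustration only:

```python
# Illustrative-only model of how a validated surrogate endpoint could
# change trial economics. The 10-year timeline and $300M-$2.8B cost
# range come from the text; the even per-year cost split and the
# assumed 3-year reduction are hypothetical inputs.
def development_cost(years, annual_cost):
    """Crude linear model: total cost scales with development time."""
    return years * annual_cost

baseline_years = 10
assumed_years_saved = 3  # assumption: surrogate endpoint shortens trials
for total_cost in (300e6, 2.8e9):  # low and high estimates from the text
    annual = total_cost / baseline_years
    saved = development_cost(assumed_years_saved, annual)
    print(f"Total ${total_cost / 1e9:.1f}B -> rough savings ${saved / 1e6:.0f}M")
```

Real savings would depend on discounting, phase structure, and probability of success; the point is simply that even modest time reductions translate into very large absolute sums at these cost levels.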

The most prominent existing efforts to advance biomarkers and surrogate endpoints are the Foundation for the National Institutes of Health’s (FNIH) Biomarkers Consortium and the FDA’s Biomarker Qualification Program. Established in 2006, the Biomarkers Consortium is a public-private partnership aimed at advancing the development and use of biomarkers in medical research. Meanwhile, the FDA’s qualification program was the result of the 21st Century Cures Act, passed in 2016, which underscored the critical role biomarkers play in accelerating medical product development. The Act mandated the FDA to implement a more transparent and efficient process for biomarker qualification. 

Despite the Consortium’s ambitious goals, the rate of biomarker qualification by the FDA has been slow. Since its inception in 2006, only a small number of biomarkers have been successfully qualified. This sluggish progress has been a source of criticism for stakeholders, especially given the high level of resources and collaboration involved. For example, the process of validating biomarkers for osteoarthritis under the Consortium’s “PROGRESS OA” project has been ongoing since Phase 1 and still faces hurdles before full qualification. We are of the view that this is the result of two issues. Firstly, the qualification process, which involves FDA approval, is seen as overly complex and time-consuming. Although the 21st Century Cures Act aimed to streamline the process by creating the qualification pathway, navigating it remains a significant challenge. The difficulty in navigating the regulatory landscape can limit the impact of Biomarkers Consortium (BC) projects. The Kidney Safety Project, for example, faced substantial regulatory hurdles before finally achieving the first qualification of a clinical safety biomarker. Secondly, even though the Consortium operates in a precompetitive space, there are ongoing challenges related to data sharing. Companies may still hesitate to share critical data that could advance biomarker validation out of concern for losing a competitive edge, which hampers collaboration. To address these issues, it is crucial to implement a framework that promotes data sharing in the academic and private sectors, providing strong incentives for the validation and regulatory approval of biomarkers, while improving regulatory certainty through a standardized process for surrogate endpoint validation.

The current boom in biotechnology underscores the urgency of addressing persisting inefficiencies. Without changes, we face a significant bottleneck in proving the efficacy of new drugs. This is exacerbated by Eroom’s Law—the observation that drug discovery is becoming slower and more expensive over time. This growing inefficiency threatens to hinder the development of new, life-saving treatments at a time when the American population is aging and rapid medical advancements are crucial to deter increasing medical and social costs. In just 11 years—between 2018 and 2029—the U.S. mandatory spending on Social Security and Medicare will more than double, from $1.3 trillion to $2.7 trillion per year. Yet the costly diseases of aging can be therapeutically targeted before they become late-stage conditions like Alzheimer’s.  For federal policymakers, taking immediate action to improve data sharing and biomarker validation processes is vital. Failure to do so will not only stifle innovation but also delay the availability of critical therapies that could save countless lives and accelerate economic growth in the long run. Prompt policy intervention is essential to capitalize on the current advancements in biotechnology and ensure the development of new life-saving tests, tools, and drugs.

Implementing pull-incentives for data sharing now can help the United States adjust to its new demographic structure, in which older adults make up a growing share of the population while fertility rates decline. It can also mitigate the escalating costs and timelines of clinical trials, and accelerate the approval of life-saving, health-extending drugs. If our proposed framework is successfully implemented, a robust pool of biomarker data will be established, significantly facilitating the discovery and validation of biomarkers. This will result in several key advancements, including shortened clinical trial durations, increased R&D investment, faster drug approvals, and even increased drug efficacy. Additionally, new drug classes targeting non-disease endpoints, such as biological aging, could be developed. Just as the discovery of LDL as a surrogate marker of heart health was critical in enabling the testing and development of statins, the discovery of clinical-grade biomarkers may unlock new therapeutics designed to target the mechanisms that drive human aging, slowing down the progression of age-related diseases (like cancers) before they become deadly and socio-economically expensive.

Plan of Action

To address the challenge of inefficient data sharing, validation, and approval of biomarkers, we propose implementing a series of pull-incentives aimed at encouraging pharmaceutical companies to contribute their relevant biomarker data to a shared repository and undertake the necessary research and analysis for public validation. These validated biomarkers can then be formally accepted by regulators as surrogate endpoints for drug approval, accelerating the drug development process and reducing late-life costs.

Recommendation 1. An NIH-FDA initiative for Biomarkers and Surrogate Endpoints Within the NIA

Most existing agencies focus on single, often late-stage diseases. This is at odds with a holistic understanding of human biology. A new initiative within the National Institute on Aging (NIA) could be devoted to the discovery, collection, and validation of biomarkers and surrogate endpoints for overall human health and age-related decline. Most National Institutes of Health funds are currently devoted to the diseases of aging (think cancers, Alzheimer’s, heart disease, or Parkinson’s). Within the NIA, research on Alzheimer’s disease alone receives roughly eight times more funding than the biology of aging, with few human-relevant results. Every federal agency and U.S. individual would benefit from better biomarkers of long-term health and from an understanding of how to measure the biology of aging. Yet no single agent has the incentives to collect and validate this data, for instance by shouldering the costs of validating predictive and responsive biomarkers of aging.

This new initiative could also be devoted to the development of preclinical, human-relevant methodologies that could broadly facilitate or streamline drug development. In 2022, the FDA Modernization Act 2.0 approved the use of in vitro and in silico New Approach Methodologies (NAMs) like cell-based assays (e.g. organs-on-chips) or computer models (like virtual cells) in preclinical development to reduce or replace animal studies, especially “where no pharmacologically relevant animal species exists.” This may be the case for human aging, where no single animal model reflects the full complex biology of our aging process. 

At present, these technologies cannot accurately represent the multifactorial processes of aging, and they cannot model entire organisms. Much work remains to be done even to understand how to “code” aging into organs-on-chips. Yet if supplemented by approaches like in vivo pooled screening, next generations of human-relevant in vitro or in silico methodologies (like virtual cells) could be infused with the complex data needed to accelerate clinical trial results and increase drug efficacy. For in vitro and in silico models to reproduce key aspects of aging biology, a better understanding of how human aging works in living organisms, and of which markers to include to represent it either virtually or in vitro, may be needed. Yet pharmaceutical companies, startups, health insurance firms, and even research hospitals again lack the incentives to shoulder the costs of collecting and validating this type of data. This means a new office within a federal agency may be needed to supply these incentives.

Recommendation 2. New Data-sharing Incentives 

The specific incentives would need to be developed in collaboration with policymakers and industry stakeholders, but a few options are outlined below:

Pull Incentives

One possibility is offering transferable Priority Review Vouchers (PRVs) or similar pull incentives to companies that share their biomarker data. PRVs are currently awarded by the FDA to companies developing drugs for neglected tropical diseases, rare pediatric diseases, or medical countermeasures. A PRV allows the holder to expedite the FDA review of another drug from 10 months to 6 months, and holds significant financial value. Offering transferable PRVs for drugs designed to target biological aging, for instance, could create the incentives needed for pharmaceutical companies to target early-stage age-related conditions before they turn into diseases.  

The creation of a new PRV category would require legislative action. Our proposed NIH-FDA initiative would be well positioned to oversee the issuance of PRVs, working with government agencies and think tanks to determine, for instance, what an “aging therapeutic” means, and what a company needs to achieve to gain a PRV for a longevity drug. The Alliance for Longevity Initiatives, for instance, has developed an advanced approval pathway for health-extending drugs that directly target the biology of aging. Another possible strategy would be for the FDA to encourage drugs that target multiple disease indications at once, perhaps offering discounts or incentives for every extra biomarker or surrogate endpoint validated. This could effectively encourage the development of drugs that do more than marginally improve on existing interventions. 

We acknowledge that an overabundance of PRVs can saturate the market, decreasing their value and weakening the intended pull incentive for pharmaceutical innovation. One response would be to require that proposals to issue additional PRVs include a comprehensive market-impact analysis to mitigate unintended economic consequences. Expanding the number of PRVs can also place extra demands on the FDA’s limited resources, potentially leading to longer approval times for other essential medications, even though PRV holders often delay redemption, preventing an immediate influx of priority review applications. The PRV system may also inadvertently favor larger, well-established pharmaceutical companies that have the means to acquire and leverage PRVs effectively, creating barriers for smaller firms and startups. These are spillover problems worth solving, given the potential upside of mitigating late-life disease costs and encouraging drugs that holistically improve the human healthspan.

Biomarker Data Sharing as a Condition of Federal Funding

Federal funding recipients are legally obligated to make their research publicly accessible through agency-specific policies aimed at advancing open science, a mandate strengthened by the 2022 OSTP Public Access Memorandum. Despite this clear mandate, implementation of public access policies has been uneven across federal agencies, with progress varying due to differences in resources, technical infrastructure, and agency-specific priorities. The updated guidance presents an opportunity for agencies not only to meet immediate data-sharing requirements but also to expand policy scopes to include essential clinical data, such as biomarker data from clinical trials. To meet these goals, agencies should ensure that funding agreements explicitly require the publication of comprehensive biomarker data and that suitable repositories are available to store and share these critical datasets effectively.

Case Study: Project NextGen

A prime example of the potential success of such initiatives is Project NextGen, a program led by BARDA in collaboration with the NIH to advance the next generation of COVID-19 vaccines and treatments. As part of its vaccine program, Project NextGen includes centralized immunogenicity assays with the overarching goal of establishing correlates of protection, which could serve as surrogate biomarkers for next-gen vaccines. These assays are collected during Phase 2b vaccine studies sponsored by Project NextGen, which have been designed to measure a number of secondary immunogenicity endpoints including systemic and mucosal immune responses. Developers share their assays so that they can be used as a public good, in return for federal funding. This effort demonstrates the feasibility and benefits of a federally led effort to share assay data to advance biomarker validation and drug development. 

Recommendation 3. Create and Manage a Data Repository 

To enhance collaborative research and ensure the efficient use of publicly funded clinical data, we recommend establishing a secure data repository. This repository would serve as a centralized platform for data submission, storage, and access. Management of the repository could be undertaken by a federal agency, such as the NIH, leveraging its experience with the Biomarkers Consortium, perhaps in partnership with non-governmental organizations like the Biomarkers of Aging Consortium. Drawing from existing models, such as Project NextGen’s assay data management, can provide valuable insights into the implementation and operationalization of the repository.

The cost of establishing and maintaining this repository, including data storage, management, and access controls, would be dwarfed by the socio-economic returns it could provide. This repository can facilitate data sharing, protect sensitive information, and promote a collaborative environment that accelerates biomarker validation and approval, while ensuring pharmaceutical companies that their hard-earned data is safely stored. 

The securely stored data in the repository would primarily be accessible to qualified researchers, clinicians, and policymakers involved in biomarker research and development, including academic researchers, pharmaceutical companies, and public health agencies. Access would be granted through an application and review process. The benefits of this repository are multifaceted: it would accelerate research by providing a centralized database, enhance collaboration among scientists and institutions, reduce redundancy and improve data management, ensure data security through robust access controls, and support regulatory bodies with comprehensive datasets for more informed decision-making.

Recommendation 4. Create A Regulatory Pathway with Broader Application 

To accelerate the adoption of validated biomarkers and surrogate endpoints in drug development, we propose the creation of a streamlined regulatory approval process within the FDA. This new pathway would establish clear criteria and standardized procedures for biomarker evaluation and approval, facilitating their recognition for use across multiple products and therapeutic areas.

Currently, the FDA’s Center for Drug Evaluation and Research (CDER) operates the Biomarker Qualification Program (BQP), which allows drug developers to seek regulatory qualification for specific contexts of use. While this program fosters collaboration between the FDA and external stakeholders, biomarkers are qualified on a case-by-case basis, limiting their broader applicability across different drug development programs.

Additionally, the FDA maintains a Table of Surrogate Endpoints that have been used as the basis for drug approvals under the accelerated approval pathway. However, this table primarily serves as a reference and does not comprehensively address the need for a streamlined approval process for biomarkers and surrogate endpoints.

By developing a framework that moves away from traditional product-by-product assessments, the FDA could reduce existing barriers to biomarker and surrogate endpoint discovery and approval. This approach would encourage data sharing and collaboration among pharmaceutical companies and research institutions, leading to faster validation and broader acceptance of these critical tools in drug development.

This proposal builds upon existing legislative efforts, such as the 21st Century Cures Act of 2016, which includes provisions to accelerate medical product development and supports the use of biomarkers and surrogate endpoints in the regulatory process. Furthermore, it aligns with the FDA’s ongoing efforts to provide clarity on evidentiary criteria for biomarker qualification, as outlined in the 2018 guidance document “Biomarker Qualification: Evidentiary Framework.”

Inspiration for this approach can be drawn from the Advanced Approval Pathway for Longevity Medicines (AAPLM) proposed by the Alliance for Longevity Initiatives (see the AAPLM whitepaper). The AAPLM includes provisions such as a special approval track, a priority review voucher system, and indication-by-indication patent term extensions, which align economic incentives with the transformative health improvements that longevity medicines can provide. These measures offer a valuable template for facilitating the recognition and approval of biomarkers. Adding to the existing FDA table of surrogate endpoints that can serve as the basis for drug approval or licensure, and referencing existing collaborations between the NIH and FDA, such as the Biomarkers Consortium, can provide a robust foundation for new biomarker evaluations. Ultimately, this regulatory innovation would support the development of life-saving drugs, enhance public health outcomes, and meaningfully contribute to economic growth by bringing effective treatments to market more quickly.

Conclusion

Today, over two-thirds of all deaths in the United States are the result of an age-related condition. The burden of non-communicable diseases is growing, and better biomarkers and surrogate endpoints are needed to target diseases before they are life-threatening or debilitating. The next administration should implement a comprehensive framework to promote data sharing and incentivize the validation and regulatory approval of biomarkers and surrogate endpoints. This aligns directly with the administration’s goal to make Americans healthy. These solutions can substantially reduce the duration and cost of clinical trials, accelerate the development of life-saving drugs, and improve public health outcomes. It is possible and necessary to create an environment that encourages and rewards pharmaceutical companies for sharing crucial data that accelerates medical innovation. By discovering and validating predictive and responsive biomarkers of health and disease, new therapeutic classes can be developed to directly target biological aging and prevent most forms of cancer, heart disease, frailty, vulnerability to severe infection, and Alzheimer’s. This will enable the United States to remain at the forefront of medical research, and to respond to the growing demographic crisis of aging populations in declining health.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable, and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication, several government websites have been taken offline. We apologize for any broken links to once-accessible public data.

Frequently Asked Questions
Why should the federal government be the entity to act rather than the private sector?

A number of market failures stand in the way of the discovery and validation of predictive, reliable, and responsive biomarkers. First, it is currently expensive to test drugs in multiple disease indications, which means pharmaceutical companies are often incentivized to focus on late-stage diseases (e.g., delaying death from a terminal cancer by three months), since this drug class is more easily and quickly trialed. The FDA’s approval framework also strongly assumes that a treatment ought to modulate a single outcome (think life/death, or heart disease/no heart disease). Therapeutics that target biological aging, for instance, would take decades to test without validated biomarkers or widely accepted surrogate endpoints.


Aging research, for instance, has seen a 70-fold increase in venture capital funding over the last decade. Yet so far (and this is a critical asterisk) misaligned commercial incentives have mostly optimized for unproven supplements, imprecise biological-age-tracking apps, and unsafe experimental therapies or cosmetics. Even the most well-meaning investors and founders in “longevity” often end up developing drugs for single disease indications (like osteoarthritis or obesity) to avoid bankruptcy, or as a path to self-fund their intent of developing drugs that more holistically target the mechanisms that drive aging. Market incentives need to be aligned with the pressing social needs these therapeutics could respond to.


The federal government is uniquely positioned to coordinate large-scale initiatives that require significant resources and regulatory oversight. While private sector companies play crucial roles in drug development, they often lack the incentives to self-coordinate and the authority to drive comprehensive data-sharing and biomarker validation efforts.


Cohesion from data collection through regulatory approval of biomarkers will be key if surrogate endpoints are actually going to be adopted. Having the federal government oversee all stages would ensure this cohesion.

You mention the Biomarkers Consortium. Why have they not succeeded in addressing this problem? How is your solution different?

The Biomarkers Consortium has made meaningful strides in advancing biomarker research, but they have not succeeded in acquiring sufficient data. The consortium relies on voluntary, precompetitive collaboration without providing strong financial or legislative incentives for data sharing. It does not maintain a centralized, secure data repository, and struggles with fragmented data sharing. It also lacks influence over the FDA’s biomarker qualification process, which remains complex and time-consuming. This has resulted in slow progress due to hesitancy from private entities to share valuable data. Our solution differs by directly addressing this data-sharing hurdle through a series of targeted incentives that reduce the case-by-case assessment currently required, and enable broader application of validated biomarkers across multiple drugs and therapeutic areas.


By introducing legislative changes to authorize patent extensions and expand Priority Review Vouchers (PRVs), we create compelling reasons for companies to share their data. Additionally, our proposal includes the development of a centralized data repository with a streamlined regulatory approval process, inspired by the Advanced Approval Pathway for Longevity Medicines (AAPLM). This approach not only incentivizes data sharing but also provides a clear and efficient pathway for biomarker validation and regulatory acceptance. By leveraging existing frameworks and offering tangible rewards, our solution scales incentives to match the socioeconomic benefits that broader access to the wealth of existing but undersupplied biomedical data could unlock.

The FDA already has an Accelerated Approval Pathway. Why do you need another pathway to validate biomarkers?

The FDA’s Accelerated Approval Pathway is indeed a valuable tool that allows for the approval of drugs based on surrogate endpoints that are reasonably likely to predict clinical benefit. This pathway requires substantial evidence showing that these surrogate endpoints are linked to clinical outcomes, usually gathered from rigorous clinical trials. However, it typically applies to surrogate endpoints validated for specific uses or products. Our goal is to establish a new pathway that supports the validation and use of surrogate endpoints across multiple products. By validating biomarkers that can be used across various drugs, we can streamline the drug development process, reducing the time and cost associated with bringing new therapies to market. This broader approach would enhance efficiency, reduce drug development time and costs, and promote innovation by encouraging pharmaceutical companies to invest in research, knowing that successful biomarkers can have wide-reaching applications.

Who is likely to push back on this proposal, and how can that hurdle be overcome?

Pharmaceutical companies could push back on this proposal due to concerns over losing their competitive advantage by sharing proprietary data. They might reasonably fear that sharing valuable biomarker data could erode their market position and intellectual property. By involving pharmaceutical companies in the development of the proposal, we can better understand their concerns and tailor incentives accordingly. One effective strategy would be to offer significant financial incentives, such as Priority Review Vouchers (PRVs) or patent term extensions, to companies that share their data. These incentives can offset the perceived risks and provide tangible benefits that make data sharing more attractive. By making PRVs transferable and offering additional incentives to small biotechnology companies, this policy can be implemented without overly favoring large pharmaceutical companies. Another possible strategy, noted above, would be for the FDA to encourage drugs that target multiple disease indications at once, perhaps offering discounts or incentives for every extra biomarker or surrogate endpoint validated.

Fostering a collaborative environment where the benefits of shared data (such as accelerated drug approvals and reduced R&D costs) are clearly communicated can further reduce hurdles. Engaging economists to quantify the long-term economic gains to individual pharmaceutical companies as well as to society, while demonstrating how shared data can lead to industry-wide advancements, can further encourage participation. With competitive enough incentives, a framework can be created that balances the interests of pharmaceutical companies with the broader goal of advancing medical innovation and public health.

What is the first step needed to get this proposal off the ground? Is there a pilot or scaled-back version of your proposal that could be advanced to start gaining traction and demonstrate proof of concept?

The first step to get this proposal off the ground is to introduce legislative changes that authorize patent extensions and expand the eligibility for Priority Review Vouchers (PRVs). These legislative changes will create the necessary incentives for pharmaceutical companies to participate in the program by offering tangible benefits that offset the risks associated with data sharing.


Simultaneously, developing and launching a pilot program for the centralized data repository is crucial. This pilot should focus on a specific subset of biomarkers for high-priority diseases and non-disease indications to demonstrate the feasibility and benefits of the proposed framework. By starting with a targeted approach, we can gather initial data, test the processes, and make any necessary adjustments before scaling up the program. This pilot will not only help garner support from stakeholders by showcasing the practical benefits of the framework but also refine the approach based on real-world feedback, ensuring a smoother and more effective broader implementation.

What has doomed similar efforts in the past, and how will your proposal avoid those pitfalls?

Similar efforts in the past have often been hindered by a lack of incentives for data sharing and collaboration, along with fragmented regulatory processes. Our proposal aims to overcome these obstacles by introducing strong incentives which will encourage companies to share their data. Moreover, we propose creating a standardized regulatory pathway for biomarker approval, which will streamline the process and reduce fragmentation. By involving key federal agencies, we ensure a coordinated and comprehensive implementation, thus avoiding the pitfalls that have doomed past efforts.

What justifies the recommended course of action for the policy’s implementation vs. other possible options?

The status quo is unacceptable. Millions of lives are lost or debilitated every year due to the slow and costly process of bringing new drugs to market, which is hindered by the lack of validated biomarkers and surrogate endpoints. The recommended course of action leverages existing regulatory frameworks and incentives that have proven effective in other contexts, such as the use of Priority Review Vouchers (PRVs) for neglected tropical diseases. By adapting these mechanisms to encourage data sharing and biomarker validation, we can build on established successes while addressing the specific challenges of the current drug development landscape.


This approach ensures that we utilize proven strategies to accelerate drug development and approval, reducing the overall time and cost associated with clinical trials. By fostering a collaborative environment and providing tangible incentives, we can significantly enhance the efficiency and effectiveness of the drug development process. This targeted strategy not only addresses the immediate needs but also sets a foundation for continuous improvement and innovation in the field of medical research, ultimately saving lives and improving public health outcomes.

Creating Competitive Career Pathways for Low-Income Americans through a Sector-Focused Employment Training Initiative

In order to help all American workers and strengthen the national economy, the next administration should establish a Sector-Focused Employment Training Initiative (SETI) to coordinate and expand evidence-based sectoral employment training programs across the U.S. workforce. SETI would help address persistent wage inequality and limited career advancement for low-income workers, equipping millions of Americans to contribute to and prosper alongside critical U.S. industries.

Sectoral employment training programs offer a proven, evidence-based way to generate substantial and long-term employment and earnings gains for participants. These programs provide low-income and non-traditional workers (i.e., workers without a high school or college degree) with access to higher-wage jobs in better paying sectors with opportunities for advancement. There has been encouraging movement towards integrating sectoral approaches into federal job training programs, but without coordination and firm grounding in evidence, these programs risk being fragmented and ineffective. SETI would work closely with federal programs, local workforce development systems, and key industries to coordinate and expand sectoral employment programs in direct response to local workforce needs. Sectoral employment programs target in-demand, high-wage occupations and focus on breaking down barriers to employment through training, mentorship, and comprehensive supports. 

SETI would ultimately create pathways for millions of Americans to enter in-demand careers with long-term growth trajectories, strengthening both the competitiveness and prosperity of U.S. industries. 

Challenge and Opportunity

The state of wage inequality and economic mobility in the United States

Workers in the U.S. have experienced decades of skyrocketing wage inequality, with the highest earners increasingly pulling away from middle- and low-wage workers. From 1979 to 2018, the top 0.1 percent of earners saw their earnings grow fifteen times faster than the bottom ninety percent. In 2022, the median weekly earnings of Black full-time workers were approximately 83 percent of those of all full-time workers. These disparities often stem from structural barriers to opportunity faced by people of color in the American job market. Despite the historically fast wage growth that low-wage workers experienced from 2019 to 2023, large racial, educational, and gender wage gaps persist. These gaps are especially pernicious as American workers are encountering major affordability challenges, including meeting basic needs such as housing and healthcare.

It is increasingly difficult for non-college-educated workers to gain employment in high-paying occupations with career advancement opportunities. Opportunities for upward mobility in many industries with a high concentration of low-wage workers are limited, and though some pathways exist, access to them is unequal. Black, Hispanic, and female workers disproportionately experience low wage mobility. The downsizing of once-prosperous industries has also left many Americans, especially those without college degrees, with fewer opportunities for jobs with meaningful career advancement. For example, from 1979 to 2019 America lost 6.7 million manufacturing jobs (a 35 percent decrease), which previously gave adults with a high school education a path into the middle class. Many of these jobs were replaced by lower-wage service jobs, and the manufacturing jobs now resurging are at risk of going unfilled due to skill gaps.

As rapid advancements in automation and artificial intelligence are projected to shift the types of jobs Americans hold, policymakers must act now to ensure that workers can obtain the skills needed to thrive in a changing labor market and to meaningfully shrink wage inequities. Historically, technological change in labor markets has unequally benefitted college-educated workers to the detriment of non-college-educated workers, but it does not have to in the future. AI has the potential to restore middle wage jobs, but only if it is implemented thoughtfully. Policymakers must urgently invest in evidence-based sector-focused employment training programs to ensure workers benefit from, rather than are displaced by, emerging technologies. These targeted training programs will provide workers with in-demand skills for careers with long-term potential for upward mobility. 

Creating competitive career pathways through sector-focused employment programs

Sectoral employment programs train job seekers, typically low-income adults and those with non-traditional backgrounds (i.e., those whose educational and/or training background differs from traditional expectations for their role), for high-quality, in-demand employment with opportunities for longer-term career advancement. In contrast to traditional job training programs, sectoral employment programs target in-demand occupations and focus on breaking down barriers to employment through training, mentorship, and additional supports. Programs work with local employers to identify in-demand jobs with high starting wages and opportunities for advancement, and equip participants with the technical and general career readiness skills and credentials to succeed both in the targeted jobs and in the labor market more broadly. Sectors typically include healthcare, IT, and manufacturing.

Among many workforce development models, sectoral employment training programs stand out for their proven ability to produce and sustain significant wage gains. A review of four randomized evaluations of several sectoral employment programs highlights their effectiveness in consistently boosting employment and earnings. These programs lead to substantial and lasting earnings gains (a 12–34 percent increase), primarily by helping workers access better-paying, higher-quality industries and occupations. Additionally, these programs provide training in certifiable and transferable skills, which can enable job mobility.

Sectoral employment programs can also be cost-effective by increasing employee income, which in turn generates additional tax revenue for the government to help offset some of the program costs. Preliminary, ongoing research by Nathan Hendren and co-authors suggests the returns from this increased tax revenue can be substantial. For example, initial analyses of three key sectoral employment programs (Project QUEST, Year Up, and WorkAdvance) suggest that, using estimated incomes over the observed follow-up time frames alone, the benefits they provide to participants exceed the net cost to the government, meaning that the marginal value of public funds (MVPF) is greater than one. What is more, if the increase in earnings observed over the study period persisted for 20 years or more, the increase in tax revenue would offset the program costs entirely.
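The MVPF logic above can be made explicit. In the standard formulation (a general sketch of the Hendren and Sprung-Keyser framework, not figures from these specific studies):

```latex
% Marginal value of public funds: benefits delivered per net dollar of government spending.
% The "added tax revenue" term is the fiscal externality from higher participant earnings.
\mathrm{MVPF} = \frac{\text{willingness to pay of participants}}{\text{program cost} - \text{added tax revenue}}
```

An MVPF greater than one means participants value the program at more than its net cost to the government; and if added tax revenue grows to equal the program cost, the denominator falls to zero, which is the sense in which a program can "pay for itself."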

Meeting a moment for American workers

SETI would build on recent federal investments and a strong bipartisan movement to support the American worker. There is significant bipartisan support for strengthening national infrastructure and technological advancement by investing in workforce development, as evidenced by the passage of the Bipartisan Infrastructure Law (BIL) and the Creating Helpful Incentives to Produce Semiconductors Act (CHIPS). Nine regional Workforce Hubs help implement federal investments to ensure Americans get connected to the quality jobs created through these significant federal investments. Importantly, additional key infrastructure for advancing workforce development programs already exists through the Workforce Innovation and Opportunity Act (WIOA), which has a goal of bringing about increased federal coordination for workforce development programs. WIOA workforce development programs are provided and coordinated through approximately 3,000 One-Stop centers (also known as American Job Centers) nationwide, governed through local Workforce Development Boards and coordinated through the Department of Labor’s Employment and Training Administration (ETA). 

Furthermore, the U.S. Department of Commerce (DOC) has made a suite of recent investments in workforce development. Through a $500 million allocation from the American Rescue Plan, the DOC’s Economic Development Administration (EDA)’s Good Jobs Challenge awarded funds to 32 industry-led workforce training partnerships to develop workforce training systems in 2022. As of December 2023, 11,000 workers have been trained and 3,000 participants have secured jobs through the Good Jobs Challenge. In FY24, EDA will provide an additional 5–8 awards to regional workforce training systems that establish sectoral partnerships, though this is still not sufficient to meet the clear demand for Good Jobs Challenge funding, which initially received $6.4 billion in funding requests from over 500 applicants.

In 2023, the DOL’s Chief Evaluation Office and the ETA funded the Sectoral Strategies and Employer Engagement Portfolio (SSEEP), which includes three grant programs totaling approximately $188 million in funding for workforce development strategies that build relationships with employers in specific sectors to provide tailored training and good jobs to participants. Targeted sectors include renewable energy, transportation, broadband infrastructure, healthcare, climate resiliency, and hospitality. Importantly, evidence and evaluation are embedded within SSEEP. The portfolio includes a formative study, implementation studies, and assessments to identify sites for impact evaluation. The DOL continues to push for increased investment in sectoral employment strategies, putting forth a Sectoral Employment through Career Training for Occupational Readiness (SECTOR) program to seed and scale industry-led and worker-centered sectoral training partnerships in its FY25 budget proposal. SECTOR was included in the FY25 Presidential Budget, but did not make it into either the House or Senate FY25 Labor-HHS-Education appropriations bills.

These significant investments and proposals for expansion of sectoral workforce development approaches are encouraging, but they risk being uncoordinated in a federal employment and training program ecosystem that spans 43 programs across 9 agencies. Since the Government Accountability Office (GAO) recommended reducing overlap and fragmentation between these programs in 2019, DOL has taken several steps to increase coordination. The DOL should build upon this progress and establish a SETI to coordinate and broaden sectoral employment strategies across programs. 

Plan of Action

The next administration should establish a Sector-Focused Employment Training Initiative (SETI), an inter-agency initiative based jointly within the Department of Labor’s Employment and Training Administration and the Department of Commerce’s Economic Development Administration. SETI would work closely with various federal intermediaries, including local Workforce Development Boards and regional Workforce Hubs, to coordinate and expand sector-focused training programs within American Job Centers, Workforce Hubs, SSEEP and other federal initiatives, trade associations, community colleges, and local and national nonprofits. SETI would support the expansion of existing evidence-based programs like Per Scholas and Year Up as well as the establishment of new evidence-based sector-focused job training programs. Additionally, SETI would provide technical assistance to local workforce development systems on how to implement these programs and match job seekers with evidence-based training providers. It would also promote continuous improvement by supporting rigorous evaluations of promising new models. To establish SETI, the next administration should take the following specific steps:

Recommendation 1. The President should call upon Congress to direct federal funding to SETI through the annual Labor-HHS-Education appropriations bill.

This could be achieved by securing new funding through the federal budget and/or proposing tax incentives for employers that participate in the initiative. Broadly, the goal of SETI is to fund, coordinate, and expand sector-focused training programs across American Job Centers, federal workforce development initiatives, trade associations, community colleges, and local and national nonprofits. SETI would coordinate existing sector-focused training approaches across agencies to maximize current investments and would expand sector-focused approaches through programs such as SECTOR (which would be funded as part of SETI). SETI would ensure that sectoral employment programming is evidence-based, effective, and coordinated, and would include mechanisms for monitoring and evaluation to support continuous program improvement. To help fund this initiative in the future, the federal government could commission a GAO assessment of the array of workforce development programs across the country to identify opportunities to redistribute funding away from less effective models. 

Recommendation 2. Establish the structure of SETI, which will include a guiding task force, an Executive Director, and supporting personnel.

Recommendation 3. Beginning with implementation pilots, SETI should provide technical assistance and funding to local Workforce Development Boards, regional Workforce Hubs, and other intermediaries implementing federal workforce development initiatives to launch and scale sectoral employment programming.

Recommendation 4. SETI should encourage and fund rigorous evaluations, including randomized evaluations, in partnership with research labs and consulting firms, to continuously assess and improve SETI’s sectoral employment programs.

Evidence from these evaluations can help policymakers and practitioners identify effective models that should be scaled up. During the technical assistance phase, SETI personnel should embed monitoring and evaluation practices into the setup of sectoral employment programs. SETI should share successful strategies and practices identified through evaluations with states, localities, and training providers to ensure continuous improvement and widespread adoption of effective models. 

Conclusion 

The next administration should establish a Sector-Focused Employment Training Initiative (SETI) to expand access to quality, evidence-based sectoral employment training programs to help millions of American workers prosper. A SETI would coordinate various government job training investments and efforts by setting best practices, providing technical assistance, and delivering further funding to expand sectoral employment programs. An effective, coordinated approach to sectoral employment training programs is critical to reduce wage inequality and ensure the long-term prosperity of workers and in-demand industries during a time of rapid technological advancement.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What makes an effective sectoral employment program?

The most effective sectoral employment programs include a combination of:



  • Upfront screening for applicants on basic skills and motivation to best target program resources

  • Occupational skills training targeted to high-wage sectors and leading to an industry-recognized certificate and/or credentials

  • Career readiness training (sometimes referred to as soft skills) on things like time management, critical thinking, and conflict management

  • Wraparound support services for participants, such as those related to job placement and retention as well as counseling and support from social workers on personal or other challenges

  • Strong connections to employers in the targeted industries


A key component of ensuring participants are placed in higher-paying, more secure employment is the programs’ efforts to build relationships with employers in the targeted industries. Programs generally leverage these relationships to secure spots for participants or to help them get hired through a referral process.


Some well-known examples of effective sectoral employment programs are Year Up and Per Scholas. Year Up is a year-long program for young adults with a high school diploma (or equivalent) that starts with a six-month phase of classroom training on occupational and career readiness skills, followed by a six-month internship phase in which participants work in entry-level positions at local employers, focusing on IT and business and financial operations roles. Per Scholas targets the IT sector and utilizes the WorkAdvance program model, providing career readiness services, occupational skills training, job development and placement services, and post-employment retention and advancement services.

How are sectoral employment programs different from traditional job and job training programs, including those provided through WIOA? What does the research say about traditional jobs programs in contrast to sectoral employment programs?

The core idea behind sectoral employment programs is that improvements in employment-related skills are strategically directed toward industries with strong and rising labor demand and high-wage potential. Additionally, the programs focus on building relationships with companies and on intermediary services like training and mentoring to break down barriers to employment for workers with non-traditional backgrounds for the targeted jobs. Together, these two forces have produced durable gains in earnings and labor market advancement: randomized evaluations of sectoral employment programs have found substantial and lasting earnings gains. A key component of sectoral employment programs is getting participants into in-demand jobs with high wages and potential for career growth. Accordingly, the earnings gains are driven by increasing the share of participants working in higher-wage jobs rather than by increased employment rates or hours worked, likely because participants gain employment in the targeted sectors.


Before the rise of sectoral employment programs, job training programs tended to help participants get jobs that they otherwise would have gotten on their own a few months later. Many of these training programs did not break down barriers to careers that typically employed people with college degrees and/or the right connections. In addition, some traditional job training programs have taken a more segmented approach, focusing only on providing training, job search assistance, or soft skills. This stands in contrast to sectoral employment training programs, which take a more holistic approach.

Why should the federal government be the entity to act rather than the private sector or state/local government?

The private sector tends to undersupply sectoral training in transferable skills useful to multiple employers in particular sectors. This is because individual firms worry that rival firms will poach their trainees, so they risk losing the return on their training investment to other employers. On an individual level, lack of information about training opportunities and limited resources to invest in training can also serve as barriers for workers. A federally coordinated sectoral training initiative that leverages intermediaries to provide training and other important services can bypass these barriers, and the proposed structure for SETI is aligned with WIOA’s existing approach. The federal government is well positioned to provide national, unified guidance on how to implement the principles of effective programs in line with the evidence, while local Workforce Development Boards can provide expert knowledge on the localized needs of their communities and promising employer partnerships. Additionally, given the limited capacity of state and local entities, a federal SETI would provide support for jurisdictions to implement effective sectoral employment programs for their communities.

What are the opportunities for future research related to sectoral employment programs?

Future research about sectoral employment programs can help advance implementation to increase the upward mobility of even more Americans, which is why it is critical that a SETI spur further rigorous evaluation. Key opportunities for future research include:



  • Investigating the effectiveness of sectoral employment programs that have a remote component versus more intensive, on-site programs, and whether current programs are effective when expanded through online learning. This will help inform if remote expansion allows for more rapid and lower-cost scaling up of successful evidence-based training programs.

  • Testing whether changes to wraparound supports and other program components are needed to maintain the effectiveness of sectoral employment programs if upfront screening criteria are modified to enable a broader population of workers to access them. Such an effort may provide a pathway for more workers to access quality jobs, but it may also reveal reduced effectiveness in a broader population.

  • Understanding whether employers who hire through sectoral employment programs change their broader hiring practices to be more inclusive of people with non-traditional backgrounds, thereby creating more opportunity for such candidates.