Agenda for an American Renewal

Imperative for a Renewed Economic Paradigm

So far, President Trump’s tariff policies have generated significant turbulence and appear to lack a coherent strategy. His original tariff schedule included punitive tariffs on friends and foes alike on the mistaken basis that trade deficits are necessarily the result of an unhealthy relationship. Although many of these tariffs have been gradually paused or reduced since April 2, their uneven rollout (and subsequent rollback) continues to generate tremendous uncertainty for policymakers, consumers, and businesses alike. This process has weakened America’s geopolitical standing by encouraging other countries to seek alternative trade, financial, and defense arrangements. 

However, notwithstanding the uncoordinated approach to date, President Trump’s mistaken instinct for protectionism points to an underlying truth: American manufacturing communities have not fared well in the last 25 years, and China’s dominance in manufacturing poses an ever-growing threat to national security. After China’s admission to the WTO in 2001, its share of global manufacturing grew from less than 10% to over 35% today. At the same time, America’s share of manufacturing shrank from almost 25% to less than 15%, with employment shrinking from more than 17 million at the turn of the century to under 13 million today. These trends also create a deep geopolitical vulnerability for America: in the event of a conflict with China, we would be severely outmatched in our ability to build critical physical goods. For example, China produces over 80% of the world’s batteries and over 90% of consumer drones, and it has a 200:1 shipbuilding capacity advantage over the U.S. While not all manufacturing is geopolitically valuable, the erosion in strategic industries, which went hand-in-hand with the loss of key manufacturing skills in recent decades, poses potential long-term challenges for America.

In addition to its growing manufacturing dominance, China is now challenging America’s preeminence in technology, having leveraged many of the skills gained in science, engineering, and manufacturing for lower-value-add industries to compete in higher-end sectors. DeepSeek demonstrated that China can natively generate high-quality artificial intelligence models, an area in which the U.S. took its lead for granted. Meanwhile, BYD rocketed past Tesla in EV sales, accounting for 22% of global sales in 2024 as compared to Tesla’s 10%. China has also been operating an extensive satellite-enabled secure quantum communications channel, designed to prevent eavesdropping, since 2016. 

China’s growing leadership in advanced research may give it a sustained edge beyond its initial gains: according to one recent analysis of frontier research publications across 64 critical technologies, global leadership has shifted dramatically to China, which now leads in 57 research domains. These are not recent developments: they are the product of a series of five-year plans, the best known of which is Made in China 2025, giving China an edge in many critical technologies that will continue to grow if not met by an equally determined American response.

An Integrated Innovation, Economic Foreign Policy, and Community Development Approach 

Despite China’s growing challenge and recent self-inflicted damage to America’s economic and geopolitical relationships, America still retains many enduring advantages. The U.S. still has the largest economy, the deepest public and private capital pools for promising companies and technologies, and the world’s leading universities; it has the most advanced military, continues to count most of the world’s other leading armed forces as formal treaty allies, and remains the issuer of the global reserve currency. Ordinary Americans have benefited greatly from these advantages in the form of access to cutting-edge products and cheaper goods that increase their effective purchasing power and quality of life – notwithstanding Secretary Bessent’s statements to the contrary.

The U.S. would be wise to leverage its privileged position in high-end innovation and in global financial markets to build “industries of the future.” However, the next economic and geopolitical paradigm must be genuinely equitable, especially to domestic communities that have been previously neglected or harmed by globalization. For these communities, policies such as the now-defunct Trade Adjustment Assistance program were too slow and too reactive to help workers displaced by the “China Shock,” which is estimated to have caused up to 2.4 million direct and indirect job losses. 

Although jobs in trade-affected communities were eventually “replaced,” the jobs that came after were disproportionately lower-earning roles, accrued largely to individuals who had college degrees, and were taken by new labor force entrants rather than providing new opportunities for those who had originally been displaced. Moreover, as a result of ineffective policy responses, this replacement took over a decade and has contributed to devastating effects: look no further than the rate at which “deaths of despair” among white individuals without a college degree skyrocketed after 2000.

Nonetheless, surrendering America’s hard-won advantages in technology and international commerce, especially in the face of a growing challenge from China, would be an existential error. Rather, our goal is to address the shortcomings of previous policy approaches to the negative externalities caused by globalization. Previous approaches have focused on maximizing growth and redistributing the gains, but in practice, America failed to do either by underinvesting in the foundational policies that enable both. Thus, we are proposing a two-pronged approach that focuses on spurring cutting-edge technologies, growing novel industries, and enhancing production capabilities while investing in communities in a way that provides family-supporting, upwardly mobile jobs as well as critical childcare, education, housing, and healthcare services. By investing in broad-based prosperity and productivity, we can build a more equitable and dynamic economy.

Our agenda is intentionally broad (and correspondingly ambitious) rather than narrowly focused on manufacturing communities, even though current discourse is centered on trade. This is not simply a “political bargain” that provides greater welfare or lip-service concessions to hollowed-out communities in exchange for a return to the prior geoeconomic paradigm. Rather, we genuinely believe that economic dynamism led by an empowered middle-class worker, whether in manufacturing or a service industry, is essential to America’s future prosperity and national security – a future in which economic outcomes are not determined by parental income and in which black-white disparities are closed far faster than the current pace, which would take more than 150 years.

Thus, the ideas and agenda presented here are neither traditionally “liberal” nor “conservative,” “Democrat” nor “Republican.” Instead, we draw upon the intellectual traditions of both segments of the political spectrum. We agree with Ezra Klein’s and Derek Thompson’s vision in Abundance for a technology-enabled future in which America remembers how to build; at the same time, we take seriously Oren Cass’s view in The Once and Future Worker that the dignity of work is paramount and that public policy should empower the middle-class worker. What we offer in the sections below is our vision for a renewed America that crosses traditional policy boundaries to create an economic and political paradigm that works for all.

Policy Recommendations 

Investing in American Innovation

Given recent trends, it is clear that there is no better time to re-invigorate America’s innovation edge by investing in R&D to create and capture “industries of the future,” re-shoring capital and expertise, and working closely with allies to expand our capabilities while safeguarding those technologies that are critical to our security. These investments will enable America to grow its economic potential, providing fertile ground for future shared prosperity. We emphasize five key components of renewing America’s technological edge and manufacturing base:

Invest in R&D. Increase federally funded R&D, which has declined from 1.8% of GDP in the 1960s to 0.6% of GDP today. Of the $200 billion federal R&D budget, just $16 billion is allocated to non-healthcare basic science, an area the government is better positioned than the private sector to fund given the positive spillover effects of public funding. A good start is fully funding the CHIPS and Science Act, which authorized over $200 billion over 10 years for competitiveness-enhancing R&D investments that Congress has yet to appropriate. Funding these efforts will be critical to developing and winning the race for future-defining technologies, such as next-gen battery chemistries, quantum computing, and robotics, among others.
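To give a rough sense of scale, the arithmetic below is a sketch only: the ~$29 trillion GDP figure is an assumption of ours, not a number from this agenda, while the R&D shares and the ~$200 billion budget come from the text above.

```python
# Illustrative arithmetic only. The ~$29T GDP figure is an assumption;
# the R&D shares and the ~$200B budget come from the text above.
gdp = 29e12                               # assumed U.S. GDP, roughly $29 trillion
current_share, peak_share = 0.006, 0.018  # ~0.6% of GDP today vs. ~1.8% in the 1960s

current_rd = gdp * current_share    # ~$174B, in line with the ~$200B budget cited above
peak_equivalent = gdp * peak_share  # ~$522B if the 1960s share were restored

baseline_budget = 200e9                  # ~$200B federal R&D budget
low_increase = 0.10 * baseline_budget    # ~$20B per year (10% increase)
high_increase = 0.25 * baseline_budget   # ~$50B per year (25% increase)

print(f"Federal R&D at 0.6% of GDP: ~${current_rd / 1e9:.0f}B")
print(f"Federal R&D at the 1960s 1.8% share: ~${peak_equivalent / 1e9:.0f}B")
print(f"Proposed 10-25% budget increase: ~${low_increase / 1e9:.0f}B-${high_increase / 1e9:.0f}B per year")
```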

Capability-Building. Develop a coordinated mechanism for supporting translation and early commercialization of cutting-edge technologies. Otherwise, the U.S. will cede scale-up in “industries of the future” to competitors: for example, Exxon developed the lithium-ion battery, but lost commercialization to China due to the erosion of manufacturing skills in America that are belatedly being rebuilt. However, these investments are not intended to be a top-down approach that selects winners and losers: rather, America should set a coordinated list of priorities (leveraging roadmaps such as the DoD’s Critical Technology Areas), foster competition amongst many players, and then provide targeted, lightweight financial support to industry clusters and companies that bubble to the top.

Financial support could take the form of a federally funded strategic investment fund (SIF) that partners with private sector actors by providing catalytic funding (e.g., first-loss loans). This fund would focus on bridging the financing gap in the “valley of death” as companies transition from prototype to first-of-a-kind / “nth-of-a-kind” commercial products. In contrast to previous attempts at industrial policy, such as the Inflation Reduction Act (IRA) or the CHIPS Act, these investments should carry minimal compliance burdens and focus on rapidly deploying capital to communities and organizations that have proven to possess a durable competitive advantage.

Encourage Foreign Direct Investment (FDI). Provide tax incentives and matching funds (potentially from the SIF) for companies that build manufacturing plants in America. This will bring critical expertise that domestic manufacturers can adopt, especially in industries requiring deep technical know-how that America would need to redevelop (e.g., shipbuilding). By striking investment deals with foreign partners, America can “learn from the best” and subsequently improve upon their techniques domestically. In some cases, it may be more efficient to “share” production, with certain components being manufactured or assembled abroad while America ramps up its own capabilities.

For example, in shipbuilding, the U.S. could focus on developing propulsion, sensor, and weapon systems, while allies such as South Korea and Japan, which together build almost as much tonnage as China, convert some shipyards to defense production and send technical experts to accelerate the development of American shipyards. In exchange, they would receive select additional access to cutting-edge systems and benefit financially from investing in American shipbuilding facilities and supply chains.

Immigration. America has long been described as a “nation of immigrants.” Immigrants’ role in innovation is impossible to deny: they founded 46% of Fortune 500 companies and account for 24% of all founders; they make up 19% of the overall STEM workforce but account for nearly 60% of doctorates in computer science, mathematics, and engineering. Rather than spurning them, the U.S. should attract more highly educated immigrants by removing barriers to working in STEM roles and offering accelerated paths to citizenship. At the same time, American policymakers should acknowledge the challenges caused by illegal immigration. One solution is to pass legislation such as the Border Control Act of 2024, which had bipartisan support and would have increased border security, supplemented by a “points-based” immigration system such as Canada’s, which emphasizes educational credentials and in-country work experience.

Create Targeted Fences. Employ tariffs and export controls to defend nascent, strategically important industries such as advanced chips, fusion energy, or quantum communications. However, rather than employing these indiscriminately, tariffs and export controls should be focused on ensuring that only America and its allies have access to cutting-edge technologies that shape the global economic and security landscape. They are not intended to keep foreign competition out wholesale; rather, they should ensure that burgeoning technology developers gain sufficient scale and traction by accelerating through the learning curve.

Building Strong Communities

Strong communities are the foundation of a strong workforce, without which new industries will not thrive beyond a small number of established tech hubs. However, strengthening American communities will require the country to address the core needs of a family-sustaining life. Childcare, education, housing, and healthcare are among the largest budget items for families and have been proven time and again to be critical to economic mobility. Nevertheless, they are precisely the areas in which costs have skyrocketed the most, as has been frequently chronicled by the American Enterprise Institute’s “Chart of the Century.” These essential services have been underinvested in for far too long, creating painful shortages for the communities that need them most. As such, they form the core pillars of our domestic reinvestment plan. Addressing them means grappling with the underlying drivers of their cost and scarcity, including limited state capacity, regulatory and licensing barriers, and low productivity growth in service-heavy care sectors. A new policy agenda that addresses these fundamental supply-side issues is needed to reshape the contours of this debate.

Expand Childcare. Inadequate childcare costs the U.S. economy $122 billion in lost wages and productivity as otherwise capable workers, especially women, are forced to reduce hours or leave the labor force. The problem is compounded by supply shortages: more than half the population lives in a “childcare desert,” where there are more than three times as many children as licensed slots. Addressing these shortages will ease affordability, enable workers to stay in the workforce, and allow families to move up the income ladder.

Fund Early Education. Investments in early childhood education have been demonstrated to generate compelling ROI, with high-quality studies such as the Perry preschool study demonstrating up to $7 – $12 of social return for every $1 invested. While these gains are broadly applicable across the country, they would make an even greater difference in helping to rebuild manufacturing communities by making it easier to grow and sustain families. Given the return on investment and impact on social mobility, American policymakers should consider investing in universal pre-K.

Invest in Workforce Training and Community Colleges. The cost of a four-year college education now exceeds $38K per year, indicating a clear need for cheaper BA degrees as well as credible alternatives. At the same time, community colleges can be reimagined and better funded to focus on preparing students for high-paying jobs in sectors with critical labor shortages, many of which are in or adjacent to “industries of the future.” Some of these roles, such as IT specialists and skilled tradespeople, are essential to manufacturing. Others, such as nursing and allied healthcare roles, will help build and sustain strong communities.

Build Housing Stock. America has a shortage of 3.2 million homes. Simply put, the country needs to build more houses to address the cost of living and enable Americans to work and raise families. While housing policy is generally decided at lower levels of government, the federal government should provide grants and other incentives to states and municipalities to defray the cost of developing affordable housing; in exchange, state and local jurisdictions should relax zoning regulations to enable more multi-family and high-density single-family housing. 

Expand Healthcare Access. American healthcare is plagued with many problems, including uneven access and shortages in primary care. For example, the U.S. has 3.1 primary care physicians (PCPs) per 10,000 people, whereas Germany has 7.1 and France has 9.0. As such, the federal government should focus on expanding the number of healthcare practitioners (especially primary care physicians and nurses), building a physical presence for essential healthcare services in underserved regions, and incentivizing the development of digital care solutions that deliver affordable care.
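As a rough illustration of what closing that gap would require (a sketch only; the ~335 million U.S. population figure is our assumption, while the densities come from the text above), the shortfall is on the order of one to two hundred thousand physicians:

```python
# Illustrative only: PCP densities per 10,000 people are from the text;
# the U.S. population of ~335 million is an assumed figure.
US_POPULATION = 335_000_000
US_DENSITY = 3.1        # U.S. PCPs per 10,000 people
PEER_DENSITIES = {"Germany": 7.1, "France": 9.0}

def additional_pcps(target_density: float) -> float:
    """Additional PCPs the U.S. would need to reach a target density."""
    return (target_density - US_DENSITY) / 10_000 * US_POPULATION

for country, density in PEER_DENSITIES.items():
    print(f"To match {country}: ~{additional_pcps(density):,.0f} more PCPs")
# Roughly 134,000 (Germany) and 198,000 (France) -- far beyond the 4,800
# physicians contemplated in the Senate bill discussed later in this report.
```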

Allocating Funds to Invest in Tomorrow’s Growth

Investment Requirements

While we view these policies as essential to America’s reinvigoration, they also represent enormous investments that must be paid for at a time when fiscal constraints are likely to tighten. To convey the size of the financial requirements and the trade-offs involved, we lay out each of the key policy prescriptions above, relying on bipartisan proposals wherever possible, many of which have been scored by the Congressional Budget Office (CBO) or another reputable institution or agency. Where this was not possible, we created estimates based on the key policy goals to be accomplished. Although trade deals and targeted tariffs are likely to have some budget impact, we did not evaluate them given multiple countervailing forces and political uncertainties (e.g., currency impacts).

Allocating Funds to Invest in Tomorrow’s Growth | Investment Requirements
(All estimates below are amortized per annum.)

Core Pillar: Increase R&D
Policy Proposal: Increase the 2024 baseline federal R&D budget of ~$200B by 10 – 25%, with greater focus on basic science, healthcare, energy, and next-gen computing. At minimum, Congress should appropriate the $200 billion that has been authorized for R&D over the next 10 years, but it could earmark further increases for additional R&D, translation, and commercialization support.
Low Estimate: $20 billion (10% increase in the budget)
High Estimate: $50 billion (25% increase in the budget)

Core Pillar: FDI Subsidies
Policy Proposal: In an ideal world, the amount of FDI subsidies would adapt to the size of the investment required. A more practical approach would set aside funding each year for strategic purposes using creative financing structures (e.g., first-loss financing) that crowd in other market participants. These public-private partnerships would be much more focused than existing approaches, which take a “peanut butter” approach to allocating funding.
Low Estimate: Congress could reallocate capital from other sources (e.g., the Export-Import Bank, which expects to disburse $11.7 billion in 2025)
High Estimate: $10 – $15 billion

Core Pillar: Childcare Investments
Policy Proposal: The original House version of Build Back Better included provisions that would have capped childcare payments at 7% of income for families under 75% of state median income. It would also have provided funding to expand the supply of care.
Low Estimate: $20 – $30 billion for “supply side” provisions (based on FY22 apportionment of funding)
High Estimate: $50 – $60 billion for all provisions, including demand-side direct assistance

Core Pillar: Universal Pre-K
Policy Proposal: Universal pre-K for children 3 – 4 years of age. The 10-year window for the budget estimate includes build costs for new facilities.
Low Estimate: $20 billion for 4-year-olds only
High Estimate: $35 billion for 3- and 4-year-olds

Core Pillar: Higher Ed Investments
Policy Proposal: Universal community college for individuals who are attending for the first time or re-skilling. “Last dollar” refers to using federal funds after all other sources, while “first dollar” refers to using federal funds before other sources.
Low Estimate: $4.5 billion on a last-dollar basis (assumes 50% ratio)
High Estimate: $9.0 billion on a first-dollar basis

Core Pillar: Housing Supply
Policy Proposal: At minimum, Congress should pass several bipartisan bills, including the Yes In My Backyard Act, the Affordable Housing Credit Improvement Act, and the Choice in Affordable Housing Act. These bills would require community development grantees to adopt high-density zoning, expand the low-income housing tax credit, and support landlords who accept housing vouchers by reducing administrative burdens.
Low Estimate: $630 million for the low-income housing tax credit (based on a similar bill); the voucher bill includes a $500 million upfront investment for voucher administration
High Estimate: More ambitious approaches could further subsidize housing construction. $67 billion was appropriated for affordable housing in 2024; a 10% increase would add ~$7 billion

Core Pillar: Primary Care Expansion
Policy Proposal: Congress could pass the Senate Bipartisan Primary Care and Health Workforce Act, which would build additional community health centers and expand the PCP workforce by 4,800 doctors and 60,000 nurses. In addition, Congress could pass the House Medicaid Primary Care Improvement Act, which would allow Medicaid beneficiaries to access primary care for a flat fee.
Low Estimate: $1.6 billion for the Primary Care and Health Workforce Act
High Estimate: The Primary Care Improvement Act has not yet been scored by the CBO but is likely to cost several billion per year

Core Pillar: Immigration Reform
Policy Proposal: Congress should pass the Border Control Act and implement a points-based immigration system, which would require funds for border enforcement as well as immigration processing.
Low Estimate: The bill would have appropriated $20 billion for improvements in border security
High Estimate: Not scored by the CBO; spending was one-time in nature and would likely require several billion of ongoing appropriations

Core Pillar: Total
Low Estimate: ~$70 – $80 billion at steady state
High Estimate: ~$150 – $175 billion at steady state
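The steady-state totals above can be reproduced, approximately, by summing representative values from each row. The sketch below is ours: where the table gives a range we take a midpoint, and where an item is unscored we plug in placeholder values of a few billion dollars.

```python
# Sketch only: the midpoints and placeholders below are our assumptions, chosen to
# show how the steady-state totals are assembled from the rows above ($B per year).
low = {
    "R&D (+10%)": 20,
    "FDI subsidies": 0,                    # assumes reallocation of existing capital
    "Childcare (supply side)": 25,         # midpoint of $20-30B
    "Universal pre-K (4-year-olds)": 20,
    "Community college (last dollar)": 4.5,
    "Housing bills": 1.1,                  # $630M credit + $500M voucher fund
    "Primary care act": 1.6,
    "Immigration (ongoing)": 2,            # placeholder for "several billion"
}
high = {
    "R&D (+25%)": 50,
    "FDI subsidies": 12.5,                 # midpoint of $10-15B
    "Childcare (all provisions)": 55,      # midpoint of $50-60B
    "Universal pre-K (3- and 4-year-olds)": 35,
    "Community college (first dollar)": 9,
    "Housing (+10% appropriations)": 7,
    "Primary care (both acts)": 4.6,       # $1.6B plus a ~$3B placeholder
    "Immigration (ongoing)": 2,            # placeholder
}
print(f"Low total:  ~${sum(low.values()):.0f}B per year")   # ~$74B, within ~$70-80B
print(f"High total: ~${sum(high.values()):.0f}B per year")  # ~$175B, within ~$150-175B
```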

Potential Pay-Fors

Given the budgetary requirements of these proposals, we looked for opportunities to prune the federal budget. The CBO has laid out a set of budgetary options that collectively could save several trillion dollars over the next decade. In laying out the potential pay-fors, we used two approaches focused on streamlining mandatory spending and optimizing tax revenues in an economically efficient manner. Our first approach includes budgetary options that eliminate unnecessary spending that is distortionary in nature or unlikely to have a meaningful direct impact on the population it is intended to serve (e.g., kickback payments to state health plans). Our second approach includes budgetary options in which the burden would fall upon higher-earning populations (e.g., raising the cap on payroll and Social Security taxes). 

As the table below shows, there is a menu of options available to policymakers that raises funding well in excess of the required investment amounts above, allowing them to pick and choose which are most economically efficient and politically viable. In addition, they can modify many of these options to reduce the size or magnitude of the policy’s effect (e.g., adjust the point at which Social Security benefits for “high earners” are tapered, or raise capital gains rates by one percentage point instead of two). While some of these proposals are potentially controversial, there is a clear and pressing need to reexamine America’s foundational policy assumptions without expanding the deficit, which is already more than 6% of GDP.

Allocating Funds to Invest in Tomorrow’s Growth | Potential Pay-Fors
(All CBO estimates below are amortized per annum.)

Pay-For: Limit state tax “kickbacks” to health care plans
Description and Rationale: Historically, state Medicaid plans have been financed by taxes on health plans. These taxed amounts were then matched by federal funds. To “double up” their funding, states increased taxes on health plans and provided them with a “hold harmless” promise to provide rebates at least equivalent to the taxed amount (often more). Although the initial reconciliation budget proposal would freeze the current tax rates, closing the loophole entirely would eliminate the “kickback”-like effect that current policy provides to health plans.
CBO Estimate: $61 billion

Pay-For: Modify Medicare Advantage payments for risk
Description and Rationale: There are two types of Medicare: Medicare Advantage (MA), which provides capitated payments (fixed amounts) for coverage, and traditional Fee For Service (FFS), which pays on a usage basis. However, the current design of MA has led to upcoding and selection bias, whereby MA patients appear sicker than they actually are; compared to similar risk pools, MA is overpaid by as much as 39%. Although CMS already applies a 5.9% payout reduction for MA plans, increasing the reduction to as much as 20% could levelize payouts and correct for inefficiencies in risk pools.
CBO Estimate: $16 billion (if increasing the risk-based payout reduction from 5.9% to 8%); $160 billion (if increasing it from 5.9% to 20%)

Pay-For: Reduce Social Security for high earners
Description and Rationale: This policy would begin reducing Social Security benefits above the 70th wage percentile and reduce payout factors over a 9-year window. Savings could be increased by reducing benefits above the 50th wage percentile and reducing payout factors over a 5-year window.
CBO Estimate: $5 billion

Pay-For: Raise taxable share of Social Security payroll taxes to 90% of earnings
Description and Rationale: In 2024, the maximum wage subject to Social Security taxes was $168,600, after which employees are no longer subject to the 6.2% tax. However, this cutoff is regressive and shifts the tax burden to lower earners. Increasing the taxable share to 90% would more evenly spread out the tax burden, and further revenues could be generated by taxing all income.
CBO Estimate: $73 billion

Pay-For: Limit itemized deductions to 15% of total value
Description and Rationale: Taxpayers are allowed to itemize certain expenses, including mortgage interest, state and local taxes, and charitable donations, among others. However, these deductions are frequently claimed by high earners, with nearly two-thirds of individuals earning over $500K itemizing expenses. Given the deeply regressive nature of itemization and its distortive effects on key markets (e.g., housing), limitations on itemization could provide additional funding sources, address market inefficiencies, and promote equity.
CBO Estimate: $191 billion

Pay-For: Change taxation of assets transferred at death
Description and Rationale: When a deceased individual passes on assets to an heir at death, the value of the assets is marked at “fair value” to the market (the “basis”). Future capital gains taxes at sale of the asset are based on this value. However, this allows heirs to avoid paying capital gains on value accrued during the deceased individual’s lifetime. Thus, we recommend re-setting the basis based on events that occurred during the lifetime of the deceased individual, which typically results in a value lower than that at death.
CBO Estimate: $20 billion

Pay-For: Impose net investment tax on limited partnerships’ and S corporations’ net profits
Description and Rationale: Individuals who earn over $200,000 are subject to a 3.8% net investment tax (NIT) on qualifying investment income, such as interest, dividends, and capital gains. However, partnership income and S corporation income are not subject to this tax. This policy would make the NIT applicable to these forms of investment income.
CBO Estimate: $42 billion

Pay-For: Total
CBO Estimate: ~$400+ billion of potential savings
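As a quick check, the sketch below (ours, using the per-annum figures from the table above) shows that the options sum comfortably past the $400 billion mark under either Medicare Advantage scenario.

```python
# Per-annum estimates (in $ billions) taken from the table above.
pay_fors = {
    "Limit state tax 'kickbacks' to health plans": 61,
    "Reduce Social Security for high earners": 5,
    "Raise SS taxable share to 90% of earnings": 73,
    "Limit itemized deductions to 15%": 191,
    "Change taxation of assets transferred at death": 20,
    "Net investment tax on partnerships / S corps": 42,
}
ma_options = {"modest (5.9% -> 8%)": 16, "aggressive (5.9% -> 20%)": 160}

base = sum(pay_fors.values())  # $392B before the Medicare Advantage option
for label, amount in ma_options.items():
    print(f"Total with {label} MA reduction: ~${base + amount}B per year")
# ~$408B and ~$552B respectively -- consistent with "~$400+ billion of potential savings."
```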

Conclusion

America is in need of a new economic paradigm that renews and refreshes rather than dismantles its hard-won geopolitical and technological advantages. Trump’s tariffs, should they be fully enacted, would be a self-defeating act that damages America’s economy while leaving it more vulnerable, not less, to rivals and adversaries. However, we also recognize that the previous free trade paradigm was not truly equitable and did not do enough to support manufacturing communities and their core strengths. We believe that our two-pronged approach of investing in American innovation alongside our allies and making critical community investments in childcare, higher education, housing, and healthcare bridges the gap and provides a framework for re-orienting the economy towards a more prosperous, fair, and secure future.

De-Risking the U.S. Bioeconomy by Establishing Financial Mechanisms to Drive Growth and Innovation

The bioeconomy is a pivotal economic sector driving national growth, technological innovation, and global competitiveness. However, the biotechnology innovation and biomanufacturing sector faces significant challenges, particularly in scaling technologies and overcoming long development timelines that do not align with investors’ short-term return expectations. These extended timelines and the inherent risks involved lead to funding gaps that hinder the successful commercialization of technologies and bio-based products. If obstacles like these ‘Valleys of Death,’ the gaps in capital at crucial development junctures that companies and technologies struggle to overcome, are not addressed, the result could be economic stagnation and the loss of the U.S. competitive edge in the global bioeconomy.

Government programs like SBIR and STTR lessen the financial gap inherent in the U.S. bioeconomy, but existing financial mechanisms have proven insufficient to fully de-risk the sector and attract the necessary private investment. In FY24, the National Defense Authorization Act established the Office of Strategic Capital within the Department of Defense to provide financial and technical support for its 31 ‘Covered Technology Categories’, which include biotechnology and biomanufacturing. To address the challenges associated with de-risking biotechnology and biomanufacturing within the U.S. bioeconomy, the Office of Strategic Capital should house a Bioeconomy Finance Program. This program would offer tailored financial incentives such as loans, tax credits, and volume guarantees, targeting both short-term and long-term scale-up needs in biomanufacturing and biotechnology. 

By providing these essential funding mechanisms, the Bioeconomy Finance Program will reduce the risks inherent in biotechnology innovation, encouraging more private sector investment. In parallel, states and regions across the country should develop region-specific strategies, such as investing in necessary infrastructure and fostering public-private partnerships, to complement the federal government’s initiatives to de-risk the sector. Together, these coordinated efforts will create a sustainable, competitive bioeconomy that supports economic growth and strengthens U.S. national security.

Challenge & Opportunity

The U.S. bioeconomy encompasses economic activity derived from the life sciences, particularly in biotechnology and biomanufacturing. The sector plays an important role in driving national growth and innovation. Given its broad reach across industries, impact on job creation, potential for technological advancements, and importance for global competitiveness, the U.S. bioeconomy is a critical sector for U.S. policymakers to support. With continued development and growth, the U.S. bioeconomy promises not only economic benefits but also stronger national security, better health outcomes, and greater environmental sustainability for the country.

Ongoing advancements in biotechnology, including artificial intelligence and automation, have accelerated the growth of the bioeconomy, making it both globally competitive and an important part of the domestic economy. In 2023, the U.S. bioeconomy supported nearly 644,000 domestic jobs, contributed $210 billion to GDP, and generated $49 billion in wages. Biomanufactured products within the bioeconomy span multiple categories (Figure 1). Growth here will drive future economic development and address societal challenges, making the bioeconomy a key priority for government investment and strategic focus.

Figure 1. Bioeconomy Valorization Cascade

Biomanufactured products span a wide range of categories, from pharmaceuticals and chemicals, which require small volumes of biomass but yield high-value products, to energy and heat, which require larger volumes of biomass but result in lower-value products. Additionally, there are common infrastructure synergies, bioprocesses, and complementary input-output relationships that facilitate a circular bioeconomy within bioproduct manufacturing. Source: https://edepot.wur.nl/407896

An important driving force for the U.S. bioeconomy is biotechnology and biomanufacturing innovation. However, bringing biotechnologies to market requires substantial investment, capital, and, most importantly, time. Unlike other technology sectors, which see returns on investment within a short period of time, biotechnology often suffers from a misalignment between scientific and investor expectations. Many biotechnology-based companies rely on venture capital, a form of private equity investment, to finance their operations. However, venture capitalists (VCs) typically operate on short return-on-investment timelines, which may not align with the longer development cycles characteristic of the biotechnology sector (Figure 2). Additionally, the need to operate at large scale and the high capital expenditures (CAPEX) required for commercially profitable production, along with the low profit margins in high-volume commodity production, create further barriers to obtaining investment. While this misalignment is not universal, it remains a challenge for many biotech startups.

The U.S. government has implemented several programs to address the financing void that often arises during the biotechnology innovation process. These include the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, which provide phased funding across all Technology Readiness Levels (TRLs); the DOE Loan Programs Office (LPO), which offers debt financing for energy-related innovations; the DOE Office of Clean Energy Demonstrations, which funds demonstration-scale projects that serve as proof of concept; and the newly established Office of Strategic Capital (OSC) within the DOD (as outlined in the FY24 National Defense Authorization Act), which is tasked with issuing loans and loan guarantees to stimulate private investment in critical technologies. An example is the office’s new Equipment Loan Financing through OSC’s Credit Program.

Figure 2. Representative Biotechnology & Non-Biotechnology Development Cycle Timeline

Biotechnology development timelines typically take 10 or more years to complete and reach the market due to longer R&D and Demonstration & Scale-Up phases, while non-biotechnology development timelines are generally much shorter, at around 5 years.

While these efforts are important, they are insufficient on their own to de-risk the sector to the degree needed to realize the full potential of the U.S. bioeconomy. To effectively support the biotechnology innovation pipeline at critical stages, the government must explore and implement additional financial mechanisms that attract more private investment and mitigate the inherent risks associated with biotechnology innovation. Building on existing resources like the Regional Technology and Innovation Hubs, NSF Regional Innovation Engines, and Manufacturing USA Institutes would help stimulate private sector investment and strengthen the nation’s economic competitiveness. 

The newly established Office of Strategic Capital (OSC) within the DOD is well-positioned to enhance resilience in critical sectors for national security, including biotechnology and biomanufacturing, through large-scale investments. Biotechnology and biomanufacturing inherently require significant CAPEX, that is, expenses related to the purchase, upgrade, or maintenance of physical assets. Scaling these sectors therefore requires substantial amounts of strategic and concessional capital to de-risk and accelerate the biomanufacturing process. By creating, implementing, and leveraging various financial incentives and resources, the Office of Strategic Capital can help build the robust infrastructure necessary for private sector engagement.

To achieve this, the U.S. government should create the Bioeconomy Finance Program (BFP) within the OSC, specifically tasked with enabling and de-risking the biotechnology and biomanufacturing sectors through financial incentives and programs. The BFP should focus on different levels of funding based on the time required to scale, addressing potential ‘Valleys of Death’ that occur during the biomanufacturing and biotechnology innovation process. These funding levels would target short-term (1-2 years) scale-up hurdles to accelerate the biotechnology and biomanufacturing process, as well as long-term (3-5 years) scale-up challenges, providing transformative funding mechanisms that could either make or break entire sectors.

In addition to the federal programs within the BFP to de-risk the sector, states and regions must also make substantial investments and collaborate with federal efforts to accelerate biomanufacturing and biotechnology ecosystems within their own areas. While the federal government can provide a top-down strategy, regional efforts are critical for supporting the sector with bottom-up strategies that complement and align with federal investments and programs, ultimately enabling a sustainable and competitive regional biotechnology and biomanufacturing industry. To facilitate this, states and regions should develop and implement investment initiatives such as resource analyses, infrastructure programs, and a cohesive, long-term strategy focused on public-private partnerships. The federal government can encourage these regional efforts by ensuring continued funding for biotechnology hubs and creating additional opportunities for federal investment in the future.

Plan of Action

To strengthen and increase the competitiveness of the U.S. bioeconomy, a coordinated approach is needed that combines federal leadership with state-level action. This includes establishing a dedicated Bioeconomy Finance Program within the Office of Strategic Capital to create targeted financial mechanisms, such as loan programs, tax incentives, and volume guarantees. Additionally, states must be empowered to support commercial-scale biomanufacturing and infrastructure development, leveraging tech hubs and cross-regional partnerships and building public-private partnerships to expand capacity and foster innovation nationwide.

Recommendation 1. Establish and Fund a Bioeconomy Finance Program

Congress, in the next National Defense Authorization Act, should codify the Office of Strategic Capital (OSC) within DOD and authorize the creation of a Bioeconomy Finance Program (BFP) within the OSC to provide a centralized federal structure for addressing financial gaps in the bioeconomy, thereby increasing productivity and competitiveness globally. In 2024, Congress expanded the OSC’s mission to offer financial and technical support to entities within its 31 ‘Covered Technology Categories,’ including biotechnology and biomanufacturing. In order to build resilience in the sector, maintain a competitive advantage globally, and strengthen national security, these substantial expenditures should be housed within the OSC. Establishing the BFP within the OSC at the DOD would allow for a targeted focus on these critical sectors, ensuring long-term stability and resilience against political shifts. 

The DOD and OSC should leverage their own funding as well as the OSC’s existing partnership with the Small Business Administration to direct $1 billion to set up the BFP and to create and implement initiatives aimed at de-risking the U.S. bioeconomy. The Bioeconomy Finance Program should work closely with relevant federal agencies, such as the DOE, the Department of Agriculture (USDA), and the Department of Commerce (DOC), to ensure a long-term, cohesive strategy for financing bioeconomy innovation and biomanufacturing capacity.

Recommendation 2. Task the Bioeconomy Finance Program with Key Initiatives

A key element of the OSC’s mission and investment strategy is to provide financial incentives and support to entities within its 31 ‘Covered Technology Categories’. By having the BFP design and manage these financial initiatives for the biotechnology and biomanufacturing sectors, the OSC can leverage lessons from similar programs, such as the DOE’s loan programs, to address the unique needs of these critical industries, which are essential for national security and economic growth.

Currently, the OSC has launched a credit program for equipment financing. While this is a necessary first step in fulfilling the office’s mission, the program is open to all 31 ‘Covered Technology Categories’, spreading funding thinly across sectors. To accelerate the bioeconomy and reduce risks in biotechnology and biomanufacturing, it is crucial to allocate resources specifically to these sectors. Therefore, the BFP should take the lead on several key financial initiatives to support the growth of the bioeconomy, including:

Loan Programs

The BFP should develop biotechnology-specific loan programs in addition to the new equipment loan financing program run by the OSC. These loan programs should be modeled after those of the DOE LPO, focusing on biomanufacturing scale-up, technology transfer, and overcoming financing gaps that hinder commercialization.

Example loan programs:

Tax Incentives

The BFP should create tax incentives tailored to the bioeconomy, such as transferable investment and production tax credits. For example, the 45V tax credit for the production of clean hydrogen could serve as a model for similar incentives aimed at other bioproducts.

Example tax incentives:

Volume Guarantees & Procurement Support

To mitigate risks in biomanufacturing, the office should establish volume guarantees for various bioproducts, offering financial assurance to manufacturers and encouraging private sector investment. An initial assessment should be conducted to identify which bioproducts are best suited for such guarantees. Additionally, the office should explore the possibility of procurement programs to increase government demand for bio-based products, further incentivizing industry growth and innovation. This effort should be undertaken in coordination with the USDA’s BioPreferred Program to minimize redundancy and to create a cohesive procurement strategy. In addition, the BFP should look to the procurement innovations promoted by the Office of Federal Procurement Policy to find solutions for forward funding to create a functioning market.

Example Volume Guarantees & Procurement Support:

Recommendation 3. Develop Pipeline Programs to Address Financial and Time Horizon Needs

Utilizing the key initiatives highlighted above, the BFP should create a two-tiered pipeline of financial mechanisms to address both short-term and long-term financial needs. The tiers could potentially include:

Recommendation 4. State-Level Initiatives, Infrastructure Development, and Public-Private Partnerships

While federal efforts are crucial, a bottom-up approach is needed to support biomanufacturing and the bioeconomy at the state level. The federal government can support these regional activities by providing targeted funding, policy guidance, and financial incentives that align with regional priorities, ensuring a coordinated effort toward industry growth. States should be encouraged to complement federal initiatives by developing programs that support commercial-scale biomanufacturing. Key actions include:

By addressing these steps at both the federal and state levels, the U.S. can create a robust, scalable framework for financing biomanufacturing and the broader bioeconomy, supporting the transition from early-stage innovation to commercial success and ensuring long-term economic competitiveness. A good example of how this approach works is the DOE Loan Programs Office, which collaborates with state energy financing institutions. This partnership has successfully supported various projects by leveraging both federal and state resources to accelerate innovation and drive economic growth. This model makes sense for biomanufacturing and biotechnology within the BFP in the OSC, as it ensures coordination between federal and state efforts, de-risks the sector, and facilitates the scaling of transformative technologies.

Conclusion

Biotechnology innovation and biomanufacturing are critical components of the U.S. bioeconomy, which drives innovation, economic growth, and global competitiveness, but these sectors face significant challenges due to the misalignment of development timelines and investment cycles. The sector’s inherent risks and long development processes create funding gaps, hindering the commercialization of vital biotechnologies and products. These challenges, including the ‘Valleys of Death,’ could stifle innovation, slow progress, and cost the U.S. its global leadership in biotechnology if left unaddressed.

To overcome these obstacles, a coordinated and comprehensive approach to de-risking the sector is necessary. The establishment of the Bioeconomy Finance Program (BFP) within the DOD’s Office of Strategic Capital (OSC) offers a robust solution by providing targeted financial incentives, such as loans, tax credits, and volume guarantees, designed to de-risk the sector and attract private investment. These financial mechanisms would address both short-term and long-term scale-up needs, helping to bridge funding gaps and accelerate the transition from innovation to commercialization. Furthermore, building on existing government resources while fostering state-level initiatives such as infrastructure development and public-private partnerships will create a holistic ecosystem that supports biotechnology and biomanufacturing at every stage and substantially de-risks the sector. By empowering regions to develop their own bioeconomy strategies and leverage federal programs with a local presence, like the EDA Tech Hubs, the U.S. can create a sustainable, scalable framework for growth. By taking these steps, the U.S. can not only strengthen its economic position but also lead the world in the development of transformative biotechnologies.

Frequently Asked Questions
BioMADE’s mission focuses on building scale-up infrastructure. Why not channel investments into that instead of creating a separate financing program for the bioeconomy?

BioMADE, a Manufacturing Innovation Institute sponsored by the U.S. Department of Defense, plays an important role in advancing and developing the U.S. bioeconomy. Yet BioMADE currently funds pilot- to intermediate-scale projects, rather than commercial-scale projects. This leaves a significant funding gap and a distinct challenge for the bioeconomy. By contrast, the BFP within the OSC would complement existing efforts by specifically targeting and mitigating risks in the biotechnology and biomanufacturing pipeline that current programs do not address. Furthermore, given that BioMADE is also funded by the DOD, enhanced coordination between these programs will enable a more robust and cohesive strategy to accelerate the growth of the U.S. bioeconomy.

Aren’t EDA Tech Hubs and other regional programs already using Private-Public Partnerships? Why should states and regions focus on them beyond what is already being done?

While Private-Public Partnerships (PPPs) are already embedded in some federal regional programs, such as the EDA Tech Hubs, not all states or regions have access to these initiatives or funding. To ensure equitable growth and fully harness the economic potential of the bioeconomy across the nation, it will be important for regions and states to actively seek additional partnerships beyond federally-driven programs. This will empower them to build their own regional bioeconomies, or microbioeconomies, by tapping into regional strengths, resources, and expertise to drive localized innovation. Moreover, federal programs like EDA Tech Hubs are often focused on advancing existing technologies, rather than fostering the development of new ones. By expanding PPPs across the biotech sector, states and regions can spur broader economic growth and innovation by holistically developing all areas of biotechnology and biomanufacturing, enhancing the overall bioeconomy.

Creating a US Innovation Accelerator Modeled On In-Q-Tel

The U.S. should create a new non-governmental Innovation Accelerator modeled after the successful In-Q-Tel program to invest in small and mid-cap companies creating technologies that address critical needs of the United States. Doing so would directly address the bottleneck in our innovation pipeline that prevents innovative companies from bringing their products to market. 

Challenge and Opportunity 

While the federal government funds basic, early-stage R&D, it leaves product development and commercialization to the private sector. This paradigm has created a so-called innovation Valley of Death: a lack of capital support for the transition to early commercialization that stalls economic growth for many innovation-driven sectors. The U.S. currently leads the world in the formation of companies, but the limitations on capital sources artificially restrict growth. For example, the U.S. currently leads the world in biotechnology and biomedical innovation. The U.S. market alone is worth $600B and is projected to exceed $1.5 trillion. However, international rivals are catching up: China is projected to close the biotechnology innovation gap by 2028. The U.S. must act quickly to protect its lead.

Typically, early and mid-stage innovations are too immature for private capital investors because they present an outsized risk. In addition, private capital tends to be more conservative in rough economic times, which further dries up the innovation pipeline, and investment “fads” tend to starve other fields of capital for years at a time. So, although the U.S. government provides significant early-stage discovery funding for innovation through its various agencies, the grant lifecycle is such that after the creation and initial development of new technologies, there are few mechanisms for continued support to drive products to market. 

It is this period – after R&D but before commercial demonstration – that creates a substantial bottleneck for entrepreneurs, whose work is too advanced for the usual government research and development grant funding but not developed enough to draw private investment. Existing SBIR and STTR grant programs that the government provides for this purpose are typically too small to significantly advance such innovations, while the application process is too cumbersome and slow to draw the interest of many companies. As a result, small businesses created around these technologies often fail because of funding challenges rather than any faults of the innovations they are developing. 

The federal government, therefore, has an opportunity to make the path from lab to market smoother by establishing a mechanism for supporting smaller companies developing innovative products that will substantially improve the lives of Americans. A new U.S. Innovation Accelerator will provide R&D funding to promising companies to accelerate innovations critical to the U.S. by de-risking them as they move toward private sector funding and commercialization. 

Creating the U.S. Innovation Accelerator 

We propose creating a new federally guided entity modeled on In-Q-Tel, the government-funded, not-for-profit venture capital firm that invests in companies developing technologies that can be used by intelligence agencies. Similar to In-Q-Tel, the U.S. Innovation Accelerator would operate independently of the government but leverage federal investments in research and development to ensure that promising new technologies make it to market. 

Because the organization would live outside of the government, it would be able to pay staff wages commensurate with their experience and draw top talent interested in driving innovation across the R&D spectrum. The organization would invest in the development of technology companies and would partner with private capital sources to help develop critical technologies that are too risky for private capital entities to fund on their own. Such capital would allow innovation to flourish. In exchange, the organization could establish requirements for keeping such companies, and their manufacturing operations, in the U.S. for some period after receiving public funding (10 years, for example) to prevent the offshoring of technologies developed with public dollars. The organization would use a variety of funding vehicles to support companies to best match their needs and increase their chances of success. 

Scope

The new U.S. Innovation Accelerator could be established as a sector-specific entity (for example, a biotechnology- and healthcare-focused fund), or it could include a series of portfolios that invest in companies across the innovation spectrum. Both approaches have merits worth exploring: a narrower biomedical fund would have the benefit of quickly deploying capital to accelerate key areas of strategic U.S. interest while proving the concept and setting the stage to expand to other sectors of the economy; alternatively, if a larger pool of funding is available initially, a broader investment portfolio would allow for targeted investments across sectors ranging from biotechnology and agriculture to advanced materials and energy.  

Sources of Capital 

The U.S. Innovation Accelerator can be funded in several ways to create a robust investment vehicle for advancing biotechnology and healthcare innovation. Two potential models follow: the first is a publicly funded revolving fund, similar to In-Q-Tel, while the other would draw capital from retirement and pension funds, providing a return on investment to voluntary investors. 

Appropriations-driven revolving fund. Like In-Q-Tel, Congress could kick-start the Innovation Accelerator through direct appropriations. This annual investment could be curtailed and repaid to the Treasury once the fund starts to realize returns on the investments it makes.

Thrift Savings Plan allocations. The federal employee retirement savings plan, the Thrift Savings Plan (TSP), holds approximately $700 billion in assets across various investment funds. By allowing federal employees to voluntarily allocate investments to the Innovation Accelerator, even a small percentage of TSP assets could provide billions in initial capital. This allocation would be structured as part of the TSP’s broader investment strategy, through the creation of a new specialized SBF fund option for participants.

U.S. State & Public Pension Plans. State and local government pension plans hold assets totaling roughly $6.25 trillion. The Innovation Fund could work with state pension administrators to create investment vehicles that align with their risk-return profiles and support both financial and social impact goals. These would be made available to plan participants in a similar manner to the TSP or through more traditional allocation (an illustrative sizing calculation appears at the end of this section). 

Reforming the SBIR/STTR Programs. The SBIR and STTR programs represent 3.2% of the total federal R&D budget across 11 agencies, but struggle to attract suitable applicants. This is not because there is a lack of need for early-stage innovation funding. Rather, these grants are typically judged and awarded by program managers who have little or no private sector experience, take too long from application to award, and provide insufficient funds for many companies to consider them. Those dollars could instead be allocated to the Innovation Accelerator program and invested in more promising small businesses through a streamlined program that creates a revolving fund, with returns on initial investments reinvested in additional promising companies. The program now uses ceilings for different phases of SBIR grants. These phases are artificial and do not reflect the needs of different types of companies; they should be eliminated and replaced with needs-based funding. USG agencies could issue technology priority guidance to the U.S. Innovation Accelerator and completely offload the burden of running multiple SBIR programs. 

Part of the proposed US Sovereign Wealth Fund. In February of this year, President Trump issued an Executive Order directing the Secretaries of Commerce and Treasury to develop plans for the creation of a sovereign wealth fund. The plan will include recommendations for funding mechanisms, investment strategies, fund structure, and a governance model. Such funds exist in many countries as a mechanism for amplifying the financial return on a nation’s assets and leveraging those returns for strategic benefit and economic growth. We propose that the U.S. Innovation Accelerator falls squarely within the remit of such a sovereign fund and that the fund could serve as a sustainable source of capital to fund the development of innovative companies and products that address critical national challenges and directly benefit Americans in tangible ways. 
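To illustrate the scale the retirement-plan channels above could provide, the sketch below uses the $700 billion TSP and $6.25 trillion pension asset figures from the text; the allocation percentages are hypothetical assumptions of ours, not proposals.

```python
# Illustrative sizing only. Asset totals are from the text above; the
# allocation percentages are hypothetical assumptions, not proposals.
TSP_ASSETS = 700e9          # ~$700B in Thrift Savings Plan assets
PENSION_ASSETS = 6.25e12    # ~$6.25T in state and local pension assets

for pct in (0.005, 0.01, 0.02):   # 0.5%, 1%, 2% voluntary allocations
    tsp_capital = TSP_ASSETS * pct
    pension_capital = PENSION_ASSETS * pct
    print(f"{pct:.1%} allocation: ~${tsp_capital / 1e9:.1f}B from TSP, "
          f"~${pension_capital / 1e9:.1f}B from public pensions")
# Even a 1% voluntary allocation implies roughly $7B from the TSP and over $60B
# from public pensions, showing how quickly these channels could seed the Accelerator.
```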

Structure and operations

The Innovation Accelerator program will be structured similarly to a lean private venture capital firm, with oversight from the U.S. government to inform the strategic deployment of capital toward innovative companies that address unmet national needs. As an independent nonprofit organization or public benefit corporation (PBC), it can keep overhead low and be guided by a small, entrepreneurial Board of Directors drawn from innovative industries and investment professionals to ensure that the organization stays on mission. Further, the organization should collaborate with federal agencies to identify areas of national need and to ensure that promising companies originating from other federal research and development programs have the capital necessary to bring their innovations to market, thus securing a stable innovation pipeline and addressing a longstanding bottleneck that has driven American companies to seek foreign capital or to offshore their operations. 

The professional investment team would bring expertise across a broad set of domains and a proven track record of commercial success. The organization would have a high degree of autonomy while maintaining alignment with national technology priorities and competitive strategy. Transparency and accountability would be paramount, including ongoing, full public accounting of all investments and strategy. 

The primary objective of the Innovation Accelerator will be to deliver game-changing innovations that generate exceptional returns on investment while supporting the development of strategically important technologies. The U.S. Innovation Accelerator will also insulate domestic innovation from the delays and inefficiency caused by the private sector funding cycle.

Conclusion

The U.S. Innovation Accelerator would address a critical gap in the current U.S. innovation pipeline, a gap created as an artifact of the way we fund research. Most public dollars are dedicated to early-stage research, while development and commercialization are normally left to the private sector, which is vulnerable to macroeconomic trends that can stall innovation for years. The U.S. Innovation Accelerator would open up that bottleneck by driving innovation and economic growth while addressing critical national needs. Because the U.S. Innovation Accelerator would exist outside of the federal government, it can be created without an act of Congress: the President could direct his administration through executive action to develop plans and create the U.S. Innovation Accelerator, either as part of the sovereign wealth fund he has proposed or independent of that action. However, to provide initial funding backed by the U.S. government (see funding mechanisms above), Congress would have to appropriate dollars through an existing federal agency. Part of the charter for establishing the U.S. Innovation Accelerator could be repayment of the initial investments to the U.S. Treasury from fund returns.

Frequently Asked Questions
Why is the In-Q-Tel model worth replicating?

In-Q-Tel’s mission is to support a specific need of the U.S. government: to invest in companies that build information technologies of use to the intelligence community. Without such a model, intelligence agencies would have to rely on in-house expertise to develop such technologies. In the case of the U.S. Innovation Accelerator, the organization would invest in companies addressing critical technology gaps facing the entire nation. This would both de-risk such investments for private capital and drive forward innovations that might be out of favor with private investors that lack long-term strategic vision. It would also create a continuum from the advanced research projects agencies through to the marketplace. This has been a particularly vexing issue for those agencies, which generally invest in the research and development of new innovations but not in their advanced development and commercialization.

Why Should the Government Support Another Venture Fund When Private Capital Already Exists?

While there is indeed a significant amount of private capital available, private investors often exhibit risk aversion, particularly when it comes to groundbreaking innovations. Even in times of economic prosperity, private capital tends to gravitate toward trending sectors, driven by groupthink and the desire for near-term exits. This lack of strategic patience leaves certain technology areas that are critical to solving national challenges neglected. For instance, while private funding is readily available for AI/ML healthcare startups, companies developing new antibiotics often struggle to secure investment. This is a prime example of the misalignment between private capital incentives and national health priorities. The proposed U.S. Innovation Accelerator would play a vital role in bridging this gap. It would act as a catalyst for pioneering innovations that tackle critical challenges, are truly novel, and have strong potential for success—areas where private capital might hesitate to invest due to a lack of strategic vision.

Would the U.S. Innovation Accelerator require annual funds in perpetuity?

While In-Q-Tel still receives annual funding from the U.S. government, we propose a model in which the accelerator draws dollars from a variety of sources and repays those sources over time as the businesses it funds succeed. The objective would be for the accelerator to repay those funds within the first 10 years and then remain completely financially independent.

What can we learn from the In-Q-Tel model?

In-Q-Tel sets its investment theses based on the perceived strategic needs of its government agency partners. These are sometimes highly focused needs with small market potential, which can limit the potential for large exits: companies serving such a small market opportunity are often unable to raise the additional investment needed to build new products or adapt existing ones. The U.S. Innovation Accelerator would prioritize investments in innovative companies making products that have a clearly defined public market and dual-use benefit.

How else could the U.S. Innovation Accelerator build on the In-Q-Tel model?
How would the U.S. Innovation Accelerator decide which companies to invest in?
A framework for investments would be essential to the success of the U.S. Innovation Accelerator. The intent is to go where private capital cannot or will not go: creating companies and products that address critical American challenges. The key attributes of these companies would be (1) they address a critical unmet challenge for the nation; (2) the private sector is unwilling or unable to fund them without the backing of the U.S. Innovation Accelerator; and (3) they have a promising path to market and profitability. The U.S. Innovation Accelerator can co-lead rounds to leverage private sector diligence.
Would the U.S. Innovation Accelerator collaborate with federal agencies on funding initiatives?
Yes. Not only would the accelerator be able to identify critical strategic challenges by working with federal agencies, but it would also be able to identify promising companies that have received federal research and development funding and are now seeking commercialization support. This is particularly true for the advanced research projects agencies supporting health and defense research and development, which are often unable to continue support after a proof-of-concept innovation is established.
Would the U.S. Innovation Accelerator be able to collaborate with other investors?
This will be essential for the success of the U.S. Innovation Accelerator. Evidence is already accumulating that co-funding arrangements pairing federally backed venture funding with private sector dollars allow both to invest in more innovative companies.
How is this different from Federally Funded Research and Development Centers?
Federally funded research and development centers (FFRDCs) conduct research and development for the government. They are operated by universities and corporations to fulfill specific needs of government. They are not intended to create companies, drive economic growth, or commercialize innovations.

Empowering States for Resilient Infrastructure by Diffusing Federal Responsibility for Flood Risk Management

State and local failure to appropriately integrate flood risk into planning is a massive national liability – and a massive contributor to national debt. Though flooding is well recognized as a growing problem, our nation continues to address this threat through reactive, costly disaster responses instead of proactive, cost-saving investments in resilient infrastructure.

President Trump’s Executive Order (EO) on Achieving Efficiency Through State and Local Preparedness introduces a nationally strategic opportunity to rethink how state and local governments manage flood risk. The EO calls for the development and implementation of a National Resilience Strategy and National Risk Register, emphasizing the need for a decentralized approach to preparedness. To support this approach, the Trump Administration should mandate that state governments establish and fund flood infrastructure vulnerability assessment programs as a prerequisite for accessing federal flood mitigation funds. Modeled on the Resilient Florida Program, this policy would both improve coordination among federal, state, and local governments and yield long-term cost savings.

Challenge and Opportunity 

President Trump’s aforementioned EO signals a shift in national infrastructure policy. The order moves away from a traditional “all-hazards” approach to a more focused, risk-informed strategy. This new framework prioritizes proactive, targeted measures to address infrastructure risks. It also underscores the crucial role of state and local governments in enhancing national security and building a more resilient nation—emphasizing that preparedness is most effectively managed at subnational levels, with the federal government providing competent, accessible, and efficient support.

A core provision of the EO is the creation of a National Resilience Strategy to guide efforts in strengthening infrastructure against risks. The order mandates a comprehensive review of existing infrastructure policies, with the goal of recommending risk-informed approaches. The EO also directs development of a National Risk Register to document and assess risks to critical infrastructure, thereby providing a foundation for informed decision-making in infrastructure planning and funding.

In carrying out these directives, the Administration must not overlook the risks that flooding poses to critical infrastructure. The frequency and cost of weather- and flood-related disasters are increasing nationwide due to a combination of heightened exposure (infrastructure growth due to population and economic expansion) and vulnerability (susceptibility to damage). As shown in Figure 1, the cost of responding to disaster events such as flooding, severe storms, and tropical cyclones has risen exponentially since 1980, often reaching hundreds of billions of dollars annually.

Financial implications for the U.S. budget have also grown. As illustrated in Figure 2, federal appropriations to the Disaster Relief Fund (DRF) have surged in recent decades, driven by the demand for critical response and recovery services.

Infrastructure across the United States is increasingly vulnerable to flooding. Critical infrastructure – including roads, utilities, and emergency services – is often inadequately equipped to withstand these heightened risks. Many critical infrastructure systems were designed decades ago, when flood risks were lower, and have not been upgraded or replaced to account for changing conditions. The upshot is that when floods occur today, they often result in significant deficiencies, reduced performance, and catastrophic economic consequences.

The costs of bailing out and patching up this infrastructure time and time again under today’s flood risk environment have become unsustainable. While agencies like the Federal Emergency Management Agency (FEMA), National Oceanic and Atmospheric Administration (NOAA), and U.S. Army Corps of Engineers (USACE) maintain and publish extensive flood risk datasets, no federal requirement mandates that state and local governments integrate these data with critical infrastructure data through flood infrastructure vulnerability assessments. This policy gap reflects a disconnect between federal, state, and local efforts to protect critical infrastructure from flooding risks.

The only way to address this disconnect, and the recurring cost problem, is through a new paradigm – one that proactively integrates flood risk management and infrastructure resilience planning through mandatory, comprehensive flood infrastructure vulnerability assessments (FIVAs).

Multiple state programs demonstrate the benefits of such assessments. Most notably, the Resilient Florida Program, established in 2021, represents a significant investment in enhancing the resilience of critical infrastructure to flooding, rainfall, and extreme storms. Section 380.093 of the Florida Statutes requires all municipalities and counties across the state to conduct comprehensive FIVAs in order to qualify for state flood mitigation funding. These assessments identify risks to publicly owned critical and regionally significant assets, including transportation networks; evacuation routes; critical infrastructure; community and emergency facilities; and natural, cultural, and historical resources. To support this requirement, the Florida Legislature allocated funding to ensure municipalities and counties could complete the FIVAs. The findings then quickly informed statewide flood mitigation projects, with over $1.8 billion invested between 2021 and 2024 to reduce flooding risks across 365 implementation projects. 

To support the National Resilience Strategy and Risk Register, the Trump Administration should consider leveraging Florida’s model on a national scale. By requiring all states to conduct FIVAs, the federal government can limit its financial liability while advancing a more efficient and effective model of flood resilience that puts states and localities at the fore.

Rather than relying on federal funds to conduct these assessments, the federal government should implement a policy mandate requiring state governments to establish and fund their own FIVA programs. This mandate would diffuse the federal responsibility for identifying flood risks to the state and local levels, ensuring that assessments are tailored to the unique geographic conditions of each region. By decentralizing flood risk management, states can adopt localized strategies that better reflect their specific vulnerabilities and priorities. 

These state-led assessments would, in turn, provide a critical foundation for informed decision-making in national infrastructure planning, ensuring that federal investments in flood mitigation and resilience are targeted and effective. Specifically, the federal government would use the compiled data from state and local assessments to prioritize funding for projects that address the most pressing infrastructure vulnerabilities. This would enable federal agencies to allocate resources more efficiently, directing investments to areas with the highest risk exposure and the greatest potential for cost-effective mitigation. A standardized federal FIVA framework would ensure consistency in data collection, risk evaluation, and reporting across states. This would facilitate better coordination among federal, state, and local entities while improving integration of flood risk data into national infrastructure planning.

By implementing this strategy, the Trump Administration would reinforce the principle of shared responsibility in disaster preparedness and resilience, encouraging state and local governments to take the lead in safeguarding critical infrastructure. State-led FIVAs would also deliver significant long-term cost savings, given that investments in resilient infrastructure yield a substantial return on investment. (Studies show a 1:4 ratio of return on investment, meaning every dollar spent on resilience and preparedness saves $4 in future losses.) Finally, requiring FIVAs would build a more resilient nation, ensuring that communities are better equipped to withstand the increasing challenges posed by flooding and that federal investments are safeguarded. 

Plan of Action

The Trump Administration can support the National Resilience Strategy and National Risk Register by taking the following actions to promote state-led development and adoption of FIVAs. 

Recommendation 1. Create a Standardized FIVA Framework.

President Trump should direct his Administration, through an interagency FIVA Task Force, to create a standardized FIVA framework, drawing on successful models like the Resilient Florida Program. This framework will establish consistent methodologies for data collection, risk evaluation, and reporting, ensuring that assessments are both thorough and adaptable to state and local needs. An essential function of the task force should be to compile and review all existing federal flood risk datasets, which are maintained by agencies such as FEMA, NOAA, and USACE. By centralizing this information and providing streamlined access to high-quality, accurate flood risk data, the task force will reduce the burden on state and local agencies.

Recommendation 2. Create Model Legislation.

The FIVA Task Force, working with leading organizations such as the American Flood Coalition (AFC) and the Association of State Floodplain Managers (ASFPM), should create model legislation that state governments can adapt and enact to require local development and adoption of FIVAs. This legislation should outline the requirements for conducting assessments, including which infrastructure types need to be evaluated, what flood risk scenarios need to be considered, and how the findings must be used to guide infrastructure planning and investments.

Recommendation 3. Spur Uptake and Establish Accountability and Reporting Mechanisms.

Once the FIVA framework and model legislation are created, the Administration should require states to enact FIVA laws to be eligible to receive federal infrastructure funding. This requirement should be phased in on feasible timelines, with clear criteria for what provisions FIVA laws must include. Regular reporting requirements should also be established, whereby states must provide updates on their progress in conducting FIVAs and integrating findings into infrastructure planning. Updates should be captured in a public tracking system to ensure transparency and hold states accountable for completing assessments on time. Federal agencies should evaluate federal infrastructure funding requests based on the findings from state-led FIVAs to ensure that investments are targeted at areas with the highest flood risks and the greatest potential for resilience improvements.

Recommendation 4. Use State and Local Data to Shape Federal Policy.

Ensure that the results of state-led FIVAs are incorporated into future updates of the National Resilience Strategy and Risk Register, as well as other relevant federal policy and programs. This integration will provide a comprehensive view of national infrastructure risks and help inform federal decision-making and resource allocation for disaster preparedness and response.

Conclusion

The Trump Administration’s EO on Achieving Efficiency Through State and Local Preparedness opens the door to comprehensively rethink how we as a nation approach planning, disaster risk management, and resilience. Scaling successful approaches from states like Florida can deliver on the goals of the EO in at least five ways:

  1. Empowering state and local governments to take the lead in managing flood risks, ensuring that assessments and strategies are more reflective of local needs and conditions.
  2. Distributing the responsibility for identifying and mitigating flood risks across all levels of government, reducing the burden on the federal government and allowing more tailored, efficient responses.
  3. Reducing disaster response costs by prioritizing proactive, risk-informed planning over reactive recovery efforts, leading to long-term savings.
  4. Strengthening infrastructure resilience by making vulnerability assessments a condition for federal funding, driving investments that protect communities from flooding risks.
  5. Fostering greater accountability at the state and local levels, as governments will be directly responsible for ensuring that infrastructure is resilient to flooding, leading to more targeted and effective investments.


Frequently Asked Questions
Have any states already implemented requirements for conducting flood infrastructure vulnerability assessments (FIVAs) or similar types of evaluations to improve critical infrastructure resilience?

Several states have enacted policies advancing FIVAs or resilience programming, demonstrating this type of program could readily achieve bipartisan support.


The Resilient Florida Program, established in 2021, marks the state’s largest investment in preparing communities for the impacts of intensified storms and flooding. This program includes mandates and grants to analyze, prepare for, and implement resilience projects across the state. A key element of the program is the required vulnerability assessment, which focuses on identifying risks to critical infrastructure. Counties and municipalities must analyze the vulnerability of regionally significant assets and submit geospatial mapping data to the Florida Department of Environmental Protection (FDEP). This data is used to create a comprehensive, statewide flooding dataset, updated every five years, which informs an annual Resilience Plan that prioritizes and funds critical mitigation projects. 


In Texas, the State Flood Plan, enacted in 2019, initiated the first-ever regional and state flood planning process. This legislation established the Flood Infrastructure Fund to support financing for flood-related projects. Regional flood planning groups are tasked with submitting their regional flood plans to the Texas Water Development Board (TWDB), starting in January 2023 and every five years thereafter. A central component of these plans is identifying vulnerabilities in communities and critical facilities within each region. Texas has also developed a flood planning data hub with minimum geodatabase standards to ensure consistent data collection across regions, ultimately synthesizing this information into a unified statewide flood plan.


The Massachusetts Municipal Vulnerability Preparedness (MVP) Program, established in 2016, requires all state agencies and authorities, and all cities and towns, to assess vulnerabilities and adopt strategies to increase the adaptive capacity and resilience of critical infrastructure assets. The Massachusetts model reflects an incentive-based approach that encourages municipalities to conduct vulnerability assessments and create actionable resilience plans with technical assistance and funding. The state awards communities funding to complete vulnerability assessments and develop action-oriented resilience plans. Communities that complete the MVP program become certified as MVP communities and are eligible for grant funding and other opportunities.

How are infrastructure vulnerability assessments different from existing federally mandated hazard mitigation planning programs?

Infrastructure vulnerability assessments differ from federally mandated hazard mitigation planning programs in both scope and focus. While both aim to enhance resilience, they target different aspects of risk management.


Infrastructure vulnerability assessments are highly specific, concentrating on the resilience of individual critical infrastructure systems—such as water supply, transportation networks, energy grids, and emergency response systems. These assessments analyze the specific vulnerabilities of these assets to both acute shocks, such as extreme weather events or floods, and chronic stressors, such as aging infrastructure. The process typically involves detailed technical analyses, including simulations, modeling, and system-level evaluations, to identify weaknesses in each asset. The results inform tailored, asset-specific interventions, like reinforcing flood barriers, upgrading infrastructure, or improving emergency response capacity. These assessments are focused on ensuring that essential systems are resilient to specific risks, and they typically involve detailed contingency planning for each identified vulnerability.


In contrast, federally mandated hazard mitigation planning, such as FEMA’s programs under the Disaster Mitigation Act of 2000, focuses on community-wide risk reduction. These programs aim to reduce overall exposure to natural hazards, like floods, wildfires, or earthquakes, by developing broad strategies that apply to entire communities or regions. Hazard mitigation planning involves public input, policy changes, and community-wide infrastructure improvements, which may include measures like zoning regulations, public awareness campaigns, or building codes that aim to reduce vulnerability on a large scale. While these plans may identify specific hazards, the solutions they propose are generally community-focused and may not address the nuanced vulnerabilities of individual infrastructure systems. Rather than offering a deep dive into the resilience of specific assets, hazard mitigation planning focuses on reducing overall risk and improving long-term resilience for the community as a whole.

What assessment methodology could be used to inform a nationwide mandate for state-led flood infrastructure vulnerability assessments?

  • A proven methodology can be drawn from the Resilient Florida Program’s Standard Vulnerability Assessment Scope of Work Guidance. This methodology integrates geospatial mapping data with modeling outputs for a range of flood risks, including storm surge, tidal flooding, rainfall, and compound flooding. Communities overlay this flood risk data with their local infrastructure information – such as roads, utilities, and bridges – to identify vulnerable assets and prioritize resilience strategies (a simplified sketch of this overlay step appears after this answer).


  • For the nationwide mandate, this framework can be adapted, with technical assistance from federal agencies like FEMA, NOAA, and USACE to ensure consistency across regions and the integration of up-to-date flood risk data. FEMA could assist localities in adopting this methodology, ensuring that their vulnerability assessments are comprehensive and aligned with the latest flood risk data. This approach would help standardize assessments across the country while allowing for region-specific considerations, ensuring the mandate’s effectiveness in building resilience across the local, state, and national levels.
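To make the overlay step concrete, below is a minimal, illustrative Python sketch (not part of the Resilient Florida guidance) of how a locality might intersect modeled flood zones with a critical-asset inventory using the open-source geopandas library. The file names and the flood_depth_ft attribute are hypothetical placeholders; real assessments cover multiple hazard scenarios and far richer data.

    import geopandas as gpd

    # Hypothetical inputs: a polygon layer of modeled flood zones and a layer of
    # publicly owned critical assets (roads, utilities, bridges, emergency facilities).
    flood_zones = gpd.read_file("flood_zones.geojson")   # assumed file name
    assets = gpd.read_file("critical_assets.geojson")    # assumed file name

    # Put both layers in the same coordinate reference system before overlaying.
    assets = assets.to_crs(flood_zones.crs)

    # Spatial join: keep every asset that intersects a modeled flood zone,
    # carrying over the zone's attributes (e.g., modeled depth or return period).
    vulnerable = gpd.sjoin(assets, flood_zones, how="inner", predicate="intersects")

    # Rank exposed assets to inform the prioritized list of resilience projects.
    print(vulnerable.sort_values("flood_depth_ft", ascending=False).head())

In practice, a standardized federal framework would specify the input schemas and hazard scenarios so that results like these roll up consistently across jurisdictions.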

How will requiring FIVAs as a precondition for federal infrastructure funding diffuse the responsibility of flood risk management to state and local governments?

  • This requirement will diffuse the responsibility of flood risk management to state and local governments by requiring them to take the lead in conducting FIVAs. Under this approach, the federal government will shift from being the primary entity responsible for identifying flood risks to a more supportive role, providing resources and guidance to state and local governments.


State governments will be required to establish and fund their own FIVAs, ensuring that each region’s unique geographic, climatic, and socioeconomic factors are considered when identifying and addressing flood risks. By decentralizing the process, states can tailor their strategies to local needs, which improves the efficiency of flood risk management efforts.


Local governments will also play a key role by implementing these assessments at the community level, ensuring that critical infrastructure is evaluated for its vulnerability to flooding. This will allow for more targeted interventions and investments that reflect local priorities and risks.


The federal government will use the data from these state and local assessments to prioritize funding and allocate resources more efficiently, ensuring that infrastructure resilience projects address the highest flood risks with the greatest potential for long-term savings. 

Increasing Responsible Data Sharing Capacity throughout Government

Deriving insights from data is essential for effective governance. However, collecting and sharing data—if not managed properly—can pose privacy risks for individuals. Current scientific understanding shows that so-called “anonymization” methods that have been widely used in the past are inadequate for protecting privacy in the era of big data and artificial intelligence. The evolving field of Privacy-Enhancing Technologies (PETs), including differential privacy and secure multiparty computation, offers a way forward for sharing data safely and responsibly.

The administration should prioritize the use of PETs by integrating them into data-sharing processes and strengthening the executive branch’s capacity to deploy PET solutions.

Challenge and Opportunity

A key function of modern government is the collection and dissemination of data. This role of government is enshrined in Article 1, Section 2 of the U.S. Constitution in the form of the decennial census—and has only increased with recent initiatives to modernize the federal statistical system and expand evidence-based policymaking. The number of datasets itself has also grown; there are now over 300,000 datasets on data.gov, covering everything from border crossings to healthcare. The release of these datasets not only accomplishes important transparency goals, but also represents an important step toward making American society fairer, as data are a key ingredient in identifying policies that benefit the public. 

Unfortunately, the collection and dissemination of data comes with significant privacy risks. Even with access to aggregated information, motivated attackers can extract information specific to individual data subjects and cause concrete harm. A famous illustration of this risk occurred in 1997, when Latanya Sweeney was able to identify the medical record of then-Governor of Massachusetts William Weld from a public, anonymized dataset. Since then, the power of data re-identification techniques—and the incentives for third parties to learn sensitive information about individuals—have only increased, compounding this risk. For a democratic, civil-rights-respecting nation, it is irresponsible for government agencies to continue to collect and disseminate datasets without careful consideration of the privacy implications of data sharing.

While there may appear to be an irreconcilable tension between facilitating data-driven insight and protecting the privacy of individuals’ data, an emerging scientific consensus shows that Privacy-Enhancing Technologies (PETs) offer a path forward. PETs are a collection of techniques that enable data to be used while tightly controlling the risk incurred by individual data subjects. One particular PET, differential privacy (DP), was recently used by the U.S. Census Bureau within its disclosure avoidance system for the 2020 decennial census in order to meet its dual mandates of data release and confidentiality. Other PETs, including variations of secure multiparty computation, have been used experimentally by other agencies, for example to link long-term income data to college records and to understand mental health outcomes for individuals who have earned doctorates. The National Institute of Standards and Technology (NIST) has produced frameworks and reports on data and information privacy, including PETs topics such as DP (see Q&A section). However, these reports still lack a comprehensive, actionable framework for how organizations should evaluate, select, and deploy PETs. 

As artificial intelligence becomes more prevalent inside and outside government and relies on increasingly large datasets, the need for responsible data sharing is growing more urgent. The federal government is uniquely positioned to foster responsible innovation and set a strong example by promoting the use of PETs. The use of DP in the 2020 decennial census was an extraordinary example of the government’s capacity to lead global innovation in responsible data sharing practices. While the promise of continuing this trend is immense, expanding the use of PETs within government poses twin challenges: (1) sharing data within government raises unique technical and legal issues that are only starting to be fully understood, and (2) expertise on using PETs within government is limited. In this proposal, we outline a concrete plan to overcome these challenges and unlock the potential of PETs within government.

Plan of Action

Using PETs when sharing data should be a key priority for the executive branch. The new administration should encourage agencies to consider the use of PETs when sharing data and build a United States DOGE Service (USDS) “Responsible Data Sharing Corps” of professionals who can provide in-house guidance around responsible data sharing.

We believe that enabling data sharing with PETs requires (1) gradual, iterative refinement of norms and (2) increased capacity in government. With these in mind, we propose the following recommendations for the executive branch.

Strategy Component 1. Build consideration of PETs into the process of data sharing

Recommendation 1. NIST should produce a decision-making framework for organizations to rely on when evaluating the use of PETs.

NIST should provide a step-by-step decision-making framework for determining the appropriate use of PETs within organizations, including whether PETs should be used, and if so, which PET and how it should be deployed. Specifically, this guidance should be at the same level of granularity as the NIST Risk Management Framework for Cybersecurity. NIST should consult with a range of stakeholders from the broad data sharing ecosystem to create this framework. This includes data curators (i.e., organizations that collect and share data, within and outside the government); data users (i.e., organizations that consume, use, and rely on shared data, including government agencies, special interest groups, and researchers); data subjects; experts across fields such as information studies, computer science, and statistics; and decision makers within public and private organizations who have prior experience using PETs for data sharing. The framework may build on NIST’s existing related publications and other guides for policymakers considering the use of specific PETs, and should provide actionable guidance on the factors to consider when using PETs. The output of this process should be not only a decision, but also a report documenting the execution of the decision-making framework (which will be instrumental for Recommendation 3).

Recommendation 2. The Office of Management and Budget (OMB) should mandate that government agencies interested in data sharing use NIST’s decision-making framework developed in Recommendation 1 to determine the appropriateness of PETs for protecting their data pipelines.

The risks to data subjects associated with data releases can be significantly mitigated with the use of PETs, such as differential privacy. Along with considering other mechanisms of disclosure control (e.g., tiered access, limiting data availability), agencies should investigate the feasibility and tradeoffs around using PETs to protect data subjects while sharing data for policymaking and public use. To that end, OMB should require government agencies to use the decision-making framework produced by NIST (in Recommendation 1) for each instance of data sharing. We emphasize that this decision-making process may lead to a decision not to use PETs, as appropriate. Agencies should compile the produced reports such that they can be accessed by OMB as part of Recommendation 3.

Recommendation 3. OMB should produce a PET Use Case Inventory and annual reports that provide insights on the use of PETs in government data-sharing contexts.

To promote transparency and shared learning, agencies should share the reports produced as part of their PET deployments and associated decision-making processes with OMB. Using these reports, OMB should (1) publish a federal government PET Use Case Inventory (similar to the recently established Federal AI Use Case Inventory) and (2) synthesize these findings into an annual report. These findings should provide high-level insights into the decisions that are being made across agencies regarding responsible data sharing, and highlight the barriers to adoption of PETs within various government data pipelines. These reports can then be used to update the decision-making frameworks we propose that NIST should produce (Recommendation 1) and inspire further technical innovation in academia and the private sector.

Strategy Component 2. Build capacity around responsible data sharing expertise 

Making well-informed decisions about responsible data sharing—including the use of PETs—will require specialized expertise. While some government agencies have teams well-trained in these topics (e.g., the Census Bureau and its team of DP experts), expertise across government is still lacking. Hence, we propose a capacity-building initiative that increases the number of experts in responsible data sharing across government.

Recommendation 4. Announce the creation of a “Responsible Data Sharing Corps.”

We propose that the USDS create a “Responsible Data Sharing Corps” (RDSC). This team will be composed of experts in responsible data sharing practices and PETs. RDSC experts can be deployed into other government agencies as needed to support decision-making about data sharing. They may also be available for as-needed consultations with agencies to answer questions or provide guidance around PETs or other relevant areas of expertise.

Recommendation 5. Build opportunities for continuing education and training for RDSC members.

Given the evolving nature of responsible data practices, including the rapid development of PETs and other privacy and security best practices, members of the RDSC should have 20% effort reserved for continuing education and training. This may involve taking online courses or attending workshops and conferences that describe state-of-the-art PETs and other relevant technologies and methodologies.

Recommendation 6. Launch a fellowship program to maintain the RDSC’s cutting-edge expertise in deploying PETs.

Finally, to ensure that the RDSC stays at the cutting edge of relevant technologies, we propose an RDSC fellowship program similar to or part of the Presidential Innovation Fellows. Fellows may be selected from academia or industry, but should have expertise in PETs and propose a novel use of PETs in a government data-sharing context. During their one-year terms, fellows will perform their proposed work and bring new knowledge to the RDSC.

Conclusion

Data sharing has become a key priority for the government in recent years, but privacy concerns make it critical to modernize technology for responsible data use to leverage data for policymaking and transparency. PETs such as differential privacy, secure multiparty computation, and others offer a promising way forward. However, deploying PETs at a broad scale requires changing norms and increasing capacity in government. The executive branch should lead these efforts by encouraging agencies to consider PETs when making data-sharing decisions and building a “Responsible Data Sharing Corps” who can provide expertise and support for agencies in this effort. By encouraging the deployment of PETs, the government can increase fairness, utility and transparency of data while protecting itself—and its data subjects—from privacy harms.


Frequently Asked Questions
What are the concrete risks associated with data sharing?

Data sharing requires a careful balance of multiple factors, with privacy and utility being particularly important.



  • Data products released without appropriate and modern privacy protection measures could facilitate abuse, as attackers can weaponize information contained in these data products against individuals, for example to blackmail, stalk, or publicly harass them.

  • On the other hand, the lack of accessible data can also cause harm due to reduced utility: various actors, such as state and local government entities, may have limited access to accurate or granular data, resulting in the inefficient allocation of resources to small or marginalized communities.

What are some examples of PETs to consider?

Privacy-Enhancing Technologies is a broad umbrella category that includes many different technical tools. Leading examples of these tools include differential privacy, secure multiparty computation, trusted execution environments, and federated learning. Each one of these technologies is designed to address different privacy threats. For additional information, we suggest the UN Guide on Privacy-Enhancing Technologies for Official Statistics and the ICO’s resources on Privacy-Enhancing Technologies.

What NIST publications are relevant to PETs?

NIST has multiple publications related to data privacy, such as the Risk Management Framework for Cybersecurity and the Privacy Framework. The report De-Identifying Government Datasets: Techniques and Governance focuses on responsible data sharing by government organizations, while the Guidelines for Evaluating Differential Privacy Guarantees provides a framework to assess the privacy protection level provided by differential privacy for any organization.

What is differential privacy (DP)?

Differential privacy is a framework for controlling the amount of information leaked about individuals during a statistical analysis. Typically, random noise is injected into the results of the analysis to hide individual people’s specific information while maintaining overall statistical patterns in the data. For additional information, we suggest Differential Privacy: A Primer for a Non-technical Audience.
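As a rough illustration of the idea (and not the Census Bureau’s actual disclosure avoidance system), the sketch below applies the classic Laplace mechanism to a simple counting query in Python; the dataset, predicate, and epsilon value are invented for the example.

    import numpy as np

    def dp_count(records, predicate, epsilon):
        """Return a differentially private count using the Laplace mechanism.

        A counting query has sensitivity 1 (one person's data changes the count
        by at most 1), so Laplace noise with scale 1/epsilon yields
        epsilon-differential privacy for this query.
        """
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Example: privately estimate how many survey respondents are over 65.
    ages = [23, 67, 45, 71, 34, 80, 52]
    print(dp_count(ages, lambda a: a > 65, epsilon=0.5))

Smaller values of epsilon add more noise and give stronger privacy; choosing epsilon is as much a policy decision as a technical one.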

What is secure multiparty computation (MPC)?

Secure multiparty computation is a technique that allows several actors to jointly aggregate information while protecting each actor’s data from disclosure. In other words, it allows parties to jointly perform computations on their data while ensuring that each party learns only the result of the computation. For additional information, we suggest Secure Multiparty Computation FAQ for Non-Experts.
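The toy Python sketch below shows one common building block, additive secret sharing, in which each party’s value is split into random-looking shares that reveal the total only when combined. It is a simplified illustration rather than the protocol used in any particular deployment, and the payroll figures are invented.

    import random

    PRIME = 2**61 - 1  # all share arithmetic is done modulo this prime

    def share(secret, n_parties=3):
        """Split a secret into n additive shares that sum to it modulo PRIME."""
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares):
        return sum(shares) % PRIME

    # Three employers secret-share their payroll totals among three compute
    # parties; no single party ever sees a raw payroll figure.
    payrolls = [120_000, 250_000, 90_000]
    all_shares = [share(p) for p in payrolls]

    # Each compute party i sums the i-th share it received from every employer...
    partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]

    # ...and only the combination of all partial sums reveals the aggregate.
    print(reconstruct(partial_sums))  # 460000

Real deployments layer additional protocols on top of this primitive, but the core idea of computing on shares rather than raw data is the same.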

How have privacy-enhancing technologies been used in government before, domestically and internationally?

There are multiple examples of PET deployments at both the federal and local levels, both domestically and internationally. We list several examples below, and refer interested readers to the in-depth reports by the Advisory Committee on Data for Evidence Building (report 1 and report 2):



  • The Census Bureau used differential privacy in their disclosure avoidance system to release results from the 2020 decennial census data. Using differential privacy allowed the bureau to provide formal disclosure avoidance guarantees as well as precise information about the impact of this system on the accuracy of the data.

  • The Boston Women’s Workforce Council (BWWC) measures wage disparities among employers in the greater Boston area using secure multiparty computation (MPC).

  • The Israeli Ministry of Health publicly released its National Life Birth Registry using differential privacy.

  • Privacy-preserving record linkage, a variant of secure multiparty computation, has been used experimentally by both the U.S. Department of Education and the National Center for Health Statistics. Additionally, it has been used at the county level in Allegheny County, PA.


Additional examples can also be found in the UN’s case-study repository of PET deployments.

What type of expertise is required to deploy PETs solutions?

Data-sharing projects are not new to the government, and pockets of relevant expertise—particularly in statistics, software engineering, subject matter areas, and law—already exist. Deploying PET solutions requires technical computer science expertise for building and integrating PETs into larger systems, as well as sociotechnical expertise in communicating the use of PETs to relevant parties and facilitating decision-making around critical choices.

Reforming the Federal Advisory Committee Landscape for Improved Evidence-based Decision Making and Increasing Public Trust

Federal Advisory Committees (FACs) are the single point of entry for the American public to provide consensus-based advice and recommendations to the federal government. These Advisory Committees are composed of experts from various fields who serve as Special Government Employees (SGEs), attending committee meetings, writing reports, and voting on potential government actions.

Advisory Committees are needed for the federal decision-making process because they provide additional expertise and in-depth knowledge to the Agency on complex topics, aid the government in gathering information from the public, and give the public the opportunity to participate in meetings about the Agency’s activities. As currently organized, however, FACs are not equipped to provide the best evidence-based advice. This is because many FACs do not meet transparency requirements set forth by GAO, such as making pertinent decisions during public meetings, reporting accurate cost data, and making official meeting documents publicly available online. FACs have also experienced difficulty with recruiting and retaining top talent to assist with decision making. For these reasons, it is critical that FACs are reformed and equipped with the necessary tools to continue providing the government with the best evidence-based advice. Reform should address issues such as 1) decreasing the burden of hiring special government employees; 2) simplifying the financial disclosure process; 3) increasing understanding of reporting requirements and conflict of interest processes; 4) expanding training for Advisory Committee members; 5) broadening the roles of Committee chairs and designated federal officials; 6) increasing public awareness of Advisory Committee roles; 7) engaging the public outside of official meetings; 8) standardizing representation from Committee representatives; 9) ensuring that Advisory Committees are meeting per their charters; and 10) bolstering Agency budgets for critical Advisory Committee issues. 

Challenge and Opportunity

Protecting the health and safety of the American public and ensuring that the public has the opportunity to participate in the federal decision-making process is crucial. We must evaluate the operations and activities of federal agencies that require the government to solicit evidence-based advice and feedback from various experts through the use of federal Advisory Committees (FACs). These Committees facilitate transparent and collaborative deliberation between the federal government, the advisory body, and the American public in a way that no other mechanism can. Advisory Committee recommendations are integral to strengthening public trust and reinforcing the credibility of federal agencies. Nonetheless, public trust in government has been waning, and efforts should be made to restore it. Public trust is a pillar of democracy and fosters confidence between parties, particularly when one party is external to the federal government. Therefore, Advisory Committees, when used appropriately, can help increase public trust and ensure compliance with the law. 

There have also been many success stories demonstrating the benefits of Advisory Committees. When Advisory Committees are appropriately staffed based on their charge, they can decrease the workload of federal employees, assist with developing policies for some of our most challenging issues, involve the public in the decision-making process, and more. However, the state of Advisory Committees and the need for reform have come under question, even more so as we transition to a new administration. Advisory Committees have contributed to improving the quality of life for some Americans through scientific advice, as well as through the monitoring of cybersecurity. For example, an FDA Advisory Committee reviewed data and saw promising results for the treatment of sickle cell disease (SCD), a debilitating disease that has had limited treatment options for years. The Committee voted in favor of the gene therapy drugs Casgevy and Lyfgenia, which were the first to be approved by the FDA for SCD. 

Under the first Trump administration, Executive Order (EO) 13875 resulted in a significant decrease in the number of federal advisory meetings, limiting agencies’ ability to convene external advisors. Federal science advisory committees met less often during that administration than under any prior administration and less often than their charters required; long-standing Advisory Committees were disbanded; and scientists receiving agency grants were barred from serving on Advisory Committees. Federal Advisory Committee membership also decreased by 14%, underscoring the difficulty of recruiting and retaining top talent. The disbandment of Advisory Committees, exclusion of key external scientific experts, and burdensome procedures can trigger severe consequences that affect the health and safety of Americans. 

Going into a second Trump administration, it is imperative that Advisory Committees have the opportunity to assist federal agencies with the evidence-based advice needed to make critical decisions that affect the American public. The reforms suggested below can improve the overall operations of Advisory Committees while still providing the government with necessary evidence-based advice. With successful implementation of the following recommendations, the federal government will be able to reduce the administrative burden on staff across the recruitment, onboarding, and conflict of interest processes. 

The U.S. Open Government Initiative encourages public and community engagement in governmental affairs. However, individual Agencies can and should do more to engage the public. This policy memo identifies several areas of potential reform for Advisory Committees and aims to provide recommendations for improving the overall process without compromising Agency or Advisory Committee membership integrity. 

Plan of Action

The proposed plan of action identifies several policy recommendations to reform the federal Advisory Committee process, improving both operations and efficiency. Successful implementation of these policies will 1) improve the Advisory Committee member experience, 2) increase transparency in federal government decision-making, and 3) bolster trust between the federal government, its Advisory Committees, and the public. 

Streamline Joining Advisory Committees

Recommendation 1. Decrease the burden of hiring special government employees in order to (1) reduce the administrative burden on the Agency and (2) encourage Advisory Committee members, also known as special government employees (SGEs), to continue providing the best evidence-based advice to the federal government by reducing onerous procedures

The Ethics in Government Act of 1978 and Executive Order 12674 establish OGE-450 reporting as a required financial disclosure for executive branch employees, including special government employees. The Act provides the Office of Government Ethics (OGE) with the authority to implement and regulate a financial disclosure system for executive branch and special government employees whose duties carry a “heightened risk of potential or actual conflicts of interest.” Nonetheless, the reporting process becomes onerous when Advisory Committee members must complete the OGE-450 before every meeting even if their information remains unchanged. This presents a challenge for Advisory Committee members who wish to continue serving but are burdened by time constraints. The process also burdens the federal staff who manage the financial disclosure system. 

Policy Pathway 1. Increase funding for enhanced federal staffing capacity to undertake excessive administrative duties for financial reporting.

Policy Pathway 2. All federal agencies that deploy Advisory Committees can conduct a review of the current OGE-450 process, budget support for this process, and work to develop an electronic process that will eliminate the use of forms and allow participants to select dropdown options indicating whether their financial interests have changed.  

Recommendation 2. Create and use public platforms such as OpenPayments by CMS to (1) aid in simplifying the financial disclosure reporting process and (2) increase transparency for disclosure procedures

Federal agencies should create a financial disclosure platform that streamlines the process and allows Advisory Committee members to submit their disclosures and easily make updates. This system should also be designed to monitor and compare financial conflicts. In addition, agencies that utilize the expertise of Advisory Committees for drugs and devices should identify additional ways in which they can promote financial transparency. These agencies can use Open Payments, a system operated by the Centers for Medicare & Medicaid Services (CMS), to “promote a more financially transparent and accountable healthcare system.” The Open Payments system makes payments from drug and medical device companies to individuals, healthcare providers, and teaching hospitals accessible to the public. If for any reason financial disclosure forms are called into question, the Open Payments platform can act as a check and balance in identifying any potential financial interests of Advisory Committee members. A further step to simplify the financial disclosure process would be to utilize conflict of interest software such as Ethico, a comprehensive tool that allows for customizable disclosure forms, disclosure analytics for comparisons, and process automation.   

Policy Pathway. The Office of Government Ethics should require all federal agencies that operate Advisory Committees to develop their own financial disclosure systems and to include a due-diligence second step in the reporting process, such as reviewing the CMS Open Payments system for potential financial conflicts or deploying conflict of interest monitoring software to streamline the process.

Streamline Participation in an Advisory Committee

Recommendation 3. Increase understanding of annual reporting requirements for conflict of interest (COI)

Agencies should develop guidance that explicitly states the roles of Ethics Officers, also known as Designated Agency Ethics Officials (DAEOs), within the federal government. Helping Advisory Committee members and the public understand these roles and responsibilities will reduce the spread of misinformation regarding the purpose of Advisory Committees. In addition, the Office of Government Ethics should encourage agencies to develop guidance that indicates the criteria for inclusion or exclusion of participation in Committee meetings. Currently, there is no public guidance that states what types of conflicts of interest are granted waivers for participation. Full disclosure of selection and approval criteria will improve transparency with the public and clearly delineate how Agencies determine who is eligible to participate. 

Policy Pathway. Develop conflict of interest (COI) and financial disclosure guidance specifically for special government employees (SGEs) that states under what circumstances SGEs are allowed to receive waivers to participate in Advisory Committee meetings.

Recommendation 4. Expand training for Advisory Committee members to include (1) ethics and (2) criteria for making good recommendations to policymakers

Training should be expanded for all federal Advisory Committee members to include ethics training that details the role of Designated Agency Ethics Officials, the rules and regulations for financial interest disclosures, and criteria for making evidence-based recommendations to policymakers. Training incoming Advisory Committee members ensures that all members share the same knowledge base and can contribute effectively to the evidence-based recommendations process.

Policy Pathway. Agencies should collaborate with the OGE and Agency Heads to develop comprehensive training programs for all incoming Advisory Committee members to ensure an understanding of ethics as contributing members, best practices for providing evidence-based recommendations, and other pertinent areas that are deemed essential to the Advisory Committee process.

Leverage Advisory Committee Membership

Recommendation 5. Uplift the roles of Committee Chairs and Designated Federal Officers

Expanding the roles of Committee Chairs and Designated Federal Officers (DFOs) may help federal Agencies recruit and retain top talent and maximize the Committee's ability to stay abreast of critical public concerns. Because the General Services Administration must be consulted on the formation, renewal, or alteration of Committees, it can be instrumental in this change.

Policy Pathway. The General Services Administration (GSA) should encourage federal Agencies to collaborate with Committee Chairs and DFOs to recruit permanent and ad hoc Committee members who may have broad network reach and community ties that will bolster trust amongst Committees and the public. 

Recommendation 6. Clarify intended roles for Advisory Committee members and the public

There are misconceptions among the public and Advisory Committee members about Advisory Committee roles and responsibilities. There is also ambiguity regarding the types of Advisory Committee roles, such as serving as an ad hoc member, consulting, providing feedback on policies, or making recommendations. 

Policy Pathway. GSA should encourage federal Agencies to develop guidance that delineates the differences between permanent and temporary Advisory Committee members, as well as their roles and responsibilities depending on whether they are providing feedback on policies or recommendations for policy decision-making.

Recommendation 7. Utilize and engage expertise and the public outside of public meetings

To continue receiving the best evidence-based advice, federal Agencies should develop alternative ways to receive advice outside of public Committee meetings. Additional opportunities for engagement and feedback from Committee experts or the public will allow Agencies to expand their knowledge base and gather information from the communities their decisions will affect.

Policy Pathway. The General Services Administration should encourage federal Agencies to create opportunities outside of scheduled Advisory Committee meetings to engage Committee members and the public on areas of concern and interest. 

Recommendation 8. Standardize the types of representation on Committees (e.g., industry), as well as representation limits

The Federal Advisory Committee Act (FACA) does not specify the types of expertise that should be represented on all federal Advisory Committees, but allows for many types of expertise. Incorporating various sets of expertise that are representative of the American public will ensure the government is receiving the most accurate, innovative, and evidence-based recommendations for issues and products that affect Americans. 

Policy Pathway. Congress should include standardized language in the FACA that states all federal Advisory Committees should include various sets of expertise depending on their charge. This change should then be enforced by the GSA.

Support a Vibrant and Functioning Advisory Committee System

Recommendation 9. Decrease the burden of creating an Advisory Committee and ensure Advisory Committees meet per their charters

The process to establish an Advisory Committee should be simplified to curtail the onerous steps that delay the government's receipt of evidence-based advice.

Advisory Committee charters state the purpose of Advisory Committees, their duties, and their aspirational goals. These charters are developed by agency staff or DFOs in consultation with their agency Committee Management Office, and they forge the path for all FACs.

Policy Pathway. Designated Federal Officers (DFOs) within federal agencies should work with their Agency head to review and modify the steps for establishing FACs, and the requirement that FACs obtain consultation and/or approval from GSA for the formation, renewal, or alteration of Advisory Committees should be eliminated.

Recommendation 10. Bolster agency budgets to support FACs on critical issues where regular engagement and trust building with the public is essential for good policy

Federal Advisory Committees are an essential mechanism for receiving evidence-based recommendations that help guide decisions at all stages of the policy process. These Advisory Committees are oftentimes the single entry point external experts and the public have to comment on and participate in the decision-making process. However, FACs take considerable resources to operate, depending on the frequency of meetings, the number of Advisory Committee members, and the supporting agency staff. Without proper appropriations, agencies have a diminished ability to recruit and retain top talent for Advisory Committees. The Government Accountability Office (GAO) reported that in 2019, approximately $373 million was spent to operate a total of 960 federal Advisory Committees. Some Agencies have experienced a decrease in the number of Advisory Committee convenings. Individual Agency heads should conduct a budget review of average operating and projected costs and develop proposals for increased funding to submit to the Appropriations Committee.  

Policy Pathway. Congress should consider increasing appropriations to support FACs so they can continue to enhance federal decision-making, improve public policy, and boost public credibility and Agency morale. 

Conclusion

Advisory Committees are essential to the federal evidence-based decision-making ecosystem. Enlisting the advice and recommendations of experts, while also including input from the American public, allows the government to continue making decisions that truly benefit its constituents. Nonetheless, there are areas of the FAC system that can be improved to ensure it remains a participatory, evidence-based process. Additional funding is needed to compensate the appropriate Agency staff for Committee support, provide potential incentives for experts who are volunteering their time, and finance other expenditures.

Frequently Asked Questions
How will Federal Advisory Committees (Advisory Committees) increase government efficiency?

Reforming Advisory Committees will streamline the process for receiving evidence-based advice, allowing the government to obtain it faster and with less burden. Reform would reduce the administrative burden on federal employees by streamlining recruitment, financial disclosure, and reporting processes.

A Federal Center of Excellence to Expand State and Local Government Capacity for AI Procurement and Use

The administration should create a federal center of excellence for state and local artificial intelligence (AI) procurement and use—a hub for expertise and resources on public sector AI procurement and use at the state, local, tribal, and territorial (SLTT) government levels. The center could be created by expanding the General Services Administration’s (GSA) existing Artificial Intelligence Center of Excellence (AI CoE). As new waves of AI technologies enter the market, shifting both practice and policy, such a center of excellence would help bridge the gap between existing federal resources on responsible AI and the specific, grounded challenges that individual agencies face. In the decades ahead, new AI technologies will touch an expanding breadth of government services—including public health, child welfare, and housing—vital to the wellbeing of the American people. An expanded AI CoE would equip public sector agencies with sustainable expertise and set a consistent standard for responsible AI procurement and use. This resource would help ensure that AI truly enhances services, protects the public interest, and builds public trust in AI-integrated state and local government services. 

Challenge and Opportunity 

State, local, tribal, and territorial (SLTT) governments provide services that are critical to the welfare of our society. Among these: providing housing, child support, healthcare, credit lending, and teaching. SLTT governments are increasingly interested in using AI to assist with providing these services. However, they face immense challenges in responsibly procuring and using new AI technologies. While grappling with limited technical expertise and budget constraints, SLTT government agencies considering or deploying AI must navigate data privacy concerns, anticipate and mitigate biased model outputs, ensure model outputs are interpretable to workers, and comply with sector-specific regulatory requirements, among other responsibilities. 

The emergence of foundation models (large AI systems adaptable to many different tasks) for public sector use exacerbates these existing challenges. Technology companies are now rapidly developing new generative AI services tailored towards public sector organizations. For example, earlier this year, Microsoft announced that Azure OpenAI Service would be newly added to Azure Government—a set of AI services that target government customers. These types of services are not specifically created for public sector applications and use contexts, but instead are meant to serve as a foundation for developing specific applications. 

For SLTT government agencies, these generative AI services blur the line between procurement and development: Beyond procuring specific AI services, we anticipate that agencies will increasingly be tasked with the responsible use of general AI services to develop specific AI applications. Moreover, recent AI regulations suggest that responsibility and liability for the use and impacts of procured AI technologies will be shared by the public sector agency that deploys them, rather than just resting with the vendor supplying them.

SLTT agencies must be well-equipped with the responsible procurement practices and accountability mechanisms needed to move forward given these shifts across products, practice, and policy. Federal agencies have started to provide guidelines for responsible AI procurement (e.g., Executive Order 13960, OMB-M-21-06, NIST RMF). But research shows that SLTT governments need additional support to apply these resources: whereas existing federal resources provide high-level, general guidance, SLTT government agencies must navigate a host of context-specific challenges (e.g., specific regional laws, agency practices, etc.). SLTT government agency leaders have voiced a need for individualized support in accounting for these context-specific considerations when navigating procurement decisions. 

Today, private companies are promising state and local government agencies that using their AI services can transform the public sector. They describe diverse potential applications, from supporting complex decision-making to automating administrative tasks. However, there is minimal evidence that these new AI technologies can improve the quality and efficiency of public services. There is evidence, on the other hand, that AI in public services can have unintended consequences, and when these technologies go wrong, they often worsen the problems they are aimed at solving, for example by increasing disparities in decision-making when attempting to reduce them. 

Challenges to responsible technology procurement follow a historical trend: Government technology has frequently been critiqued for failures in the past decades. Because public services such as healthcare, social work, and credit lending have such high stakes, failures in these areas can have far-reaching consequences. They also entail significant financial costs, with millions of dollars wasted on technologies that ultimately get abandoned. Even when subpar solutions remain in use, agency staff may be forced to work with them for extended periods despite their poor performance.

The new administration is presented with a critical opportunity to redirect these trends. Training each relevant individual within SLTT government agencies, or hiring new experts within each agency, is not cost- or resource-effective. Without appropriate training and support from the federal government, AI adoption is likely to be concentrated in well-resourced SLTT agencies, leaving others with fewer resources (and potentially lower-income communities) behind. This could lead to disparate AI adoption and practices among SLTT agencies, further exacerbating existing inequalities. The administration urgently needs a plan that supports SLTT agencies in learning how to handle responsible AI procurement and use (that is, developing sustainable knowledge about how to navigate these processes over time) without requiring that each relevant individual in the public sector be trained. This plan also needs to ensure that, over time, the public sector workforce is transformed in its ability to navigate complicated AI procurement processes and relationships, without requiring constant retraining of each new wave of workers. 

In the context of federal and SLTT governments, a federal center of excellence for state and local AI procurement would accomplish these goals through a “hub and spoke” model. This center of excellence would serve as the “hub” that houses a small number of selected experts from academia, non-profit organizations, and government. These experts would then train “spokes”—existing state and local public sector agency workers—in navigating responsible procurement practices. To support public sector agencies in learning from each other’s practices and challenges, this federal center of excellence could additionally create communication channels for information- and resource-sharing across state and local agencies. 

Procured AI technologies in government will serve as the backbone of local public services for decades to come. Upskilling government agencies to make smart decisions about which AI technologies to procure (and which are best avoided) would not only protect the public from harmful AI systems but would also save the government money by decreasing the likelihood of adopting expensive AI technologies that end up getting dropped. 

Plan of Action 

A federal center of excellence for state and local AI procurement would ensure that procured AI technologies are responsibly selected and used to serve as a strong and reliable backbone for public sector services. This federal center of excellence can support both intra-agency and inter-agency capacity-building and learning about AI procurement and use—that is, mechanisms to support expertise development within a given public sector agency and between multiple public sector agencies. This federal center of excellence would not be deliberative (i.e., SLTT governments would receive guidance and support but would not have to seek approval on their practices). Rather, the goal would be to upskill SLTT agencies so they are better equipped to navigate their own AI procurement and use endeavors. 

To upskill SLTT agencies through intra-agency capacity-building, the federal center of excellence would house experts in relevant domain areas (e.g., responsible AI, public interest technology, and related topics). Fellows would work with cohorts of public sector agencies to provide training and consultation services. These fellows, who would come from government, academia, and civil society, would build on their existing expertise and experiences with responsible AI procurement, integrating new considerations proposed by federal standards for responsible AI (e.g., Executive Order 13960, OMB-M-21-06, NIST RMF). The fellows would serve as advisors to help operationalize these guidelines into practical steps and strategies, helping to set a consistent bar for responsible AI procurement and use practices along the way. 

Cohorts of SLTT government agency workers, including existing agency leaders, data officers, and procurement experts, would work with an assigned advisor to receive consultation and training on specific tasks their agency is currently facing. For example, for agencies or programs with low AI maturity or familiarity (e.g., departments beginning to explore the adoption of new AI tools), the center of excellence can help navigate the procurement decision-making process, clarify agency-specific technology needs, draft procurement contracts, select among proposals, and negotiate plans for maintenance. For agencies and programs with high AI maturity or familiarity, the advisor can train program staff on unexpected AI behaviors and mitigation strategies as they arise. These communication pathways would also allow federal agencies to better understand the challenges state and local governments face in AI procurement and maintenance, which can help seed ideas for improving existing resources and creating new resources for AI procurement support.

To scaffold inter-agency capacity-building, the center of excellence can build the foundations for cross-agency knowledge-sharing. In particular, it would include a communication platform and an online hub of procurement resources, both shared amongst agencies. The communication platform would allow state and local government agency leaders who are navigating AI procurement to share challenges, lessons learned, and tacit knowledge with one another. The online hub of resources would be curated by the center of excellence and SLTT government agencies. Through the online hub, agencies could upload and learn about new responsible AI resources and toolkits (such as those created by government and the research community), as well as examples of procurement contracts that agencies themselves have used. 

To implement this vision, the new administration should expand the U.S. General Services Administration’s (GSA) existing Artificial Intelligence Center of Excellence (AI CoE), which provides resources and infrastructural support for AI adoption across the federal government. We propose expanding this existing AI CoE to include the components of our proposed center of excellence for state and local AI procurement and use. This would direct support towards SLTT government agencies—which are currently unaccounted for in the existing AI CoE—specifically via our proposed capacity-building model.

Over the next 12 months, the goals of expanding the AI CoE would be three-fold:

1. Develop the core components of our proposed center of excellence within the AI CoE. 

2. Launch collaborations with a first sample of SLTT government agencies, focusing on building a path for successful collaborations. 

3. Build a path for the proposed center of excellence to grow and gain experience. If the first few collaborations earn strong reviews, design a strategy for scaling. 

Conclusion

Expanding the existing AI CoE to include our proposed federal center of excellence for AI procurement and use can help ensure that SLTT governments are equipped to make informed, responsible decisions about integrating AI technologies into public services. This body would provide necessary guidance and training, helping to bridge the gap between high-level federal resources and the context-specific needs of SLTT agencies. By fostering both intra-agency and inter-agency capacity-building for responsible AI procurement and use, this approach builds sustainable expertise, promotes equitable AI adoption, and protects public interest. This ensures that AI enhances—rather than harms—the efficiency and quality of public services. As new waves of AI technologies continue to enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence will help maintain high standards for responsible public sector AI for decades to come.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What is existing guidance for responsible SLTT procurement and use of AI technologies?

Federal agencies have published numerous resources to support responsible AI procurement, including Executive Order 13960, OMB-M-21-06, and the NIST RMF. Some of these resources provide guidance on responsible AI development in organizations broadly, across the public, private, and non-profit sectors. For example, the NIST RMF provides organizations with guidelines to identify, assess, and manage risks in AI systems to promote the deployment of more trustworthy and fair AI systems. Others focus on public sector AI applications. For instance, the OMB memorandum published by the Office of Management and Budget describes strategies for federal agencies to follow for responsible AI procurement and use.

Why a federal center? Can’t SLTT governments do this on their own?

Research describes how these resources often require additional skills and knowledge, making them challenging for agencies to use effectively on their own. A federal center of excellence for state and local AI procurement could help agencies learn to use these resources. Adapting the guidelines to specific SLTT agency contexts requires careful interpretation, which may in turn require specialized expertise or resources. Creating this federal center of excellence to guide responsible SLTT procurement on the ground can help bridge this critical gap. Fellows in the center of excellence and SLTT procurement agencies can build on this existing pool of guidance to form a strong foundation for their practices.

How has this “hub and spoke” model been used before?

The hub and spoke model has been used across a range of applications to support efficient management of resources and services. For instance, in healthcare, providers have used the hub and spoke model to organize their network of services; specialized, intensive services would be located in “hub” healthcare establishments whereas secondary services would be provided in “spoke” establishments, allowing for more efficient and accessible healthcare services. Similar organizational networks have been followed in transportation, retail, and cybersecurity. Microsoft follows a hub and spoke model to govern responsible AI practices and disseminate relevant resources. Microsoft has a single centralized “hub” within the company that houses responsible AI experts—those with expertise on the implementation of the company’s responsible AI goals. These responsible AI experts then train “spokes”—workers residing in product and sales teams across the company, who learn about best practices and support their team in implementing them.

Who would be the experts selected as fellows by the center of excellence? What kind of training would they receive?

During the training, experts would build a stronger foundation in (1) the on-the-ground challenges and practices that public sector agencies grapple with when developing, procuring, and using AI technologies and (2) the existing AI procurement and use guidelines provided by federal agencies. The content of the training would be drawn from syntheses of prior research on public sector AI procurement and use challenges, as well as existing federal resources that guide responsible AI development. For example, prior research has explored public sector challenges to supporting algorithmic fairness and accountability and responsible AI design and adoption decisions, amongst other topics.


The experts who would serve as fellows for the federal center of excellence would be individuals with expertise and experience studying the impacts of AI technologies and designing interventions to support more responsible AI development, procurement, and use. Given the interdisciplinary nature of the expertise required for the role, individuals should have an applied, socio-technical background on responsible AI practices, ideally (but not necessarily) for the public sector. The individual would be expected to have the skills needed to share emerging responsible AI practices, strategies, and tacit knowledge with public sector employees developing or procuring AI technologies. This covers a broad range of potential backgrounds.

What are some examples of the skills or competencies fellows might bring to the Center?

For example, a professor in academia who studies how to develop public sector AI systems that are more fair and aligned with community needs may be a good fit. A socio-technical researcher in civil society with direct experience studying or developing new tools to support more responsible AI development, who has intuition over which tools and practices may be more or less effective, may also be a good candidate. A data officer in a state government agency who has direct experience procuring and governing AI technologies in their department, with an ability to readily anticipate AI-related challenges other agencies may face, may also be a good fit. The cohort of fellows should include a balanced mix of individuals coming from government, academia, and civil society.

Strengthening Information Integrity with Provenance for AI-Generated Text Using ‘Fuzzy Provenance’ Solutions

Synthetic text generated by artificial intelligence (AI) can pose significant threats to information integrity. When users accept deceptive AI-generated content—such as large-scale false social media posts by malign foreign actors—as factual, national security is put at risk. One way to help mitigate this danger is by giving users a clear understanding of the provenance of the information they encounter online. 

Here, provenance refers to any verifiable indication of whether text was generated by a human or by AI, for example by using a watermark. However, given the limitations of watermarking AI-generated text, this memo also introduces the concept of fuzzy provenance, which involves identifying exact text matches that appear elsewhere on the internet. Because such matches do not always establish provenance with certainty, the descriptor “fuzzy” is used. While this information will not always establish authenticity, it offers users additional clues about the origins of a piece of text.

To ensure platforms can effectively provide this information to users, the National Institute of Standards and Technology (NIST)’s AI Safety Institute should develop guidance on how to display to users both provenance and fuzzy provenance—where available—within no more than one click. To expand the utility of fuzzy provenance, NIST could also issue guidance on how generative AI companies could allow the records of their free AI models to be crawled and indexed by search engines, thereby making potential matches to AI-generated text easier to discover. Tradeoffs surrounding this approach are explored further in the FAQ section.

By creating a reliable, user-friendly framework for surfacing these details, NIST would empower readers to better discern the trustworthiness of the text they encounter, thereby helping to counteract the risks posed by deceptive AI-generated content.

Challenge and Opportunity

Synthetic Text and Information Integrity

In the past two years, generative AI models have become widely accessible, allowing users to produce customized text simply by providing prompts. As a result, there has been a rapid proliferation of “synthetic” text—AI-generated content—across the internet. As NIST’s Generative Artificial Intelligence Profile notes, this means that there is a “[l]owered barrier of entry to generated text that may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large scale dis- and mis-information campaigns.”

Information integrity risks stemming from synthetic text—particularly when generated for non-creative purposes—can pose a serious threat to national security. For example, in July 2024 the Justice Department disrupted Russian generative-AI-enabled disinformation bot farms. These Russian bots produced synthetic text, including in the form of social media posts by fake personas, meant to promote messages aligned with the interests of the Russian government. 

Provenance Methods For Reducing Information Integrity Risks

NIST has an opportunity to provide community guidance to reduce the information integrity risks posed by all types of synthetic content. The main solution currently being considered by NIST for reducing the risks of synthetic content in general is provenance, which refers to whether a piece of content was generated by AI or by a human. As described by NIST, provenance is often ascertained by creating a non-fungible watermark, or a cryptographic signature, for a piece of content. The watermark is permanently associated with the piece of content. Where available, provenance information is helpful because knowing the origin of text can help a user decide whether to rely on the facts it contains. For example, an AI-generated news report may currently be less trustworthy than a human-written news report because the former is more prone to fabrications.

However, there are currently no methods widely accepted as effective for determining the provenance of synthetic text. As NIST’s report, Reducing Risks Posed by Synthetic Content, details, “[t]he effectiveness of synthetic text detection is subject to ongoing debate” (Sec. 3.2.2.4). Even if a piece of text is originally AI-generated with a watermark (e.g., by generating words with a unique statistical pattern), people can easily reproduce the text by paraphrasing it (especially via AI) without transferring the original watermark. Text watermarks are also vulnerable to adversarial attacks, with malicious actors able to mimic the watermark signature and make text appear watermarked when it is not.  
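To make the statistical-pattern idea concrete, the sketch below shows a heavily simplified detector for a "green list" style text watermark, in the spirit of published research schemes. The whitespace tokenization, hash-based partition, key, and threshold are illustrative assumptions rather than any deployed vendor's method, and the sketch also shows why paraphrasing defeats detection: re-worded text drifts back toward the roughly 50% baseline expected of unwatermarked writing.

```python
import hashlib

def green_fraction(tokens, key="demo-key"):
    """Estimate what fraction of tokens fall in the watermark's 'green list'.

    A simplified sketch: during generation, a watermarking model would bias
    each token toward a pseudorandom 'green' half of the vocabulary seeded by
    the previous token. A detector re-derives that partition and checks how
    often the observed tokens land in it. Real schemes operate on model token
    IDs, not whitespace-split words; this version is illustrative only.
    """
    hits = 0
    for prev, curr in zip(tokens, tokens[1:]):
        seed = hashlib.sha256(f"{key}:{prev}".encode()).digest()
        # Pseudorandomly assign roughly half of all possible tokens to "green".
        in_green = hashlib.sha256(seed + curr.encode()).digest()[0] < 128
        hits += in_green
    return hits / max(len(tokens) - 1, 1)

def looks_watermarked(text, threshold=0.62):
    """Unwatermarked text should score near 0.5; watermarked text higher.

    Paraphrasing re-samples the words, pushing the score back toward 0.5,
    which is why paraphrase attacks defeat this kind of watermark.
    """
    tokens = text.lower().split()
    return green_fraction(tokens) > threshold
```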

Plan of Action

To capture the benefits of provenance, while mitigating some of its weaknesses, NIST should issue guidance on how platforms can make available to users both provenance and “fuzzy provenance” of text. Fuzzy provenance is coined here to refer to exact text matches on the internet, which can sometimes reflect provenance but not necessarily (thus “fuzzy”). Optionally, NIST could also consider issuing guidance on how generative AI companies can make their free models’ records available to be crawled and indexed by search engines, so that fuzzy provenance information would show text matches with generative AI model records. There are tradeoffs to this recommendation, which is why it is optional; see FAQs for further discussion. Making both provenance and fuzzy provenance information available (in no more than one click) will give users more information to help them evaluate how trustworthy a piece of text is and reduce information integrity risks. 

Combined Provenance and Fuzzy Provenance Approach

Figure 1. Mock implementation of combined provenance and fuzzy provenance
Figure 1 illustrates what an implementation of the combined provenance and fuzzy provenance guidance might include. When a user highlights a sufficiently long piece of text, they can click “learn more about this text” to find more information.

There are ways to communicate provenance and fuzzy provenance so that the information is both useful and easy to understand; the concept in Figure 1 shows one way of displaying the provenance of text.

Benefits of the Combined Approach

Showing both provenance and fuzzy provenance information provides users with critical context to evaluate the trustworthiness of a piece of text. Between provenance and fuzzy provenance, users would have access to information about many pieces of high-impact text, especially claims that could be particularly harmful for individuals, groups, or society at large. Making all this information immediately available also reduces friction for users so that they can get this information right where they encounter text.

Provenance information can be helpful to provide to users when it is available. For instance, knowing that a tech support company’s website description was AI-generated may encourage users to check other sources (like reviews) to see if the company is a real entity (and AI was used just to generate the description) or a fake entity entirely, before giving a deposit to hire the company (see user journey 1 in this video for an example).

Where clear provenance information is not available, fuzzy provenance can help fill the gap by providing valuable context to users in several ways.

Fuzzy provenance is also effective because it shows context and gives users autonomy to decide how to interpret that context. Academic studies have found that users tend to be more receptive when presented with further information they can use for their own critical thinking compared to being shown a conclusion directly (like a label), which can even backfire or be misinterpreted. This is why users may trust contextual methods like crowdsourced information more than provenance labels.

Finally, fuzzy provenance methods are generally feasible at scale, since they can be easily implemented with existing search engine capabilities (via an exact text match search). Furthermore, since fuzzy provenance only relies on exact text matching with other sources on the internet, it works without needing coordination among text-producers or compliance from bad actors. 
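As a minimal illustration of that exact-match mechanism, the sketch below uses a toy in-memory index in place of a real search engine. The normalization rules, minimum sentence length, and example URL are assumptions for illustration only.

```python
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse whitespace and strip punctuation/case so that trivially
    re-formatted copies of the same sentence still match exactly."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

class FuzzyProvenanceIndex:
    """Toy stand-in for a search engine's exact-phrase index."""

    def __init__(self):
        self._index = defaultdict(list)  # normalized sentence -> [(url, date)]

    def add_document(self, url: str, date: str, text: str):
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if len(sentence.split()) >= 6:  # skip very short, generic fragments
                self._index[normalize(sentence)].append((url, date))

    def lookup(self, highlighted_text: str):
        """Return every indexed source containing the highlighted sentence."""
        return self._index.get(normalize(highlighted_text), [])

# Example: an indexed AI chat log surfaces as a match for pasted text.
index = FuzzyProvenanceIndex()
index.add_document("https://chat.example/share/abc123", "2024-07-01",
                   "The mayor secretly approved the project last week. More text here.")
print(index.lookup("The mayor secretly approved the project last week."))
```

Because both the indexed sentences and the highlighted query pass through the same normalization, trivial reformatting does not break the match, while genuine paraphrases (which fuzzy provenance does not claim to catch) simply return no result.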

Conclusion

To reduce the information integrity risks posed by synthetic text in a scalable and effective way, the National Institute of Standards and Technology (NIST) should develop community guidance on how platforms hosting text-based digital content can make accessible (in no more than one click) the provenance and “fuzzy provenance” of a piece of text, when available. NIST should also consider issuing guidance on how AI companies could make their free generative AI records available to be crawled by search engines, to amplify the effectiveness of “fuzzy provenance”.


Making free generative AI records available to be crawled by search engines involves tradeoffs, which is why it is an optional recommendation. Below are some questions regarding implementation guidance and tradeoffs, including privacy and proprietary considerations.

Frequently Asked Questions on the optional recommendation for free generative AI records
What are some examples of implementation guidance for AI model companies?

Guidance could instruct AI model companies on how to make their free generative AI conversation records available to be crawled and indexed by search engines. Similar to shared ChatGPT logs or Perplexity threads, a unique URL would be created for each conversation, capturing the date it occurred. The key difference is that all free-model conversation records would be made available, but only the AI outputs of the conversation, after removing personally identifiable information (PII) (see the privacy question below). Because users can already choose to share conversations with each other (meaning the conversation logs are retained), and conversation logs for major model providers do not currently appear to have an expiration date, this requirement shouldn’t impose an additional storage burden on AI model companies.
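Below is a minimal sketch of what publishing such a record might look like, under the memo's assumptions that only AI outputs are retained and that each conversation receives a stable, dated URL. The domain, path scheme, and field names are hypothetical.

```python
import hashlib
import json
from datetime import date

def build_crawlable_record(conversation_id: str, convo_date: date,
                           turns: list[dict]) -> dict:
    """Build a public, crawlable record for one free-tier conversation.

    Only AI outputs are published; user prompts are dropped. PII scrubbing
    would happen before publication. The URL scheme is a placeholder.
    """
    ai_outputs = [t["text"] for t in turns if t["role"] == "assistant"]
    slug = hashlib.sha256(conversation_id.encode()).hexdigest()[:16]
    return {
        "url": f"https://chat.example.com/shared/{convo_date.isoformat()}/{slug}",
        "date": convo_date.isoformat(),
        "ai_outputs": ai_outputs,
    }

record = build_crawlable_record(
    "conv-42", date(2024, 7, 1),
    [{"role": "user", "text": "Write a post claiming X."},
     {"role": "assistant", "text": "Here is a draft post about X..."}],
)
print(json.dumps(record, indent=2))
```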

What are some examples of implementation guidance for search engines?

Guidance could instruct search engines on how to crawl and index these model logs so that queries with exact text matches to the AI outputs would surface the appropriate model logs. This would not be very different from search engines crawling and indexing other types of new URLs and should be well within existing search engine capabilities. In terms of storage, since only free-model logs would be crawled and indexed, and most free models rate-limit the number of user messages allowed, storage should also not be a concern. For instance, even with 200 million weekly active users for ChatGPT, the number of conversations in a year would be on the order of billions to tens of billions, which is well within the scale at which existing search engines already operate to enable users to “search the web”.
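A rough back-of-envelope calculation supports that scale estimate; the conversations-per-user figure below is an assumption for illustration, not a reported statistic.

```python
# Order-of-magnitude estimate of free-tier conversation records per year.
weekly_active_users = 200_000_000
conversations_per_user_per_week = 1.5   # assumed; free tiers are rate-limited
weeks_per_year = 52

records_per_year = weekly_active_users * conversations_per_user_per_week * weeks_per_year
print(f"{records_per_year:.1e}")  # ~1.6e10, i.e., on the order of ten billion records
```

Even at the high end, that volume is modest compared with the scale of the web that major search indexes already cover.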

How can we ensure user privacy when making free AI model records available?

  • Output filtering should be applied to the AI outputs to remove any personally identifiable information (PII) present in the model’s responses. However, it might still be possible to infer who the original user was by looking at the AI outputs taken together and reconstructing some of the user prompts. This is a privacy concern that should be further investigated. Possible mitigations include additionally removing location references beyond a certain granularity (e.g., removing mentions of neighborhoods while retaining mentions of states) and presenting the AI responses in a conversation in a randomized order. A minimal filtering sketch follows this list.

  • Removals should be made possible by a user-initiated process demonstrating privacy concerns, similar to existing search engine removal protocols.

  • User consent would also be an important consideration here. NIST could propose that free-model users must opt in, or that free-model record crawling/indexing be enabled by default with an opt-out; requiring opt-in, however, could greatly compromise the coverage and reliability of fuzzy provenance.
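As a minimal sketch of the output-filtering mitigation described above (and only a sketch: a production system would rely on vetted PII-detection tooling and human review rather than a handful of regular expressions), the example below scrubs obvious identifiers and coarsens fine-grained locations before a record is published. The patterns and place list are illustrative assumptions.

```python
import re

# Illustrative patterns only; real deployments would use dedicated PII tooling.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

# Hypothetical list of fine-grained places to coarsen; states/regions are kept.
FINE_GRAINED_PLACES = ["Squirrel Hill", "Park Slope"]

def scrub_ai_output(text: str) -> str:
    """Remove obvious PII and overly precise locations from an AI output
    before it is published to a crawlable record (a sketch, not a complete
    solution)."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    for place in FINE_GRAINED_PLACES:
        text = text.replace(place, "[NEIGHBORHOOD]")
    return text

print(scrub_ai_output("Email me at jane.doe@example.com; I live in Squirrel Hill."))
# -> "Email me at [EMAIL]; I live in [NEIGHBORHOOD]."
```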

What proprietary tradeoffs should be considered when making free AI model outputs available to be crawled and indexed by search engines?

  • Training on AI-generated text: AI companies are concerned about inadvertently scooping up too much AI-generated text on the web and training on it instead of higher-quality human-generated text, thus degrading the quality of their own generative models. However, because these conversation logs would have identifiable domain prefixes (e.g., chatgpt.com, perplexity.ai), it would be easy to exclude them from training if desired. Indeed, provenance and fuzzy provenance may help AI companies avoid unintentionally training on AI-generated text.

  • Sharing model outputs: On the flip side, AI companies might be concerned that making so many AI-generated model outputs available for competitors to access could help competitors improve their own models. This is a fair concern, though it is partially mitigated by the facts that (a) only the AI outputs, not the specific user inputs, would be available, and (b) only free-model outputs would be logged, rather than those of any premium models, providing some proprietary protection. However, it is still possible that competitors could enhance their own responses by training at scale on the structure of AI outputs from other models.

Tending Tomorrow’s Soil: Investing in Learning Ecosystems

“Tending soil.”

That’s how Fred Rogers described Mister Rogers’ Neighborhood, his beloved television program that aired from 1968 to 2001. Grounded in principles gleaned from top learning scientists, the Neighborhood offered a model for how “learning ecosystems” can work in tandem to tend the soil of learning. 

Today, a growing body of evidence suggests that Rogers’ model was not only effective, but that real-life learning ecosystems – networks that include classrooms, living rooms, libraries, museums, and more – may be the most promising approach for preparing learners for tomorrow. As such, cities and regions around the world are constructing thoughtfully designed ecosystems that leverage and connect their communities’ assets, responding to the aptitudes, needs, and dreams of the learners they serve. 

Efforts to study and scale these ecosystems at local, state, and federal levels would position the nation’s students as globally competitive, future-ready learners.

The Challenge

For decades, America’s primary tool for “tending soil” has been its public schools, which are (and will continue to be) the country’s best hope for fulfilling its promise of opportunity. At the same time, the nation’s industrial-era soil has shifted. From the way our communities function to the way our economy works, dramatic social and technological upheavals have remade modern society. This incongruity – between the world as it is and the world that schools were designed for – has blunted the effectiveness of education reforms; heaped systemic, society-wide problems on individual teachers; and shortchanged the students who need the most support.

“Public education in the United States is at a crossroads,” notes a report published by the Alliance for Learning Innovation, Education Reimagined, and Transcend: “to ensure future generations’ success in a globally competitive economy, it must move beyond a one-size-fits-all model towards a new paradigm that prioritizes innovation that holds promise to meet the needs, interests, and aspirations of each and every learner.”

What’s needed is the more holistic paradigm epitomized by Mister Rogers’ Neighborhood: a collaborative ecosystem that sparks engaged, motivated learners by providing the tools, resources, and relationships that every young person deserves.

The Opportunity

With components both public and private, virtual and natural, “learning ecosystems” found in communities around the world reflect today’s connected, interdependent society. These ecosystems are not replacements for schools – rather, they embrace and support all that schools can be, while also tending to the vital links between the many places where kids and families learn: parks, libraries, museums, afterschool programs, businesses, and beyond. The best of these ecosystems function as real-life versions of Mister Rogers’ Neighborhood: places where learning happens everywhere, both in and out of school. Where every learner can turn to people and programs that help them become, as Rogers used to say, “the best of whoever you are.”

Nearly every community contains the components of effective learning ecosystems. The partnerships forged within them can – when properly tended – spark and spread high-impact innovations; support collaboration among formal and informal educators; provide opportunities for young people to solve real-world problems; and create pathways to success in a fast-changing modern economy. By studying and investing in the mechanisms that connect these ecosystems, policymakers can build “neighborhoods” of learning that prepare students for citizenship, work, and life.

Plan of Action

Learning ecosystems can be cultivated at every level. Whether local, state, or federal, interested policymakers should:

Establish a commission on learning ecosystems. Tasked with studying learning ecosystems in the U.S. and abroad, the commission would identify best practices and recommend policy that 1) strengthens an area’s existing learning ecosystems and/or 2) nurtures new connections. Launched at the federal, state, or local level and led by someone with a track record of getting things done, the commission should include representatives from various sectors, including early childhood educators, K-12 teachers and administrators, librarians, researchers, CEOs and business leaders, artists, makers, and leaders from philanthropic and community-based organizations. The commission will help identify existing activities, research, and funding for learning ecosystems and will foster coordination and collaboration to maximize the effectiveness of these ecosystems’ resources.

A 2024 report by Knowledge to Power Catalysts notes that these cross-sector commissions are increasingly common at various levels of government, from county councils to city halls. As policymakers establish interagency working groups, departments of children and youth, and networks of human services providers, “such offices at the county or municipal level often play a role in cross-sector collaboratives that engage the nonprofit, faith, philanthropic, and business communities as well.”

Pittsburgh’s Remake Learning ecosystem, for example, is steered by the Remake Learning Council, a blue-ribbon commission of Southwestern Pennsylvania leaders from education, government, business, and the civic sector committed to “working together to support teaching, mentoring, and design – across formal and informal educational settings – that spark creativity in kids, activating them to acquire knowledge and skills necessary for navigating lifelong learning, the workforce, and citizenship.”

Establish a competitive grant program to support pilot projects. These grants could seed new ecosystems and/or support innovation among proven ecosystems. (Several promising ecosystems are operating throughout the country already; however, many are excluded from funding opportunities by narrowly focused RFPs.) This grant program can be administered by the commission to catalyze and strengthen learning ecosystems at the federal, state, or local levels, and it could be modeled on existing competitive programs.

Host a summit on learning ecosystems. Leveraging the gravitas of a government and/or civic institution such as the White House, a governor’s mansion, or a city hall, bring members of the commission together with learning ecosystem leaders and practitioners, along with cross-sector community leaders. A summit will underscore promising practices, share lessons learned, and highlight monetary and in-kind commitments to support ecosystems. The summit could also apply to learning ecosystems the philanthropic commitments model that previous presidential administrations developed and used to secure private and philanthropic support. Visit remakelearning.org/forge to see an example of one summit’s schedule, activities, and grantmaking opportunities.

Establish an ongoing learning ecosystem grant program for scaling and implementing lessons learned. This grant program could be administered at the federal, state, or local level – by a city government, for example, or by partnerships like the Appalachian Regional Commission. As new learning ecosystems form and existing ones evolve, policymakers should continue to provide grants that support learning ecosystem partnerships between communities that allow innovations in one city or region to take root in another. 

Invest in research, publications, convenings, outreach, and engagement efforts that highlight local ecosystems and make their work more visible, especially for families. The ongoing grant program can include funding for opportunities that elevate the benefits of learning ecosystems. Events such as Remake Learning Days – an annual festival billed as “the world’s largest open house for teaching and learning” and drawing an estimated 300,000 attendees worldwide – build demand for learning ecosystems among parents, caregivers, and community leaders, ensuring grassroots buy-in and lasting change.

This memo was developed in partnership with the Alliance for Learning Innovation, a coalition dedicated to advocating for building a better research and development infrastructure in education for the benefit of all students. 


Frequently Asked Questions
How do learning ecosystems benefit students?

Within a learning ecosystem, students aren’t limited to classrooms, schools, or even their own districts – nor do they have to travel far to find opportunities that light them up. By blurring the lines between “in school” and “out of school,” ecosystems make learning more engaging, more relevant, and even more joyful. Pittsburgh’s Remake Learning ecosystem, for example, connects robotics professionals with classroom teachers to teach coding and STEM. Librarians partner with teaching artists to offer weeklong deep dives into topics attractive to young people. A school district launches a program – say, a drone academy for girls – and opens it up to learners from neighboring districts.


As ecosystems expand to include more members, the partnerships formed within them spark exciting, ever-evolving opportunities for learners.

How do learning ecosystems benefit communities?

Within an ecosystem, learning isn’t just for young people. An ecosystem’s out-of-school components – businesses, universities, makerspaces, and more – bring real-world problems directly to learners, leading to tangible change in communities and a more talented, competitive future workforce. In greater Washington, D.C., for example, teachers partner with cultural institutions to develop curricula based on students’ suggestions for improving the city. In Kansas City, high schoolers partner with entrepreneurs and health care professionals to develop solutions for everything from salmonella poisoning to ectopic pregnancy. And in Pittsburgh, public school students are studying cybersecurity, training for aviation careers, conducting cutting-edge cancer research, and more.

How do learning ecosystems benefit educators?

Learning ecosystems also support educators. In Pittsburgh, educators involved in Remake Learning note that “they feel celebrated and validated in their work,” writes researcher Erin Gatz. Moreover, the ecosystem’s “shared learning and supportive environment were shown to help educators define or reinforce their professional identity.”

How do learning ecosystems benefit local economies?

Learning ecosystems can aid local economies, too. In eastern Kentucky, an ecosystem of school districts, universities, and economic development organizations empowers students to reimagine former coal land for entrepreneurial purposes. And in West Virginia, an ecosystem of student-run companies has helped the state recover from natural disasters.

Where are examples of learning ecosystems already operating in the United States?

Since 2007, Pittsburgh’s Remake Learning has emerged as the most talked-about learning ecosystem in the world. Studied by scholars, recognized by heads of state, and expanding to include more than 700 schools, libraries, museums, and other sites of learning, Remake Learning has – through two decades of stewardship – inspired more than 40 additional learning ecosystems. Meanwhile, the network’s Moonshot Grants are seeding new ecosystems across the nation and around the world.

What inspiration can we draw from globally?

Global demand for learning ecosystems is growing. A 2020 report released by HundrED, a Finland-based nonprofit, profiles 16 of the most promising examples operating in the United States. Likewise, the World Innovation Summit for Education explores nine learning ecosystems operating worldwide: “Across the globe, there is a growing consensus that education demands radical transformation if we want all citizens to become future-ready in the face of a more digitally enabled, uncertain, and fast-changing world,” the summit notes. “Education has the potential to be the greatest enabler of preparing everyone, young and old, for the future, yet supporting learning too often remains an issue for schools alone.”

What about public schools?

Learning ecosystems support collaboration and community among public schools, connecting classrooms, schools, and educators across diverse districts. Within Remake Learning, for example, a cohort of 42 school districts works together – and in partnership with afterschool programs, health care providers, universities, and others – to make Western Pennsylvania a model for the future of learning.


The cohort’s collaborative approach has led to a dazzling array of awards and opportunities for students: A traditional classroom becomes a futuristic flight simulator. A school district opens its doors to therapy dogs and farm animals. Students in dual-credit classes earn college degrees before they’ve even finished high school. Thanks in part to the ecosystem’s efforts, Western Pennsylvania is now recognized as home to the largest cluster of nationally celebrated school districts in the country.

I’m interested in starting or supporting a learning ecosystem in my community. Where do I start?

As demand for learning ecosystems continues to gather momentum, several organizations have released playbooks and white papers designed to guide policymakers, practitioners, and other interested parties; these resources are a helpful place to start.


What are some additional resources?

In addition, Remake Learning has released three publications that draw on more than twenty years of “tending soil.” The publications share methods and mindsets for navigating some of the most critical questions that face ecosystems’ stewards.


Protecting Infant Nutrition Security:
Shifting the Paradigm on Breastfeeding to Build a Healthier Future for all Americans

The health and wellbeing of American babies have been put at risk in recent years, and we can do better. Recent events have revealed deep vulnerabilities in our nation’s infant nutrition security: pandemic-induced disruptions to maternity care practices that support the establishment of breastfeeding, the infant formula recall and resulting shortage, and a spate of weather-related natural disasters have all exposed infrastructure gaps and a lack of resilience to safety and supply chain challenges. All of these put babies in danger during times of crisis.

Breastfeeding is foundational to lifelong health and wellness, but systemic barriers prevent many families from meeting their breastfeeding goals. The policies and infrastructure surrounding postpartum families often limit their ability to succeed in breastfeeding. Despite these well-established benefits, new data from the CDC show that while 84.1% of infants start out breastfeeding, these numbers fall dramatically in the weeks after birth, with only 57.5% of infants breastfeeding exclusively at one month of age. Disparities persist across geographic location and other sociodemographic factors, including race/ethnicity, maternal age, and education. Breastfeeding rates in North America are the lowest in the world. Longstanding evidence shows that it is not a lack of desire but rather a lack of support, access, and resources that creates these barriers.

This administration has an opportunity to take a systems approach to increasing support for breastfeeding and making parenting easier for new mothers. Key policy changes to address systemic barriers include providing guidance to states on expanding Medicaid coverage of donor milk, building breastfeeding support and protection into the existing emergency response framework at the Federal Emergency Management Agency, and expressing support for establishing a national paid leave program. 

Policymakers on both sides of the aisle agree that no baby should ever go hungry, as evidenced by the bipartisan passage of recent breastfeeding legislation (detailed below) and widely supported regulations. However, significant barriers remain. This administration has the power to address long-standing inequities and set the stage for the next generation of parents and infants to thrive. Ensuring that every family has the support they need to make the best decisions for their child’s health and wellness benefits the individual, the family, the community, and the economy. 

Challenge and Opportunity

Breastfeeding plays an essential role in establishing good nutrition and healthy weight, reducing the risk of chronic disease and infant mortality, and improving maternal and infant health outcomes. Breastfed children have a decreased risk of obesity, type 1 and 2 diabetes, asthma, and childhood leukemia. Women who breastfeed reduce their risk of specific chronic diseases, including type 2 diabetes, cardiovascular disease, and breast and ovarian cancers. On a relational level, the hormones produced while breastfeeding, like oxytocin, enhance the maternal-infant bond and emotional well-being. The American Academy of Pediatrics recommends that infants be exclusively breastfed for approximately six months, with continued breastfeeding alongside complementary foods for two years or as long as mutually desired by mother and child.

Despite the well-documented health benefits of breastfeeding, deep inequities in healthcare, community, and employment settings impede success. Systemic barriers disproportionately impact Black, Indigenous, and other communities of color, as well as families in rural and economically distressed areas. These populations already bear the weight of numerous health inequities, including limited access to nutritious foods and higher rates of chronic disease—issues that breastfeeding could help mitigate. 

Breastfeeding Saves Dollars and Makes Sense 

Low breastfeeding rates in the United States cost our nation millions of dollars through higher health system costs, lost productivity, and higher household expenditures. Globally, not breastfeeding is associated with economic losses of about $302 billion annually, or 0.49% of world gross national income. At the national level, improving breastfeeding practices through programs and policies is one of the best investments a country can make, as every dollar invested is estimated to yield a $35 economic return.

In the United States, chronic disease management results in trillions of dollars in annual healthcare costs, which increased breastfeeding rates could help reduce. In the workplace setting, employers see significant cost savings when their workers are able to maintain breastfeeding after returning to work. Increased breastfeeding rates are also associated with reduced environmental impact and associated expenses. Savings can be seen at home as well, as following optimal breastfeeding practices reduces household expenditures. Investments in infant nutrition last a lifetime, paying long-term dividends critical for economic and human development. Economists have completed cost-benefit analyses, finding that investments in nutrition are one of the best value-for-money development actions, laying the groundwork for the success of investments in other sectors.

Ongoing trends in breastfeeding outcomes indicate that there are entrenched policy-level challenges and barriers that need to be addressed to ensure that all infants have an opportunity to benefit from access to human milk. Currently, for too many families, the odds are stacked against them. It’s not a question of individual choice but one of systemic injustice. Families are often forced into feeding decisions that do not reflect their true desires due to a lack of accessible resources, support, and infrastructure.

While the current landscape is rife with challenges, the solutions are known and the potential benefits are tremendous. This administration has the opportunity to realize these benefits and implement a smart and strategic response to the urgent situation that our nation is facing just as the political will is at an all-time high. 

The History of Breastfeeding Policy

In the late 1960s and early 1970s, fewer than 30 percent of infants were breastfed. The concerted efforts of individuals and organizations and the emergence of the field of lactation have counteracted or removed many barriers, and policymakers have sent a clear and consistent message that breastfeeding is bipartisan. This is evident in the range of recent lactation-friendly legislation, including: 

Administrative efforts, ranging from the Business Case for Breastfeeding to The Surgeon General’s Call to Action to Support Breastfeeding and the armed services’ updates to uniform requirements for lactating soldiers, demonstrate a clear commitment to breastfeeding support across the decades. 

These policy changes have made a difference. But additional attention and investment, with a particular focus on the birth and early postpartum period as well as during and after emergencies, are needed to secure the potential health and economic benefits of comprehensive societal support for breastfeeding. This administration can take considerable steps toward improving U.S. health and wellness and protecting infant nutrition security.

Plan of Action

A range of federal agencies coordinate programs, services, and initiatives impacting the breastfeeding journey for new parents. Expanding and building on existing efforts through the following steps can help address some of today’s most pressing barriers to breastfeeding. 

Each of the recommended actions can be implemented independently and would create meaningful, incremental change for families. However, a comprehensive approach that implements all these recommendations would create the marked shift in the landscape needed to improve breastfeeding initiation and duration rates and establish this administration as a champion for breastfeeding families. 

Agency: Federal Emergency Management Agency (FEMA)
Agency Role: FEMA coordinates within the federal government to make sure America is equipped to prepare for and respond to disasters.
Recommended Action: Require FEMA to participate in the Federal Interagency Breastfeeding Workgroup, a collection of federal agencies that come together to connect and collaborate on breastfeeding issues.
Anticipated Outcome: Increased connection and coordination across agencies.

Agency: Federal Emergency Management Agency (FEMA)
Agency Role: FEMA coordinates within the federal government to make sure America is equipped to prepare for and respond to disasters.
Recommended Action: Update the FEMA Public Assistance Program and Policy Guide to include breastfeeding and lactation as a functional need so that emergency response efforts can include services from lactation support providers.
Anticipated Outcome: Integration of breastfeeding support into emergency response and recovery efforts.

Agency: Office of Management & Budget (OMB)
Agency Role: The OMB oversees the implementation of the President’s vision across the Executive Branch, including through budget development and execution.
Recommended Action: Include funding for the establishment of a national paid family and medical leave program as a priority in the President’s Budget.
Anticipated Outcome: Setting the stage for Congressional action.

Agency: Domestic Policy Council (DPC)
Agency Role: The DPC drives the development and implementation of the President’s domestic policy agenda in the White House and across the Federal government.
Recommended Action: Support the efforts of the bipartisan, bicameral congressional Paid Leave Working Group.
Anticipated Outcome: Setting the stage for Congressional action.

This summary lists the recommendations, grouped by the federal agency that would be responsible for implementing each change to increase breastfeeding rates in the U.S. for improved health and economic outcomes.

Recommendation 1. Increase access to pasteurized donor human milk by directing the Centers for Medicare & Medicaid Services (CMS) to provide guidance to states on expanding Medicaid coverage. 

Pasteurized donor human milk is lifesaving for vulnerable infants, particularly those born preterm or with serious health complications. Across the United States, milk banks gently pasteurize donated human milk and distribute it to fragile infants in need. This lifesaving liquid gold reduces mortality rates, lowers healthcare costs, and shortens hospital stays. Specifically, the use of donor milk is associated with increased survival rates and lowered rates of infections, sepsis, serious lung disease, and gastrointestinal complications. In 2022, there were 380,548 preterm births in the United States, representing 10.4% of live births, so the potential for health and cost savings is substantial. Data from one study shows that the cost of a neonatal intensive care unit stay for infants at very low birth weight is nearly $220,000 for 56 days. The use of donor human milk can reduce hospital length of stay by 18-50 days by preventing the development of necrotizing enterocolitis in preterm infants. The benefits of human milk extend beyond the inpatient stay, with infants receiving all human milk diets in the NICU experiencing fewer hospital readmissions and better overall long-term outcomes.
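To make the scale of the potential savings concrete, the sketch below works through the figures cited above. It is only a back-of-the-envelope estimate: the per-day cost is derived by dividing the reported 56-day stay cost evenly, and real NICU costs vary widely by hospital and by infant.

```python
# Back-of-the-envelope estimate of per-infant savings from donor milk,
# using only the figures cited above; actual costs vary widely by hospital.
NICU_STAY_COST = 220_000   # reported cost of a 56-day NICU stay for a very-low-birth-weight infant
NICU_STAY_DAYS = 56
DAYS_AVOIDED = (18, 50)    # reported range of length-of-stay reduction

cost_per_day = NICU_STAY_COST / NICU_STAY_DAYS   # roughly $3,929 per day
low, high = (days * cost_per_day for days in DAYS_AVOIDED)
print(f"Estimated savings per infant: ${low:,.0f} to ${high:,.0f}")
# -> roughly $70,700 to $196,400 per infant, before accounting for
#    avoided readmissions and better long-term outcomes.
```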

Although donor milk has important health implications for vulnerable infants in all communities and can result in significant economic benefit, donor milk is not equitably accessible. While milk banks serve all states, not all communities have easy access to donated human milk. Moreover, many insurers are not required to cover the cost, creating significant barriers to access and contributing to racial and geographic disparities.

To ensure that more babies in need have access to lifesaving donor milk, the administration should work with CMS to expand donor milk coverage under state Medicaid programs. Medicaid covers approximately 40% of all US births and 50% of all early preterm births. Medicaid programs in at least 17 states and the District of Columbia already include coverage of donor milk. The administration can expand access to this precious milk, help reduce health care costs, and address racial and geographic disparities by releasing guidance for the remaining states regarding coverage options in Medicaid.

Recommendation 2. Include infant feeding in Federal Emergency Management Agency (FEMA) emergency planning and response.

Infants and children are among the most vulnerable in an emergency, so it is critical that their unique needs are considered and included in emergency planning and response guidance. Breastfeeding provides clean, optimal nutrition, requires no fuel, water, or electricity, and is available, even in the direst circumstances. Human milk contains antibodies that fight infection, including diarrhea and respiratory infections common among infants in emergency situations. Yet efforts to protect infant and young child feeding in emergencies are sorely lacking, particularly in the immediate aftermath of disasters and emergencies. 

Ensuring access to lactation support and supplies as part of emergency response efforts is essential for protecting the health and safety of infants. Active support and coordination between federal, state, and local governments, the commercial milk formula industry, lactation support providers, and all other relevant actors involved in response to emergencies is needed to ensure safe infant and young child feeding practices and equitable access to support. There are two simple, cost-effective steps that FEMA can take to protect breastfeeding, preserve resources, and thus save additional lives during emergencies.

Recommendation 3. Expand access to paid family & medical leave by including paid leave as a priority in the President’s Budget and supporting the efforts of the bipartisan, bicameral congressional Paid Leave Working Group. 

Employment policies in the United States make breastfeeding harder than it needs to be. The United States is one of the only countries in the world without a national paid family and medical leave program. Many parents return to work quickly after birth, before a strong breastfeeding relationship is established, because they cannot afford to take unpaid leave or because they do not qualify for paid leave programs with their employer or through state or local programs. Nearly 1 in 4 employed mothers return to work within two weeks of childbirth.

Paid family leave programs make it possible for employees to take time for childbirth recovery, bond with their baby, establish feeding routines, and adjust to life with a new child without threatening their family’s economic well-being. This precious time provides the foundation for success, contributing to improved rates of breastfeeding initiation and duration, yet only a small portion of workers are able to access it. There are significant disparities in access to paid leave among racial and ethnic groups, with Black and Hispanic employees less likely than their white non-Hispanic counterparts to have access to paid parental leave. There are similar disparities in breastfeeding outcomes among racial groups.  

Momentum to improve the paid family and medical leave landscape in the United States is building substantially. Thirteen states and the District of Columbia have established mandatory state paid family leave systems. Supporting paid leave has become an important component of candidate campaign plans, and bipartisan support for establishing a national program remains strong among voters. The formation of Bipartisan Paid Family Leave Working Groups in both the House and Senate demonstrates commitment from policymakers on both sides of the aisle. 

By directing the Office of Management and Budget to include funding for paid leave in the President’s Budget recommendation and working collaboratively with the Congressional Paid Leave Working Groups, the administration can advance federal efforts to increase access to paid family and medical leave, improving public health and helping American businesses.  

Conclusion

These three strategies offer the White House an opportunity to make an immediate and lasting impact by protecting infant nutrition security and addressing disparities in breastfeeding rates from day one of the Presidential term. A systems approach that uses multiple strategies for integrating breastfeeding into existing programs and efforts would help shift the paradigm for new families by addressing long-standing barriers that disproportionately affect marginalized communities, particularly Black, Indigenous, and other families of color. A clear and concerted effort from the Administration, as outlined here, offers the opportunity to benefit all families and future generations of American babies. 

The administration’s focused and strategic efforts will create a healthier, more supportive world for babies, families, and breastfeeding parents, improve maternal and child health outcomes, and strengthen the economy. This administration has the chance to positively shape the future for generations of American families, ensuring that every baby gets the best possible start in life and that every parent feels empowered and supported.

Now is the time to build on recent momentum and create a world where families have true autonomy in infant feeding decisions. A world where paid family leave allows parents the time to heal, bond, and establish feeding routines; communities provide equitable access to donor milk; and federal, state, and local agencies have formal plans to protect infant feeding during emergencies, ensuring no baby is left vulnerable. Every family deserves to feel empowered and supported in making the best choices for their children, with equitable access to resources and support systems.

This policy memo was written with support from Suzan Ajlouni, Public Health Writing Specialist at the U.S. Breastfeeding Committee. The policy recommendations have been identified through the collective learning, idea sharing, and expertise of USBC members and partners.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Isn’t the choice to breastfeed a personal one?

Rather than being a matter of personal choice, infant feeding practice is informed by circumstance and level (or lack) of support. When roadblocks exist at every turn, families are backed into a decision because the alternatives are not available, attainable, or viable. United States policies and infrastructure were not built with the realities of breastfeeding in mind. Change is needed to ensure that all who choose to breastfeed are able to meet their personal breastfeeding goals, and society at large reaps the beneficial social and economic outcomes.

How much would it cost to establish a national paid family and medical leave program?

The Fiscal Year 2024 President’s Budget proposed to establish a national, comprehensive paid family and medical leave program, providing up to 12 weeks of leave to allow eligible workers to take time off to care for and bond with a new child; care for a seriously ill loved one; heal from their own serious illness; address circumstances arising from a loved one’s military deployment; or find safety from domestic violence, sexual assault, or stalking. The budget recommendation included $325 billion for this program. It’s important to look at this with the return on investment in mind, including improved labor force attachment and increased earnings for women; better outcomes and reduced health care costs for ill, injured, or disabled loved ones; savings to other tax-funded programs, including Medicaid, SNAP, and other forms of public assistance; and national economic growth, jobs growth, and increased economic activity.

How will we know if these efforts are having an impact?

There are a variety of national monitoring and surveillance efforts tracking breastfeeding initiation, duration, and exclusivity rates that will inform how well these actions are working for the American people, including the National Immunization Survey (NIS), Pregnancy Risk Assessment and Monitoring System (PRAMS), Infant Feeding Practices Study, and National Vital Statistics System. The CDC Breastfeeding Report card is published biannually to bring these key data points together and place them into context. Significant improvements in the data have already been seen across recent decades, with breastfeeding initiation rates increasing from 73.1 percent in 2004 to 84.1 percent in 2021.

Is there enough buy-in from organizations and individuals to support these systemic changes?

The U.S. Breastfeeding Committee is a coalition bringing together approximately 140 organizations from coast to coast representing the grassroots to the treetops – including federal agencies, national, state, tribal, and territorial organizations, and for-profit businesses – that support the USBC mission to create a landscape of breastfeeding support across the United States. Nationwide, a network of hundreds of thousands of grassroots advocates from across the political spectrum support efforts like these. Together, we are committed to ensuring that all families in the U.S. have the support, resources, and accommodations to achieve their breastfeeding goals in the communities where they live, learn, work, and play. The U.S. Breastfeeding Committee and our network stand ready to work with the administration to advance this plan of action.

Supporting Device Reprocessing to Reduce Waste in Health Care

The U.S. healthcare system produces 5 million tons of waste annually, or approximately 29 pounds per hospital bed daily. Roughly 80 percent of the healthcare industry’s carbon footprint comes from the production, transportation, use, and disposal of single-use devices (SUDs), which are pervasive in the hospital. Notably, 95% of the environmental impact of single-use medical products results from the production of those products. 

While the Food and Drug Administration (FDA) oversees new devices being brought to market, it is up to the manufacturer to determine whether a device will be marketed as single-use or multiple-use. Manufacturers have a financial incentive to market devices as “single-use” or “disposable,” since marketing a device as reusable requires expensive cleaning validations.

To decrease healthcare waste and environmental impact, the FDA should lead in identifying reusable devices that can be safely reprocessed and in incentivizing manufacturers to test the reprocessing of their devices. This will require the FDA to strengthen its management of single-use and reusable device labeling. Further, the Veterans Health Administration, the nation’s largest healthcare system, should reverse its prohibition on reprocessed SUDs and become a national leader in the reprocessing of medical devices.

Challenge and Opportunity

While healthcare institutions are embracing decarbonization and waste reduction plans, they cannot do this effectively without addressing the enormous impact of single-use devices (SUDs). The majority of research literature concludes that SUDs are associated with higher levels of environmental impact than reusable products. 

FDA regulations governing SUD reprocessing make it extremely challenging for hospitals to reprocess low-risk SUDs, which is inconsistent with the FDA’s “least burdensome provisions.” The FDA requires hospitals or commercial SUD reprocessing facilities to act as the device’s manufacturer, meaning they must meet the FDA’s requirements for medical device manufacturers and take on the associated liabilities. Hospitals are not keen to take on the liability of a manufacturer, yet commercial reprocessors do not offer many of the lower-risk devices that could be reprocessed. 

As a result, hospitals and clinics are no longer willing to sterilize SUDs through methods like autoclaving, despite documentation showing that sterilization is safe and precedent showing that similar devices have been safely sterilized and reused for many years without adverse events. Many devices, including pessaries for pelvic organ prolapse and titanium phacoemulsification tips for cataract surgery, can be safely reprocessed for their clinical use. Given their risk profile, these products need not be subject to the FDA’s full medical device manufacturer requirements.

Further, manufacturers are incentivized to bring SUDs to market more quickly than devices that may be reprocessed. Manufacturers often market devices as single-use solely because the original manufacturer chose not to conduct expensive cleaning and sterilization validations, not because such validations cannot be done. FDA regulations that govern SUDs should be better tailored to each device so that clinicians on the frontlines can provide appropriate and environmentally sustainable health care. 

Reprocessed devices cost 25 to 40% less. Thus, the use of reprocessed SUDs can reduce hospital costs significantly: about $465 million in 2023. Per the Association of Medical Device Reprocessors, if the reprocessing practices of the top-performing 10% of hospitals were adopted across all hospitals that use reprocessed devices, U.S. hospitals could have saved an additional $2.28 billion that same year. Enabling and encouraging the use of reprocessed SUDs can thus yield significant cost reductions without compromising patient care. 
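The sketch below illustrates how the 25 to 40% discount cited above scales at the level of a single hospital. Only the discount range comes from the text; the device price and annual volume are hypothetical placeholders for illustration.

```python
# Rough illustration of the per-device economics cited above. The list price
# and annual volume are hypothetical; only the 25-40% discount range comes
# from the text.
LIST_PRICE = 240.0            # hypothetical price of a new single-use device
DISCOUNT_RANGE = (0.25, 0.40) # reported savings range for reprocessed devices
ANNUAL_VOLUME = 10_000        # hypothetical annual reprocessed-device volume for one hospital

for discount in DISCOUNT_RANGE:
    per_device = LIST_PRICE * discount
    print(f"{discount:.0%} cheaper -> ${per_device:,.0f}/device, "
          f"${per_device * ANNUAL_VOLUME:,.0f}/year at {ANNUAL_VOLUME:,} devices")
```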

Plan of Action

Because the FDA has regulated SUD reprocessing since 2000, it is imperative that the agency take the lead on creating a clear, streamlined process for clearing or approving reusable devices and ensuring the safety and efficacy of reprocessed devices. These recommendations would permit healthcare systems to reprocess and reuse medical devices without fear of noncompliance findings from the Joint Commission or the Centers for Medicare and Medicaid Services, which rely on FDA regulations. Further, the nation’s largest healthcare system, the Veterans Health Administration, should become a leader in medical device reprocessing and showcase the standard of practice for sustainable healthcare.

  1. The FDA should publish a list of SUDs that have a proven track record of safe reprocessing to empower hospitals to reduce waste, costs, and environmental impact without compromising patient safety. The FDA should change the labels of single-use devices to multi-use when reuse by hospitals is possible and validated via clinical studies, as the “single-use” label has promoted the mistaken belief that SUDs cannot be safely reprocessed. Per the FDA, the single-use label simply means a given device has not undergone the original equipment manufacturer (OEM) validation tests necessary to label a device “reusable.” The label does not mean the device cannot be cleared for reprocessing. 
  2. To help governments and healthcare systems prioritize the environmental and cost benefits of reusable devices over SUDs, the FDA should incentivize applications for reusable or commercially reprocessable devices, such as by expediting review. The FDA can also incentivize use of reprocessed devices through payments to hospitals that meet reprocessing benchmarks. 
  3. The FDA should not subject low-risk devices that can be safely reprocessed for clinical use to full device manufacturer requirements. The FDA should further support healthcare procurement staff by creating an accessible database of devices cleared for reprocessing and alerting healthcare systems to regulated reprocessing options. In doing so, the FDA can reduce the burden on hospitals of reprocessing low-risk SUDs and encourage healthcare systems to sterilize SUDs through methods like autoclaving. 
  4. As the only major health system in the U.S. to prohibit the use of reprocessed SUDs, the Veterans Health Administration should reverse its prohibition as soon as possible. The prohibition likely persists because of outdated risk determinations, and it comes at major cost to the environment and to Americans. Reversing it would be consistent with the FDA’s conclusions that reprocessed SUDs are safe and effective.  
  5. The FDA should recommend that manufacturers publicly report the materials used in the composition of devices so that end-users can more easily compare products and determine their environmental impact. As explained by AMDR, some OEM practices discourage or fully prevent the use of reprocessed devices; the FDA should vigorously track and impede these practices. Requiring public reporting of device composition will not only help healthcare buyers make more informed decisions, it will also promote a more circular economy that supports sustainability efforts. 

Conclusion

To decrease costs, waste, and environmental impact, the healthcare sector urgently needs to increase its use of reusable devices. One of the largest barriers is FDA rules that impose needlessly stringent requirements on hospitals, hindering the adoption of less wasteful, less costly reprocessed devices.

The FDA’s critical role in medical device labeling, and in clearing or approving more devices as reusable, has downstream implications for many other regulatory and oversight bodies, including the Centers for Medicare & Medicaid Services (CMS), the Association for the Advancement of Medical Instrumentation (AAMI), the Joint Commission, hospitals, health care offices, and health care providers. It is essential for the FDA to step up and take the lead in revising the device reprocessing pipeline. 


Taking on the World’s Factory: A Path to Contain China on Legacy Chips

Last year the Federation of American Scientists (FAS), Jordan Schneider (of ChinaTalk), Chris Miller (author of Chip War), and Noah Smith (of Noahpinion) hosted a call for ideas to address the U.S. chip shortage and Chinese competition. A handful of ideas were selected based on their feasibility and bipartisan nature. This memo is one of them.

Challenge and Opportunity

The intelligent and autonomous functioning of physical machinery is one of the key societal developments of the 21st century, changing and assisting in the way we live our lives. In this context, semiconductors, once a niche good, now form the physical backbone of automated and intelligent systems. The supply chain disruptions of 2020 laid bare the vulnerability of the global economy in the face of a chip shortage, which created scarcity and inflation in everything from smartphones to automobiles. In an even more extreme case, a lack of chips could impact critical infrastructure, such as squeezing the supply of medical devices necessary for many modern procedures. 

The deployment of partially- or fully-automated warfighting further means that Artificial Intelligence (AI) systems now have direct and inescapable impacts on national security. With great power conflict looming on the horizon, threats toward and emanating from the semiconductor supply chain have become even more evident. 

In this context, the crucial role of the People’s Republic of China (PRC) in chip production represents a clear and present danger to global security. Although the PRC currently trails in the production of cutting-edge sub-16 nm chips used for the development of AI models, the country’s market dominance in the field of so-called “trailing edge chips” of 28 nm or above has a much wider impact due to their ubiquity in all traditional use cases outside of AI. 

The most important harm of this is clear: by leveraging its control of a keystone international industry, the Chinese Communist Party will be able to exert greater coercive pressure on other nations. In a hypothetical invasion of Taiwan, this could mean credibly threatening the U.S. and other democratic countries not to intervene under the threat of a semiconductor embargo. Even more dramatically, given the reliance of modern military manufacture on digital equipment, in the case of a full-scale war between the People’s Republic of China and the United States, China could produce enormous amounts of materiel while severely capping the ability of the rest of the world to meet its challenge. 

A secondary but significant risk involves the ability of China to build defects or vulnerabilities into its manufactured hardware. Control over the physical components that underlie critical infrastructure, or even military hardware, could allow targeted action to paralyze U.S. society or government in the face of a crisis. While defense and critical infrastructure supply chains represent only a small fraction of all semiconductor-reliant industrial products, mitigation of this harm represents a baseline test of the ability of the United States to screen imports relevant to national security. 

Beyond Subsidies: A Blueprint for Global Manufacturing

Wresting back control of the traditional semiconductor supply chain from China is widely recognized as a prime policy goal for the United States and allied democratic countries. The U.S. has already begun with the passage of the CHIPS and Science Act in 2022, providing subsidies and tax incentives to encourage the creation of new fabrication plants (fabs) in the United States. But a strategic industry cannot survive on subsidies alone. Preferential tax treatment and government consumption may stand up some degree of semiconductor manufacturing, but it cannot rival China’s scale if the PRC is able to establish itself as the primary chip supplier in both its domestic market and the rest of the world.

Nascent American foundries and the multinational companies that operate them must be able to survive in a competitive international environment without relying on unpredictable future support. They must do this while fighting against PRC-backed chip manufacturers operating with both a strong domestic market and massively developed economies of scale. Given the sheer size of both the Chinese manufacturing base and its domestic market, the U.S. cannot hope to accomplish this goal alone. Only a united coalition of developed and developing countries can hope to compete. 

The good news is that the United States and its partners in Europe and the Indo-Pacific have all the necessary ingredients to realize this vision. Developing countries in South and Southeast Asia and the Pacific have a vast and expanding industrial base, augmented by Special Economic Zones and technical universities. America and its developed partners bring the capital investment and intellectual property necessary to kickstart semiconductor production abroad. 

The goal of a rest-of-world semiconductor alliance will be twofold: to drive down the cost of chips made in the U.S. and its allies while simultaneously pushing up the cost of purchasing legacy semiconductors produced in China to meet it. Only when these two intersect will the balance of global trade begin to tip back toward the democratic world. The first two slates of policy recommendations will focus on decreasing the cost of non-China production and increasing the cost of Chinese imports, respectively. 

Finally, even in the case in which non-Chinese-influenced semiconductors become competitive with those made in the PRC, it will likely be impossible to fully exclude Chinese hardware from American and allied markets. Therefore, the final raft of policy recommendations will focus on mitigating the threat of Chinese chips in American and allied markets, including possible risks of inbuilt cyber vulnerability. 

The creation of an autonomous and secure supply chain entirely outside of China is possible. The challenge will be to achieve semiconductor independence in time to prevent China from successfully weaponizing chip dominance in a future war. With clashes escalating in the South China Sea and threats across the Taiwan Strait growing ever more ominous, the clock is ticking. But America’s Indo-Pacific partners are also increasingly convinced of the urgency of cooperation. The policies presented aim to make maximum use of this critical time to build strategic independence and ensure peace. 

Plan of Action

Recommendation #1: Boosting non-China Manufacturing 

The first and most crucial step toward semiconductor sovereignty is to build and strengthen a semiconductor supply chain outside of China. No matter the ability to protect domestic markets from Chinese competition, U.S. industrial productivity relies on cheap and reliable access to chips. Without this, it is impossible to ramp up industrial production in key industries from defense contracting to consumer electronics. 

According to a report released by the CSIS Wadhwani Center for AI and Advanced Technologies, the global semiconductor value chain broadly involves three components. At the top is design, which involves creating Electronic Design Automation (EDA) software, generating IP, and producing manufacturing equipment. Next is fabrication, which entails printing and manufacturing the wafers that are the key ingredient for finished semiconductors. The final stage is assembly, test, and packaging (ATP), which entails packaging wafers into fully-functioning units that are fit for sale and verifying they work as expected. 

Of these three, the United States possesses the greatest competitive advantage in the field of design, where American intellectual property and research prowess drive many of the innovations of the modern semiconductor industry. Electronic Design Automation software, the software that allows engineers to design chips, is dominated by three major firms, of which two, Cadence and Synopsys, are American companies. The third, Mentor Graphics, is a U.S.-based subsidiary of the German industrial company Siemens. U.S. Semiconductor Manufacturing Equipment (SME) is also an important input in the design stage, with U.S.-based companies currently comprising over 40 percent of the global market share. The United States and Japan alone account for more than two-thirds. 

Meanwhile, the PRC has aggressively ramped up wafer production, aiming to make China an integral part of the global supply chain, often stealing foreign intellectual property along the way to ease its production. Recent reported efforts by the PRC to illicitly acquire U.S. SME underscore that China recognizes the strategic importance of both IP and SME as primary inputs to the chip making process. By stealing the products of American research, China further creates an unfair business environment in which law-abiding third countries are unable to keep up with Chinese capacity. 

Semiconductor Lend-Lease: A Plan for the 21st Century 

The only way for the international community to compete is to level the playing field. In order to do so, we propose that the United States encourage and incentivize its companies to license their IP and SME to third countries looking to build wafer capacity. 

Before the United States officially entered into the Second World War, the administration of President Franklin Delano Roosevelt undertook the “Lend-Lease” policy, agreeing to continue to supply allied countries such as Great Britain with weapons and materiel, without immediate expectation of repayment. Recently, Lend-Lease has been resurrected in the modern context of the defense of Ukraine, with the United States and European powers supplying Ukraine with armaments and munitions Ukrainian industry could not produce itself. 

The crucial point of Lend-Lease is that it takes the form of immediate provision of critical outputs, rather than simple monetary donations, which require time and investment to convert into the desired goods. World War II-era Lend-Lease was not based on a long-term economic or development strategy, but rather on the short-term assessment that without American support, the United Kingdom would fall to Nazi occupation. Given the status of semiconductors as a key strategic good, the parallels with a slow-rolling crisis in the South China Sea and the Taiwan Strait become clear. While in the long term, South, East, and Southeast Asia will likely be able to level up with China in the field of semiconductors, the imminent threats of both Chinese wafer dominance and a potential invasion of Taiwan mean that this development must be accelerated. Rather than industrial and munitions production, as in 1941, the crucial ingredients the United States brings to this process today are intellectual property, design tools, and SME. These are thus the tools that should be leased to U.S. partners and allies, particularly in the Indo-Pacific. By allowing dedicated foreign partners to take advantage of the gains of American research, we will allow them to level up with China and truly compete in the international market. 

Although the economics of such a plan are complex, we present a sketch here of how one iteration might look. The United States federal government could negotiate with the “Big Three” EDA firms to purchase transferable licenses for their EDA software. The U.S. could then “Lend-Lease” licenses to major semiconductor producers in partner countries such as Singapore, Malaysia, Vietnam, the Philippines, or even in Latin America. The U.S. could license this software on the condition that products produced by such companies would be made available at discounted prices to the American market, and that companies should disavow further investment from or cooperation with Chinese entities. Partner companies in the Indo-Pacific could further agree to share any further research results produced using American IP, making further advancements available to American companies in the global market. 

When growing companies attain a predetermined level of market value they could offer compensation to the United States in the form of fees or stock options, which would be collected by the United States under the terms of the treaty and awarded to the EDA firms. Similar approaches could be taken toward licensing American IP, or even physically lending SME to countries in need. 

Licensing American research to designated partner countries comes with some risks and challenges. For one, it creates a greater attack surface for Chinese companies hoping to steal software and design processes created in the United States. Preventing such theft is already highly difficult, but the U.S. should extend cooperation in standardizing industrial security practices for strategic industries. 

A recent surge in fab construction in countries such as Singapore and India means that the expansion of the global semiconductor industry is already in motion. The United States can leverage its expertise and research prowess to speed up the growth of wafer production in third countries, while simultaneously countering China’s influence on global supply chains. 

A Semiconductor Reserve? 

The comparison of semiconductors to oil is frequently made and has a key strategic justification: for more than a century, oil was a key input to virtually all industrial processes, from transportation to defense production. Semiconductors now play a similar role, serving as a key ingredient in manufacturing. 

A further ambitious policy to mitigate the harm of Chinese chips is to create a centralized reservoir of semiconductors, akin to the Strategic Petroleum Reserve. Such a reserve would be operated by the Commerce Department and maintain holdings of both leading- and trailing-edge chips, obtained from free dealings on the open market. By taking advantage of bulk pricing and guaranteed, recurring contracts, the government could acquire a large number of semiconductors at reasonable prices, sourced exclusively from American and partner nation foundries. 

In the event of a chip shortage, the United States could sell chips back into the market, allowing key industries to continue to function with a trusted source of secure chips. In the absolute worst case of a geopolitical crisis involving China, a strategic stockpile would create a bulwark for the American defense industry to continue producing armaments during a period of disrupted chip supply. This buffer of time would be intended for domestic and allied production to ramp up and continue to supply security functions. 

Beyond crisis response, direct U.S. participation in the chip market would have a further impact on economic development. By making the U.S. a first-order purchaser of semiconductors at an industrial scale, the United States could create a reliable source of demand for fledgling businesses. The United States could serve as a transitory consumer, buying up excess capacity when demand is weak and ensuring that new foundries are both capable of operation and shielded from attempts by China to smother demand. The direct participation of the U.S. in the global semiconductor market would help kickstart industry in partner countries while providing a further incentive to build collaboration with the United States. 
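The sketch below illustrates the counter-cyclical logic of such a reserve: buy when demand is weak, release during shortages. The thresholds, tranche size, and the "fab utilization" signal are hypothetical placeholders; an actual reserve would be governed by Commerce Department rules and market-specific indicators, not this toy policy.

```python
# Illustrative sketch of the buffer-stock logic described above.
# All parameters are hypothetical assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ChipReserve:
    inventory: int = 0                        # chips currently held by the reserve
    buy_below_utilization: float = 0.75       # buy when fabs have slack capacity
    release_above_utilization: float = 0.95   # sell back into the market during shortages
    tranche: int = 1_000_000                  # chips moved per intervention

    def step(self, fab_utilization: float) -> str:
        """Decide one period's action from observed fab utilization (0.0-1.0)."""
        if fab_utilization < self.buy_below_utilization:
            self.inventory += self.tranche
            return f"buy {self.tranche:,} (demand weak, inventory={self.inventory:,})"
        if fab_utilization > self.release_above_utilization and self.inventory >= self.tranche:
            self.inventory -= self.tranche
            return f"release {self.tranche:,} (shortage, inventory={self.inventory:,})"
        return "hold"

reserve = ChipReserve()
for utilization in (0.70, 0.85, 0.97):
    print(utilization, "->", reserve.step(utilization))
```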

Recommendation #2: Fencing in Chinese Semiconductor Exports 

A second step toward semiconductor independence is to contain Chinese exports, with the goal of reducing China’s access to global markets and starving its industrial machine. 

The most direct approach to reducing demand for Chinese semiconductors is the imposition of tariffs. The U.S. consumer market is a potent economic force. By squeezing Chinese manufacturers seeking to compete in the U.S. market, the United States can avoid feeding additional production capacity that might be weaponized in a future conflict. These tariffs can take a variety of forms and justifications, from increased probes into Chinese labor standards and human rights practices to dumping investigations pursued at the WTO. The deep challenge of effective tariffs is how to enforce these tariffs once they come into play and how to coordinate them with international partners. 

Broad Tariffs, Deep Impact 

No rule works without an enforcement mechanism, and in the worst case, a strong public stance against Chinese semiconductors that is not effectively implemented may actually weaken U.S. credibility and embolden the Chinese government. Therefore, it is imperative to have unambiguous rules on trade restrictions, with a strong enforcement mechanism to match. 

These measures should not just apply to chips that are bought directly from China but rather include those that are assembled and packaged in third countries to circumvent U.S. tariffs. The maximal interpretation of the tariffs mandate would further include a calculated tariff on products that make use of Chinese semiconductors as an intermediate input. 

In the case of semiconductors made in China but assembled, tested, or packaged in other countries, we suggest expanding the Biden Administration’s 50% tariff on Chinese semiconductors to cover all chips and all consumer or industrial products that incorporate a wafer manufactured in the People’s Republic of China, with the duty assessed on the wafer’s international market rate. That is, if an Indonesian car manufacturer purchases a wafer manufactured in China with a market value of $3,000 and uses it to build a $35,000 car, importing this vehicle into the United States would be subject to an additional tax of $1,500. 
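The snippet below simply works through the example above to make the mechanism explicit: the duty is assessed on the market value of the embedded Chinese-made wafer, not on the price of the finished product. The function name is purely illustrative.

```python
# Worked version of the example above: the tariff applies to the market value
# of the embedded Chinese-made wafer, not to the finished product's price.
TARIFF_RATE = 0.50  # expansion of the existing 50% tariff on Chinese semiconductors

def component_tariff(chinese_wafer_value: float, rate: float = TARIFF_RATE) -> float:
    """Additional duty owed on an imported good containing a Chinese wafer."""
    return rate * chinese_wafer_value

car_price = 35_000    # finished vehicle assembled in a third country
wafer_value = 3_000   # international market rate of the Chinese wafer inside it
print(component_tariff(wafer_value))  # -> 1500.0, matching the $1,500 in the text
```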

While fears abound of the inflationary effects of additional tariffs, they are necessary for the creation of an incentive structure that properly contains Chinese manufacturing. In the absence of proportional tariffs on chips and products assembled outside China, Chinese fabs will be able to circumvent U.S. trade restrictions by boosting wafer production that then takes advantage of assembly, test, and packaging in third countries. Further, it is imperative for the United States to not only restrict Chinese chip growth but to encourage the development of domestic and foreign non-China chip manufacturers. Imposing tariffs on Chinese chips as an intermediate ingredient is necessary to create a proper competitive environment. Ultimately, the goal is to ensure a diversification of fab locations beyond China that will create lower prices for consumers overall. 

How would tariffs on final goods containing Chinese chips be enforced? The policy issue of sanctioning and restricting an intermediate product is, unfortunately, not new. It is well known that Chinese precursor chemicals, often imported into Mexico, form much of the raw inputs for deadly fentanyl that is driving the United States opioid epidemic. Taking a cue from this example, we further suggest the creation of an internationally maintained database of products manufactured using Chinese semiconductors. As inspiration, the National Institutes of Health / NCATS maintains the Global Substance Registration System, a database that categorizes chemical substances, along with their commonly used names, regulatory classification, and relationships with other chemicals. Such a database could be administered by the Commerce Department’s Bureau of Industry and Security, allowing the personnel who enforce the tariffs to also collect all relevant information in one place. 

Companies importing products into the U.S. would be required to register the make and model of all Chinese chips used in each of their products so that the United States and participating countries could impose corresponding sanctions. Products imported to the U.S. would be subject to random checks involving disassembly in Commerce Department workshops, with failure to report a sanctioned semiconductor component making a company subject to additional tariffs and fines. Manual disassembly is painstaking and difficult, but regular, randomized inspections of imported products are the only way to truly verify their content. 
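As a rough illustration of what the registry described above might record for each import, the sketch below models one declaration and the check a random teardown would enable. The field names, country code, and compliance logic are illustrative assumptions, not a specification of any existing Commerce Department or Bureau of Industry and Security system.

```python
# Minimal sketch of one record in the proposed import registry (hypothetical
# schema; not an actual BIS system). Requires Python 3.9+.
from dataclasses import dataclass, field

@dataclass
class ChipComponent:
    make: str
    model: str
    fab_country: str          # country where the wafer was fabricated, e.g. "CN"
    market_value_usd: float

@dataclass
class ImportDeclaration:
    importer: str
    product_model: str
    components: list[ChipComponent] = field(default_factory=list)

    def declared_prc_value(self) -> float:
        """Total declared value of PRC-fabricated chips, the base for the tariff."""
        return sum(c.market_value_usd for c in self.components if c.fab_country == "CN")

def undeclared_prc_chip_found(declaration: ImportDeclaration,
                              teardown: list[ChipComponent]) -> bool:
    """True if a random teardown found PRC chips the importer failed to declare."""
    declared = {(c.make, c.model) for c in declaration.components if c.fab_country == "CN"}
    return any(c.fab_country == "CN" and (c.make, c.model) not in declared for c in teardown)
```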

The maintenance of such a database would bring follow-on national security benefits, in that the disclosure of any future vulnerability in a Chinese electronic component would allow quick diagnosis of what systems, including critical infrastructure, might be immediately vulnerable. We believe that creating an enforcement infrastructure that coordinates information between the U.S. and partner countries is a necessary first step to ensuring that tariffs are effective. 

Zone Defense: International Cooperation in Semiconductor Tariffs 

At first glance, tariff action by the United States on Chinese-produced goods would appear to be a difficult coordination problem. By voluntarily declining an influx of cheaply-priced goods, American consumers exacerbate an existing trade glut in world semiconductor markets, allowing and incentivizing other nations to purchase these chips in greater volume and at a lower price. 

However, rather than dissuading further sanctions in world markets, tariffs may actually spur further coordination in blocking Chinese imports. The Biden Administration’s imposition of tariffs on Chinese electric vehicles coincided with parallel sanctions imposed by the European Union, India, and Brazil. As Chinese overcapacity in EVs is rejected by U.S. markets, other countries face intensified concerns about the potential for below-price “dumping” of products that could harm domestic industry. 

However, this ad-hoc international cooperation is still in a fragile and tentative stage and must be encouraged in order to create an “everywhere but China” semiconductor supply chain. Further, while countries impose tariffs to protect existing automotive and steel industries, global semiconductor manufacturing is currently concentrated in the Indo-Pacific. Thus, coordinating against China calls on countries not just to impose tariffs to protect existing industries, but to impose “nursery” tariffs that will protect nascent semiconductor development, even where a domestic industry does not yet exist. 

A commonsense first step to building an atmosphere of trust is to take actions protecting partner countries from retaliation in the form of Chinese trade restrictions. In response to EU tariffs on Chinese EVs, Beijing has already threatened retaliatory restrictions on chicken feet, pork, and brandy. For a bloc as large as the European Union, these restrictive sanctions can irritate an important political constituency. For a smaller or less economically-powerful country, these measures might be decisive in sending the message that semiconductor tariffs are not worth the risk. 

The United States should negotiate bilateral treaties with partner nations to erect tariffs against Chinese manufacturing, with the agreement that, in the case of Chinese retaliation against predetermined fundamental national industries, the United States government will buy up excess capacity at slightly discounted prices and release it to the American market. This preemptive protection of allied trade will blunt China’s ability to harm U.S. partners and allies. Raising tariffs on imported goods also imposes costs on the Chinese consumer, meaning that in the best case, the decreased effectiveness of these tools will deter the PRC from attempting such measures in the first place. 

Recommendation #3: Mitigating the Threat of Existing Chips 

No matter the success of the previous measures, it will be impossible to keep Chinese products entirely outside the U.S. market. Therefore, a strategy is required for managing the operational risks posed by Chinese chips that have and will exist inside the U.S. domestic sphere. 

Precisely defining the scope of the threat is very important. A narrow definition might allow threats to pass through, while an overly wide definition may expend time and resources over nothing. A recent British effort to exclude Chinese-made cap badges presents a cautionary tale. By choosing a British supplier over an existing Chinese one after the acquisition process was already underway, the UK incurred an additional delay in its military pipeline, not to mention the additional confusion caused by such an administrative pivot. The notion that one company within the Chinese supply chain could implant GPS trackers or listening devices within small pieces of metal seems both impractical and far-fetched, though the PRC surely enjoys the chaos and expense such a panic can cause. 

We consider it analogously unlikely that China is currently aiming to insert intentional defects into its semiconductor manufacturing. First, individual wafers are optimized for their extremely low cost of production, meaning that inserting a carefully designed (and hidden) flaw would introduce additional costs that could compromise the goal of low-cost manufacturing. Any kind of remotely activated “kill switch” would require some kind of wireless receiver, and a receiver of any reasonable strength could not be effectively hidden on a large scale. Second, such a vulnerability would have to be inserted only into wafers that are eventually purchased by the U.S. and its allies. If not, then any attempt to activate a remote exploit could risk compromising uninvolved countries or even the Chinese domestic market, either by accidentally triggering unintended chips or by providing a hardware vulnerability that could be re-used by Western cyber operations. Deliberately planting such vulnerabilities would thus require not just extreme technical precision, but a careful accounting of where vulnerable chips arrive in the supply chain.

Nonetheless, the existence of Chinese chips in the American market can accomplish much without explicitly-designed defects or “kill switches”. Here, a simple lack of transparency may be enough. China currently requires that all software vulnerabilities be reported to the Ministry of Industry and Information Technology, but does not have any corresponding public reporting requirement. This raises the fear that the Chinese government may be “stockpiling” vulnerabilities in Chinese-produced products, which may be used in future cyber operations. Here, China does not need to explicitly build backdoors into its own hardware but may simply decline to publicly disclose vulnerabilities in software in order to attack the rest of the world. 

Shining a Light on Untrusted Hardware 

The likelihood of cooperation between Chinese industry and the CCP exposes a potentially important risk. Chinese software is often deployed atop or alongside Chinese semiconductors and is particularly dangerous in the form of hardware drivers, the “glue code” that ties together software with low-level hardware components. These drivers by default operate with high privileges, and are typically closed-source and thus difficult to examine. We believe that vulnerable drivers may be a key vector of Chinese espionage or cyber threats. In 2019, Microsoft disclosed the existence of a privilege escalation vulnerability found in a Huawei driver. Although Huawei cooperated with Microsoft, it is unclear under the current legal regime whether the discovery of similar vulnerabilities by Huawei would be reported and patched, or if they would be kept as an asset by the Chinese government. The promulgation of Chinese drivers packaged with cheap hardware thus means that the Chinese Communist Party will have access to a broad, and potentially pre-mapped, attack surface with which to exploit U.S. government services. 

The first policy step here is obvious: banning the use of Chinese chips in U.S. federal government acquisitions. This has already been proposed as a Defense Department regulation set to take effect in 2027. If possible, this date should be moved up to 2026 or earlier. In order to enforce this ban, continuous research should be undertaken to map supply chains that produce U.S. government semiconductors. How to accelerate and enforce this ban is an ongoing policy question that is beyond the scope of this paper. 

However, a deeper question is how to protect the myriad components of critical infrastructure, both formal and informal. The Cybersecurity and Infrastructure Security Agency (CISA) has defined 16 sectors of critical infrastructure whose failure could materially disrupt or endanger the lives of American citizens. The recent discovery of the Volt Typhoon threat group revealed the willingness of the Chinese government to penetrate U.S. critical infrastructure using vulnerable components. 

While some of the 16 CISA sectors, such as Government Services and the Defense Industrial Base, are within the purview of the federal government, many others, such as Healthcare, Food and Agriculture, and Information Technology, are run via complex partnerships between State, Local, Tribal, and Territorial (SLTT) governments and private industry. Although a best effort should be made to insulate these sectors from over-reliance on China, fully quarantining them from Chinese chips is simply unrealistic. Therefore we should explore proactive efforts at mitigation in the case of disruption. 

A first step would be to establish a team at CISA to decompile or reverse-engineer the drivers for Chinese hardware that is known to operate within U.S. critical infrastructure. Like any manual reverse-engineering effort, this is expensive and arduous, but it has the advantage of reducing an unknown or otherwise intractable problem to an engineering task. Special care should be taken to catalog and prioritize the pieces of Chinese hardware that touch the most critical infrastructure systems, such as Programmable Logic Controllers (PLCs) in energy infrastructure and processors in hospital databases. This effort can be coordinated with the threat database described in the previous section so that the drivers of the highest-impact semiconductor products are disassembled and profiled first. If vulnerabilities are found, warnings can be issued to critical infrastructure providers and patches distributed to the relevant parties.
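To make the prioritization step concrete, the following sketch ranks hypothetical driver targets by a simple weighted score combining sector criticality, deployment prevalence, and known vulnerabilities. The fields, weights, and example targets are illustrative assumptions; an operational version would draw its inputs from the threat database described earlier.

```python
# Illustrative triage sketch: rank driver reverse-engineering targets by a
# weighted score over sector criticality, deployment prevalence, and known
# vulnerabilities. Fields, weights, and example targets are assumptions.
from dataclasses import dataclass

@dataclass
class DriverTarget:
    name: str
    sector_criticality: int  # e.g. 3 = energy PLC, 2 = hospital systems, 1 = other
    deployments: int         # estimated installs across U.S. critical infrastructure
    known_cves: int          # vulnerabilities already tied to this product line

def priority(t: DriverTarget) -> float:
    # Hand-picked weights for illustration only.
    return t.sector_criticality * 20 + t.deployments / 10_000 + t.known_cves * 5

targets = [
    DriverTarget("plc-vendor-driver", 3, 40_000, 2),
    DriverTarget("hospital-db-controller", 2, 15_000, 0),
    DriverTarget("office-peripheral", 1, 120_000, 1),
]
for t in sorted(targets, key=priority, reverse=True):
    print(f"{t.name}: score={priority(t):.1f}")
```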

Brace for Impact: Building Infrastructure Resiliency 

Even if neither the reduction of Chinese hardware nor the proactive search for driver vulnerabilities succeeds in preventing a Chinese attack, the United States should be prepared to mitigate the harms of a cyber crisis.

One step toward this goal would be to institute resiliency protocols and drills for designated critical infrastructure providers. The 2017 WannaCry ransomware attack substantially incapacitated the UK National Health Service by locking providers out of Electronic Medical Record (EMR) systems. Mandating routine paper backups of digital medical records is one example of a resiliency strategy that could ensure emergency functioning even during a major service disruption.

A further step to protect critical infrastructure is to mandate regular cyber training for infrastructure providers. CISA could work with State, Local, Tribal, and Territorial regulatory bodies to identify critical pieces of infrastructure that could be subject to attack. It could then develop hypothetical scenarios involving outages of critical Information Technology services and work with local infrastructure providers, such as hospitals, municipal water services, and transit providers, to plan how to continue operating in the event of a crisis. CISA could also prepare baseline strategies, such as maintaining non-internet-connected control systems or offline backups of critical information, which individual infrastructure providers could adapt to best protect their services in the event of an attack. These plans could then be carried out in mock ‘cyber drills’ to exercise preparedness ahead of an incident.
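As one small, concrete example of the offline-backup baseline above, the sketch below verifies that an offline backup still matches the checksums recorded when it was made, a check that could be folded into routine cyber drills. The paths and manifest format are assumptions for illustration.

```python
# Minimal sketch, assuming a simple JSON manifest of per-file SHA-256 checksums
# recorded when the backup was made: verify that an offline backup still matches
# those checksums before it is relied on during a drill or a real outage.
# Paths and manifest format are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(backup_dir: str, manifest_file: str) -> bool:
    """Compare each backed-up file against the checksum recorded at backup time."""
    manifest = json.loads(Path(manifest_file).read_text())  # {"relative/path": "sha256 hex"}
    ok = True
    for rel_path, expected in manifest.items():
        if sha256_of(Path(backup_dir) / rel_path) != expected:
            print(f"MISMATCH: {rel_path}")
            ok = False
    return ok

# Example invocation (hypothetical paths):
# verify_backup("/mnt/offline_backup/records", "/mnt/offline_backup/manifest.json")
```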

Ultimately, plans of this kind only prepare for service disruptions; they do not address the longer-term impacts of breaches of confidentiality or the targeted manipulation of sensitive data. However, because we believe the likelihood of targeted or sophisticated vulnerabilities in Chinese chips is relatively low, cruder, disruption-style attacks are the most likely threat model. Preparing for the most basic and unsophisticated service disruptions is a good first step toward mitigating the harm of any potential cyber attack, including those not directly facilitated by Chinese hardware. Cyber-resiliency planning of this kind is therefore a strong general recommendation for protecting Americans from future threats.

Conclusion

We have presented the issue of international semiconductor competition along three major axes: increasing production outside of China, containing an oversupply of Chinese semiconductors, and mitigating the risks of remaining Chinese chips in the U.S. market. We have proposed three slates of policies, with each corresponding to one of the three challenges:

Boosting non-China semiconductor production 

Containing Chinese exports 

Mitigating the threat of chips in the U.S. market 

We hope that this contribution will advance future discussions on the semiconductor trade and make a measurable impact on bolstering U.S. national security.