Increasing Responsible Data Sharing Capacity throughout Government

Deriving insights from data is essential for effective governance. However, collecting and sharing data—if not managed properly—can pose privacy risks for individuals. Current scientific understanding shows that so-called “anonymization” methods that have been widely used in the past are inadequate for protecting privacy in the era of big data and artificial intelligence. The evolving field of Privacy-Enhancing Technologies (PETs), including differential privacy and secure multiparty computation, offers a way forward for sharing data safely and responsibly.

The administration should prioritize the use of PETs by integrating them into data-sharing processes and strengthening the executive branch’s capacity to deploy PET solutions.

Challenge and Opportunity

A key function of modern government is the collection and dissemination of data. This role of government is enshrined in Article 1, Section 2 of the U.S. Constitution in the form of the decennial census—and has only increased with recent initiatives to modernize the federal statistical system and expand evidence-based policymaking. The number of datasets has also grown; there are now over 300,000 datasets on data.gov, covering everything from border crossings to healthcare. The release of these datasets not only accomplishes important transparency goals, but also represents an important step toward making American society fairer, as data are a key ingredient in identifying policies that benefit the public.

Unfortunately, the collection and dissemination of data comes with significant privacy risks. Even with access to aggregated information, motivated attackers can extract information specific to individual data subjects and cause concrete harm. A famous illustration of this risk occurred in 1997 when Latanya Sweeney was able to identify the medical record of then-Governor of Massachusetts, William Weld, from a public, anonymized dataset. Since then, the power of data re-identification techniques—and incentives for third parties to learn sensitive information about individuals—have only increased, compounding this risk. In a democratic nation that respects civil rights, it is irresponsible for government agencies to continue to collect and disseminate datasets without careful consideration of the privacy implications of data sharing.

While there may appear to be an irreconcilable tension between facilitating data-driven insight and protecting the privacy of individuals’ data, an emerging scientific consensus shows that Privacy-Enhancing Technologies (PETs) offer a path forward. PETs are a collection of techniques that enable data to be used while tightly controlling the risk incurred by individual data subjects. One particular PET, differential privacy (DP), was recently used by the U.S. Census Bureau within their disclosure avoidance system for the 2020 decennial census in order to meet their dual mandates of data release and confidentiality. Other PETs, including variations of secure multiparty computation, have been used experimentally by other agencies, including to link long-term income data to college records and understand mental health outcomes for individuals who have earned doctorates. The National Institute of Standards and Technology (NIST) has produced frameworks and reports on data and information privacy, including PETs topics such as DP (see Q&A section). However, these reports still lack a comprehensive, actionable framework for how organizations should consider, use, and deploy PETs.

As artificial intelligence becomes more prevalent inside and outside government and relies on increasingly large datasets, the need for responsible data sharing is growing more urgent. The federal government is uniquely positioned to foster responsible innovation and set a strong example by promoting the use of PETs. The use of DP in the 2020 decennial census was an extraordinary example of the government’s capacity to lead global innovation in responsible data sharing practices. While the promise of continuing this trend is immense, expanding the use of PETs within government poses twin challenges: (1) sharing data within government comes with unique challenges—both technical and legal—that are only starting to be fully understood and (2) expertise on using PETs within government is limited. In this proposal, we outline a concrete plan to overcome these challenges and unlock the potential of PETs within government.

Plan of Action

Using PETs when sharing data should be a key priority for the executive branch. The new administration should encourage agencies to consider the use of PETs when sharing data and build a United States DOGE Service (USDS) “Responsible Data Sharing Corps” of professionals who can provide in-house guidance around responsible data sharing.

We believe that enabling data sharing with PETs requires (1) gradual, iterative refinement of norms and (2) increased capacity in government. With these in mind, we propose the following recommendations for the executive branch.

Strategy Component 1. Build consideration of PETs into the process of data sharing

Recommendation 1. NIST should produce a decision-making framework for organizations to rely on when evaluating the use of PETs.

NIST should provide a step-by-step decision-making framework for determining the appropriate use of PETs within organizations, including whether PETs should be used, and if so, which PET and how it should be deployed. Specifically, this guidance should be at the same level of granularity as the NIST Risk Management Framework for Cybersecurity. NIST should consult with a range of stakeholders from the broad data sharing ecosystem to create this framework. This includes data curators (i.e., organizations that collect and share data, within and outside the government); data users (i.e., organizations that consume, use and rely on shared data, including government agencies, special interest groups and researchers); data subjects; experts across fields such as information studies, computer science, and statistics; and decision makers within public and private organizations who have prior experience using PETs for data sharing. The report may build on NIST’s existing related publications and other guides for policymakers considering the use of specific PETs, and should provide actionable guidance on factors to consider when using PETs. The output of this process should be not only a decision, but also a report documenting the execution of the decision-making framework (which will be instrumental for Recommendation 3).

Recommendation 2. The Office of Management and Budget (OMB) should require government agencies that share data to use the NIST decision-making framework developed in Recommendation 1 to determine whether PETs are appropriate for protecting their data pipelines.

The risks to data subjects associated with data releases can be significantly mitigated with the use of PETs, such as differential privacy. Along with considering other mechanisms of disclosure control (e.g., tiered access, limiting data availability), agencies should investigate the feasibility and tradeoffs around using PETs to protect data subjects while sharing data for policymaking and public use. To that end, OMB should require government agencies to use the decision-making framework produced by NIST (in Recommendation 1) for each instance of data sharing. We emphasize that this decision-making process may lead to a decision not to use PETs, as appropriate. Agencies should compile the produced reports such that they can be accessed by OMB as part of Recommendation 3.

Recommendation 3. OMB should produce a PET Use Case Inventory and annual reports that provide insights on the use of PETs in government data-sharing contexts.

To promote transparency and shared learning, agencies should share the reports produced as part of their PET deployments and associated decision-making processes with OMB. Using these reports, OMB should (1) publish a federal government PET Use Case Inventory (similar to the recently established Federal AI Use Case Inventory) and (2) synthesize these findings into an annual report. These findings should provide high-level insights into the decisions that are being made across agencies regarding responsible data sharing, and highlight the barriers to adoption of PETs within various government data pipelines. These reports can then be used to update the decision-making frameworks we propose that NIST should produce (Recommendation 1) and inspire further technical innovation in academia and the private sector.

Strategy Component 2. Build capacity around responsible data sharing expertise 

Increasing in-depth decision-making around responsible data sharing—including the use of PETs—will require specialized expertise. While there are some government agencies with teams well-trained in these topics (e.g., the Census Bureau and its team of DP experts), expertise across government is still lacking. Hence, we propose a capacity-building initiative that increases the number of experts in responsible data sharing across government.

Recommendation 4. Announce the creation of a “Responsible Data Sharing Corps.”

We propose that the USDS create a “Responsible Data Sharing Corps” (RDSC). This team will be composed of experts in responsible data sharing practices and PETs. RDSC experts can be deployed into other government agencies as needed to support decision-making about data sharing. They may also be available for as-needed consultations with agencies to answer questions or provide guidance around PETs or other relevant areas of expertise.

Recommendation 5. Build opportunities for continuing education and training for RDSC members.

Given the evolving nature of responsible data practices, including the rapid development of PETs and other privacy and security best practices, members of the RDSC should have 20% effort reserved for continuing education and training. This may involve taking online courses or attending workshops and conferences that describe state-of-the-art PETs and other relevant technologies and methodologies.

Recommendation 6. Launch a fellowship program to maintain the RDSC’s cutting-edge expertise in deploying PETs.

Finally, to ensure that the RDSC stays at the cutting edge of relevant technologies, we propose an RDSC fellowship program similar to or part of the Presidential Innovation Fellows. Fellows may be selected from academia or industry, but should have expertise in PETs and propose a novel use of PETs in a government data-sharing context. During their one-year terms, fellows will perform their proposed work and bring new knowledge to the RDSC.

Conclusion

Data sharing has become a key priority for the government in recent years, but privacy concerns make it critical to modernize technology for responsible data use in order to leverage data for policymaking and transparency. PETs such as differential privacy, secure multiparty computation, and others offer a promising way forward. However, deploying PETs at a broad scale requires changing norms and increasing capacity in government. The executive branch should lead these efforts by encouraging agencies to consider PETs when making data-sharing decisions and by building a “Responsible Data Sharing Corps” that can provide expertise and support for agencies in this effort. By encouraging the deployment of PETs, the government can increase the fairness, utility, and transparency of data while protecting itself—and its data subjects—from privacy harms.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What are the concrete risks associated with data sharing?

Data sharing requires a careful balance of multiple factors, with privacy and utility being particularly important.



  • Data products released without appropriate and modern privacy protection measures could facilitate abuse, as attackers can weaponize information contained in these data products against individuals, e.g., by blackmailing, stalking, or publicly harassing those individuals.

  • On the other hand, the lack of accessible data can also cause harm due to reduced utility: various actors, such as state and local government entities, may have limited access to accurate or granular data, resulting in the inefficient allocation of resources to small or marginalized communities.

What are some examples of PETs to consider?

Privacy-Enhancing Technologies form a broad umbrella category that includes many different technical tools. Leading examples of these tools include differential privacy, secure multiparty computation, trusted execution environments, and federated learning. Each of these technologies is designed to address different privacy threats. For additional information, we suggest the UN Guide on Privacy-Enhancing Technologies for Official Statistics and the ICO’s resources on Privacy-Enhancing Technologies.

What NIST publications are relevant to PETs?

NIST has multiple publications related to data privacy, such as the Risk Management Framework for Cybersecurity and the Privacy Framework. The report De-Identifying Government Datasets: Techniques and Governance focuses on responsible data sharing by government organizations, while the Guidelines for Evaluating Differential Privacy Guarantees provides a framework that any organization can use to assess the level of privacy protection afforded by differential privacy.

What is differential privacy (DP)?

Differential privacy is a framework for controlling the amount of information leaked about individuals during a statistical analysis. Typically, random noise is injected into the results of the analysis to hide individual people’s specific information while maintaining overall statistical patterns in the data. For additional information, we suggest Differential Privacy: A Primer for a Non-technical Audience.
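To make the mechanism concrete, below is a minimal illustrative sketch (in Python) of the Laplace mechanism, one standard way differential privacy is implemented for counting queries. The survey data, threshold, and epsilon value are hypothetical and chosen only for illustration; real deployments, such as the Census Bureau’s, use far more elaborate systems.

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0):
    """Differentially private count of values above a threshold, using the
    Laplace mechanism. Adding or removing one person's record changes the
    true count by at most 1 (sensitivity = 1), so noise drawn from
    Laplace(scale = 1/epsilon) masks any individual's contribution."""
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical survey data: how many respondents report income above $50,000?
incomes = [32_000, 58_000, 47_500, 91_000, 65_000, 28_000]
print(dp_count(incomes, threshold=50_000, epsilon=0.5))
```

Smaller values of epsilon add more noise (stronger privacy, lower accuracy), which is the core tradeoff agencies would weigh when deciding how to release statistics.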

What is secure multiparty computation (MPC)?

Secure multiparty computation is a technique that allows several actors to jointly aggregate information while protecting each actor’s data from disclosure. In other words, it allows parties to jointly perform computations on their data while ensuring that each party learns only the result of the computation. For additional information, we suggest Secure Multiparty Computation FAQ for Non-Experts.
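The sketch below illustrates one building block of MPC, additive secret sharing, in which each party splits its value into random shares that individually reveal nothing but together reconstruct the sum. The agencies, caseload figures, and three-party setup are hypothetical; production MPC protocols add substantially more machinery (secure channels, protections against malicious parties, and so on).

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it modulo PRIME.
    Any subset of fewer than n shares looks uniformly random and reveals
    nothing about the secret."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Hypothetical example: three agencies compute their combined caseload
# without any single agency revealing its own number.
caseloads = [1_200, 875, 2_430]
all_shares = [share(c) for c in caseloads]

# Each computing party locally sums the one share it holds from every agency...
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
# ...and only the combined total is ever reconstructed.
print(reconstruct(partial_sums))  # 4505
```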

How have privacy-enhancing technologies been used in government before, domestically and internationally?

There are multiple examples of PET deployments at both the federal and local levels, both domestically and internationally. We list several examples below, and refer interested readers to the in-depth reports by the Advisory Committee on Data for Evidence Building (report 1 and report 2):



  • The Census Bureau used differential privacy in their disclosure avoidance system to release results from the 2020 decennial census data. Using differential privacy allowed the bureau to provide formal disclosure avoidance guarantees as well as precise information about the impact of this system on the accuracy of the data.

  • The Boston Women’s Workforce Council (BWWC) measures wage disparities among employers in the greater Boston area using secure multiparty computation (MPC).

  • The Israeli Ministry of Health publicly released its National Live Birth Registry using differential privacy.

  • Privacy-preserving record linkage, a variant of secure multiparty computation, has been used experimentally by both the U.S. Department of Education and the National Center for Health Statistics. Additionally, it has been used at the county level in Allegheny County, PA.


Additional examples can also be found in the UN’s case-study repository of PET deployments.

What type of expertise is required to deploy PETs solutions?

Data-sharing projects are not new to the government, and pockets of relevant expertise—particularly in statistics, software engineering, subject matter areas, and law—already exist. Deploying PET solutions requires technical computer science expertise for building and integrating PETs into larger systems, as well as sociotechnical expertise in communicating the use of PETs to relevant parties and facilitating decision-making around critical choices.

Reforming the Federal Advisory Committee Landscape for Improved Evidence-based Decision Making and Increasing Public Trust

Federal Advisory Committees (FACs) are the single point of entry for the American public to provide consensus-based advice and recommendations to the federal government. These Advisory Committees are composed of experts from various fields who serve as Special Government Employees (SGEs), attending committee meetings, writing reports, and voting on potential government actions.

Advisory Committees are needed for the federal decision-making process because they provide agencies with additional expertise and in-depth knowledge on complex topics, aid the government in gathering information from the public, and give the public the opportunity to participate in meetings about an Agency’s activities. As currently organized, however, FACs are not equipped to provide the best evidence-based advice. Many do not meet transparency requirements set forth by GAO: they make pertinent decisions outside of public meetings, report inaccurate cost data, fail to make official meeting documents publicly available online, and more. FACs have also experienced difficulty recruiting and retaining top talent to assist with decision making. For these reasons, it is critical that FACs be reformed and equipped with the necessary tools to continue providing the government with the best evidence-based advice. Specifically, reforms should address issues such as: (1) decreasing the burden of hiring special government employees, (2) simplifying the financial disclosure process, (3) increasing understanding of reporting requirements and conflict of interest processes, (4) expanding training for Advisory Committee members, (5) broadening the roles of Committee chairs and designated federal officials, (6) increasing public awareness of Advisory Committee roles, (7) engaging the public outside of official meetings, (8) standardizing representation from Committee representatives, (9) ensuring that Advisory Committees are meeting per their charters, and (10) bolstering Agency budgets for critical Advisory Committee issues.

Challenge and Opportunity

Protecting the health and safety of the American public and ensuring that the public has the opportunity to participate in the federal decision-making process are crucial. We must evaluate the operations and activities of federal agencies that solicit evidence-based advice and feedback from experts through federal Advisory Committees (FACs). These Committees are instrumental in facilitating transparent and collaborative deliberation between the federal government, the advisory body, and the American public, a function no other mechanism provides. Advisory Committee recommendations are integral to strengthening public trust and reinforcing the credibility of federal agencies. Nonetheless, public trust in government has been waning, and efforts should be made to restore it. Public trust is a pillar of democracy and fosters cooperation between parties, particularly when one party is external to the federal government. Advisory Committees, when appropriately used, can therefore help increase public trust and ensure compliance with the law.

There have also been many success stories demonstrating the benefits of Advisory Committees. When Advisory Committees are appropriately staffed based on their charge, they can decrease the workload of federal employees, assist with developing policies for some of our most challenging issues, involve the public in the decision-making process, and more. However, the state of Advisory Committees and the need for reform have come under question, even more so as we transition to a new administration. Advisory Committees have contributed to improvements in the quality of life for some Americans through scientific advice, as well as the monitoring of cybersecurity. For example, an FDA Advisory Committee reviewed data and saw promising results for the treatment of sickle cell disease (SCD), a debilitating disease that has had limited treatment options for years. The Committee voted in favor of the gene therapy drugs Casgevy and Lyfgenia, which were the first to be approved by the FDA for SCD.

Under the first Trump administration, Executive Order (EO) 13875 resulted in a significant decrease in the number of federal advisory meetings, limiting agencies’ ability to convene external advisors. Federal science advisory committees met less often during this administration than under any prior administration and less often than their charters required; longstanding Advisory Committees were disbanded; and scientists receiving agency grants were barred from serving on Advisory Committees. Federal Advisory Committee membership also decreased by 14%, demonstrating the difficulty of recruiting and retaining top talent. The disbandment of Advisory Committees, the exclusion of key external scientific experts, and burdensome procedures can trigger severe consequences that affect the health and safety of Americans.

Going into a second Trump administration, it is imperative that Advisory Committees have the opportunity to assist federal agencies with the evidence-based advice needed to make critical decisions that affect the American public. The suggested reforms that follow can work to improve the overall operations of Advisory Committees while still providing the government with necessary evidence-based advice. With successful implementation of the following recommendations, the federal government will be able to reduce administrative burden on staff through the recruitment, onboarding, and conflict of interest processes. 

The U.S. Open Government Initiative encourages public and community engagement in governmental affairs. However, individual Agencies can and should do more to engage the public. This policy memo identifies several areas of potential reform for Advisory Committees and aims to provide recommendations for improving the overall process without compromising Agency or Advisory Committee membership integrity.

Plan of Action

The proposed plan of action identifies several policy recommendations to reform the federal Advisory Committee (Advisory Committee) process, improving both operations and efficiency. Successful implementation of these policies will (1) improve the Advisory Committee member experience, (2) increase transparency in federal government decision-making, and (3) bolster trust between the federal government, its Advisory Committees, and the public.

Streamline Joining Advisory Committees

Recommendation 1. Decrease the burden of hiring special government employees in an effort to (1) reduce the administrative burden for the Agency and (2) encourage Advisory Committee members, who are also known as special government employees (SGEs), to continue providing the best evidence-based advice to the federal government by reducing onerous procedures

The Ethics in Government Act of 1978 and Executive Order 12674 list OGE-450 reporting as the required confidential financial disclosure for covered executive branch employees and special government employees. This Act gives the Office of Government Ethics (OGE) the authority to implement and regulate a financial disclosure system for executive branch and special government employees whose duties carry a “heightened risk of potential or actual conflicts of interest.” Nonetheless, the reporting process becomes onerous when Advisory Committee members have to complete the OGE-450 before every meeting, even if their information remains unchanged. This presents a challenge for Advisory Committee members who wish to continue serving but are burdened by time constraints. The process also burdens the federal staff who manage the financial disclosure system.

Policy Pathway 1. Increase funding for enhanced federal staffing capacity to undertake excessive administrative duties for financial reporting.

Policy Pathway 2. All federal agencies that deploy Advisory Committees can conduct a review of the current OGE-450 process, budget support for this process, and work to develop an electronic process that eliminates the use of forms and allows participants to select dropdown options indicating whether their financial interests have changed.

Recommendation 2. Create and use public platforms such as OpenPayments by CMS to (1) aid in simplifying the financial disclosure reporting process and (2) increase transparency for disclosure procedures

Federal agencies should create a financial disclosure platform that streamlines the process and allows Advisory Committee members to submit their disclosures and easily make updates. This system should also be designed to monitor and compare financial conflicts. In addition, agencies that utilize the expertise of Advisory Committees for drugs and devices should identify additional ways in which they can promote financial transparency. These agencies can use Open Payments, a system operated by the Centers for Medicare & Medicaid Services (CMS), to “promote a more financially transparent and accountable healthcare system.” The Open Payments system makes payments from drug and medical device companies to healthcare providers and teaching hospitals accessible to the public. If financial disclosure forms are ever called into question, the Open Payments platform can act as a check in identifying potential financial interests of Advisory Committee members. A further step to simplify the financial disclosure process would be to utilize conflict of interest software such as Ethico, a comprehensive tool that allows for customizable disclosure forms, disclosure analytics for comparisons, and process automation.

Policy Pathway. The Office of Government Ethics should require all federal agencies that operate Advisory Committees to develop their own financial disclosure systems and to add a due-diligence step to the financial disclosure reporting process, such as reviewing CMS’s Open Payments system for potential financial conflicts or deploying conflict of interest monitoring software to streamline the process.

Streamline Participation in an Advisory Committee

Recommendation 3. Increase understanding of annual reporting requirements for conflict of interest (COI)

Agencies should develop guidance that explicitly states the roles of Ethics Officers, also known as Designated Agency Ethics Officials (DAEOs), within the federal government. Helping Advisory Committee members and the public understand these roles and responsibilities will reduce the spread of misinformation regarding the purpose of Advisory Committees. In addition, the Office of Government Ethics should encourage agencies to develop guidance that indicates the criteria for inclusion in or exclusion from participation in Committee meetings. Currently, there is no public guidance that states what types of conflicts of interest are granted waivers for participation. Full disclosure of selection and approval criteria will improve transparency with the public and clearly delineate how Agencies determine who is eligible to participate.

Policy Pathway. Develop conflict of interest (COI) and financial disclosure guidance specifically for SGEs that states under what circumstances SGEs are allowed to receive waivers for participation in Advisory Committee meetings.

Recommendation 4. Expand training for Advisory Committee members to include (1) ethics and (2) criteria for making good recommendations to policymakers

Training should be expanded for all federal Advisory Committee members to include ethics training which details the role of Designated Agency Ethics Officials, rules and regulations for financial interest disclosures, and criteria for making evidence-based recommendations to policymakers. Training for incoming Advisory Committee members ensures that all members have the same knowledge base and can effectively contribute to the evidence-based recommendations process.

Policy Pathway. Agencies should collaborate with the OGE and Agency Heads to develop comprehensive training programs for all incoming Advisory Committee members to ensure an understanding of ethics as contributing members, best practices for providing evidence-based recommendations, and other pertinent areas that are deemed essential to the Advisory Committee process.

Leverage Advisory Committee Membership

Recommendation 5. Uplift the roles of Committee Chairs and Designated Federal Officials

Expanding the roles of Committee Chairs and Designated Federal Officers (DFOs) may assist federal Agencies with recruiting and retaining top talent and maximizing the Committee’s ability to stay abreast of critical public concerns. Because the General Services Administration must be consulted on the formation, renewal, or alteration of Committees, it can be instrumental in this change.

Policy Pathway. The General Services Administration (GSA) should encourage federal Agencies to collaborate with Committee Chairs and DFOs to recruit permanent and ad hoc Committee members who may have broad network reach and community ties that will bolster trust amongst Committees and the public. 

Recommendation 6. Clarify intended roles for Advisory Committee members and the public

There are misconceptions among the public and Advisory Committee members about Advisory Committee roles and responsibilities. There is also ambiguity regarding the types of Advisory Committee roles, such as serving as ad hoc members, consulting, providing feedback on policies, or making recommendations.

Policy Pathway. GSA should encourage federal Agencies to develop guidance that delineates the differences between permanent and temporary Advisory Committee members, as well as their roles and responsibilities depending on if they’re providing feedback for policies or providing recommendations for policy decision-making.

Recommendation 7. Utilize and engage expertise and the public outside of public meetings

In an effort to continue receiving the best evidence-based advice, federal Agencies should develop alternate ways to receive advice outside of public Committee meetings. Allowing additional opportunities for engagement and feedback from Committee experts or the public will allow Agencies to expand their knowledge base and gather information from the communities their decisions will affect.

Policy Pathway. The General Services Administration should encourage federal Agencies to create opportunities outside of scheduled Advisory Committee meetings to engage Committee members and the public on areas of concern and interest as one form of engagement. 

Recommendation 8. Standardize representation from Committee representatives (i.e., industry), as well as representation limits

The Federal Advisory Committee Act (FACA) does not specify the types of expertise that should be represented on all federal Advisory Committees, but allows for many types of expertise. Incorporating various sets of expertise that are representative of the American public will ensure the government is receiving the most accurate, innovative, and evidence-based recommendations for issues and products that affect Americans. 

Policy Pathway. Congress should include standardized language in the FACA stating that all federal Advisory Committees should include various sets of expertise depending on their charge. This change should then be enforced by the GSA.

Support a Vibrant and Functioning Advisory Committee System

Recommendation 9. Decrease the burden of creating an Advisory Committee and make sure Advisory Committees are meeting per their charters

The process of establishing an Advisory Committee should be simplified to curtail onerous procedures that delay the government’s receipt of evidence-based advice.

Advisory Committee charters state the purpose of Advisory Committees, their duties, and all aspirational aspects. These charters are developed by agency staff or DFOs with consultation from their agency Committee Management Office. Charters are needed to forge the path for all FACs.

Policy Pathway. Designated Federal Officers (DFOs) within federal agencies should work with their Agency head to review and modify the steps for establishing FACs. Eliminate the requirement that FACs obtain consultation and/or approval from GSA for the formation, renewal, or alteration of Advisory Committees.

Recommendation 10. Bolster agency budgets to support FACs on critical issues where regular engagement and trust building with the public is essential for good policy

Federal Advisory Committees are an essential source of evidence-based recommendations that help guide decisions at all stages of the policy process. These Advisory Committees are oftentimes the single entry point for external experts and the public to comment on and participate in the decision-making process. However, FACs take considerable resources to operate, depending on the frequency of meetings, the number of Advisory Committee members, and supporting agency staff. Without proper appropriations, agencies have a diminished ability to recruit and retain top talent for Advisory Committees. The Government Accountability Office (GAO) reported that in 2019, approximately $373 million was spent to operate a total of 960 federal Advisory Committees. Some Agencies have experienced a decrease in the number of Advisory Committee convenings. Individual Agency heads should conduct a budget review of average operating and projected costs and develop proposals for increased funding to submit to the relevant Appropriations Committee.

Policy Pathway. Congress should consider increasing appropriations to support FACs so they can continue to enhance federal decision-making, improve public policy, and boost public credibility and Agency morale.

Conclusion

Advisory Committees are necessary to the federal evidence-based decision-making ecosystem. Enlisting the advice and recommendations of experts, while also including input from the American public, allows the government to continue making decisions that will truly benefit its constituents. Nonetheless, there are areas of FACs that can be improved to ensure they continue to support a participatory, evidence-based process. Additional funding is needed to compensate the appropriate Agency staff for Committee support, provide potential incentives for experts who are volunteering their time, and finance other expenditures.

Frequently Asked Questions
How will Federal Advisory Committees (Advisory Committees) increase government efficiency?

With reform of Advisory Committees, the process for receiving evidence-based advice will be streamlined, allowing the government to receive this advice in a faster and less burdensome manner. Reform will be implemented by reducing the administrative burden for federal employees through the streamlining of recruitment, financial disclosure, and reporting processes.

A Federal Center of Excellence to Expand State and Local Government Capacity for AI Procurement and Use

The administration should create a federal center of excellence for state and local artificial intelligence (AI) procurement and use—a hub for expertise and resources on public sector AI procurement and use at the state, local, tribal, and territorial (SLTT) government levels. The center could be created by expanding the General Services Administration’s (GSA) existing Artificial Intelligence Center of Excellence (AI CoE). As new waves of AI technologies enter the market, shifting both practice and policy, such a center of excellence would help bridge the gap between existing federal resources on responsible AI and the specific, grounded challenges that individual agencies face. In the decades ahead, new AI technologies will touch an expanding breadth of government services—including public health, child welfare, and housing—vital to the wellbeing of the American people. Such a federal center would equip public sector agencies with sustainable expertise and set a consistent standard for responsible AI procurement and use. This resource would ensure that AI truly enhances services, protects the public interest, and builds public trust in AI-integrated state and local government services.

Challenge and Opportunity 

State, local, tribal, and territorial (SLTT) governments provide services that are critical to the welfare of our society, including housing, child support, healthcare, credit lending, and teaching. SLTT governments are increasingly interested in using AI to assist with providing these services. However, they face immense challenges in responsibly procuring and using new AI technologies. While grappling with limited technical expertise and budget constraints, SLTT government agencies considering or deploying AI must navigate data privacy concerns, anticipate and mitigate biased model outputs, ensure model outputs are interpretable to workers, and comply with sector-specific regulatory requirements, among other responsibilities.

The emergence of foundation models (large AI systems adaptable to many different tasks) for public sector use exacerbates these existing challenges. Technology companies are now rapidly developing new generative AI services tailored towards public sector organizations. For example, earlier this year, Microsoft announced that Azure OpenAI Service would be newly added to Azure Government—a set of AI services that target government customers. These types of services are not specifically created for public sector applications and use contexts, but instead are meant to serve as a foundation for developing specific applications. 

For SLTT government agencies, these generative AI services blur the line between procurement and development: Beyond procuring specific AI services, we anticipate that agencies will increasingly be tasked with the responsible use of general AI services to develop specific AI applications. Moreover, recent AI regulations suggest that responsibility and liability for the use and impacts of procured AI technologies will be shared by the public sector agency that deploys them, rather than just resting with the vendor supplying them.

Given these shifts across products, practice, and policy, SLTT agencies must be well-equipped with the responsible procurement practices and accountability mechanisms pivotal to moving forward. Federal agencies have started to provide guidelines for responsible AI procurement (e.g., Executive Order 13960, OMB-M-21-06, NIST RMF). But research shows that SLTT governments need additional support to apply these resources: whereas existing federal resources provide high-level, general guidance, SLTT government agencies must navigate a host of challenges that are context-specific (e.g., specific to regional laws, agency practices, etc.). SLTT government agency leaders have voiced a need for individualized support in accounting for these context-specific considerations when navigating procurement decisions.

Today, private companies are promising state and local government agencies that using their AI services can transform the public sector. They describe diverse potential applications, from supporting complex decision-making to automating administrative tasks. However, there is minimal evidence that these new AI technologies can improve the quality and efficiency of public services. There is evidence, on the other hand, that AI in public services can have unintended consequences, and when these technologies go wrong, they often worsen the problems they are aimed at solving, for example by increasing disparities in decision-making when attempting to reduce them.

Challenges to responsible technology procurement follow a historical trend: Government technology has frequently been critiqued for failures in the past decades. Because public services such as healthcare, social work, and credit lending have such high stakes, failures in these areas can have far-reaching consequences. They also entail significant financial costs, with millions of dollars wasted on technologies that ultimately get abandoned. Even when subpar solutions remain in use, agency staff may be forced to work with them for extended periods despite their poor performance.

The new administration is presented with a critical opportunity to redirect these trends. Training each relevant individual within SLTT government agencies, or hiring new experts within each agency, is not cost- or resource-effective. Without appropriate training and support from the federal government, AI adoption is likely to be concentrated in well-resourced SLTT agencies, leaving others with fewer resources (and potentially those serving more low-income communities) behind. This could lead to disparate AI adoption and practices among SLTT agencies, further exacerbating existing inequalities. The administration urgently needs a plan that supports SLTT agencies in learning how to handle responsible AI procurement and use—developing sustainable knowledge about how to navigate these processes over time—without requiring that each relevant individual in the public sector be trained. This plan also needs to ensure that, over time, the public sector workforce is transformed in its ability to navigate complicated AI procurement processes and relationships, without requiring constant retraining of each new wave of workers.

In the context of federal and SLTT governments, a federal center of excellence for state and local AI procurement would accomplish these goals through a “hub and spoke” model. This center of excellence would serve as the “hub,” housing a small number of selected experts from academia, non-profit organizations, and government. These experts would then train “spokes”—existing state and local public sector agency workers—in navigating responsible procurement practices. To support public sector agencies in learning from each other’s practices and challenges, this federal center of excellence could additionally create communication channels for information- and resource-sharing across state and local agencies.

Procured AI technologies in government will serve as the backbone of local public services for decades to come. Upskilling government agencies to make smart decisions about which AI technologies to procure (and which are best avoided) would not only protect the public from harmful AI systems but would also save the government money by decreasing the likelihood of adopting expensive AI technologies that end up getting dropped. 

Plan of Action 

A federal center of excellence for state and local AI procurement would ensure that procured AI technologies are responsibly selected and used to serve as a strong and reliable backbone for public sector services. This federal center of excellence can support both intra-agency and inter-agency capacity-building and learning about AI procurement and use—that is, mechanisms to support expertise development within a given public sector agency and between multiple public sector agencies. This federal center of excellence would not be deliberative (i.e., SLTT governments would receive guidance and support but would not have to seek approval on their practices). Rather, the goal would be to upskill SLTT agencies so they are better equipped to navigate their own AI procurement and use endeavors. 

To upskill SLTT agencies through intra-agency capacity-building, the federal center of excellence would house experts in relevant domain areas (e.g., responsible AI, public interest technology, and related topics). Fellows would work with cohorts of public sector agencies to provide training and consultation services. These fellows, who would come from government, academia, and civil society, would build on their existing expertise and experiences with responsible AI procurement, integrating new considerations proposed by federal standards for responsible AI (e.g., Executive Order 13960, OMB-M-21-06, NIST RMF). The fellows would serve as advisors to help operationalize these guidelines into practical steps and strategies, helping to set a consistent bar for responsible AI procurement and use practices along the way.

Cohorts of SLTT government agency workers, including existing agency leaders, data officers, and procurement experts, would work together with an assigned advisor to receive consultation and training support on specific tasks that their agency is currently facing. For example, for agencies or programs with low AI maturity or familiarity (e.g., departments that are beginning to explore the adoption of new AI tools), the center of excellence can help navigate the procurement decision-making process, help them understand their agency-specific technology needs, draft procurement contracts, select amongst proposals, and negotiate plans for maintenance. For agencies and programs with high AI maturity or familiarity, the advisor can train program staff to recognize and mitigate unexpected AI behaviors as they arise. These communication pathways would allow federal agencies to better understand the challenges state and local governments face in AI procurement and maintenance, which can help seed ideas for improving existing resources and creating new resources for AI procurement support.

To scaffold inter-agency capacity-building, the center of excellence can build the foundations for cross-agency knowledge-sharing. In particular, it would include a communication platform and an online hub of procurement resources, both shared amongst agencies. The communication platform would allow state and local government agency leaders who are navigating AI procurement to share challenges, lessons learned, and tacit knowledge to support each other. The online hub of resources can be populated by the center of excellence and SLTT government agencies. Through the online hub, agencies can upload and learn about new responsible AI resources and toolkits (e.g., those created by government and the research community), as well as examples of procurement contracts that agencies themselves have used.

To implement this vision, the new administration should expand the U.S. General Services Administration’s (GSA) existing Artificial Intelligence Center of Excellence (AI CoE), which provides resources and infrastructural support for AI adoption across the federal government. We propose expanding this existing AI CoE to include the components of our proposed center of excellence for state and local AI procurement and use. This would direct support towards SLTT government agencies—which are currently unaccounted for in the existing AI CoE—specifically via our proposed capacity-building model.

Over the next 12 months, the goals of expanding the AI CoE would be three-fold:

1. Develop the core components of our proposed center of excellence within the AI CoE. 

2. Launch collaborations with a first cohort of SLTT government agencies, focusing on building a path for successful collaborations.

3. Build a path for the proposed center of excellence to grow and gain experience. If the first few collaborations receive strong reviews, design a strategy for scaling the center’s reach.

Conclusion

Expanding the existing AI CoE to include our proposed federal center of excellence for AI procurement and use can help ensure that SLTT governments are equipped to make informed, responsible decisions about integrating AI technologies into public services. This body would provide necessary guidance and training, helping to bridge the gap between high-level federal resources and the context-specific needs of SLTT agencies. By fostering both intra-agency and inter-agency capacity-building for responsible AI procurement and use, this approach builds sustainable expertise, promotes equitable AI adoption, and protects public interest. This ensures that AI enhances—rather than harms—the efficiency and quality of public services. As new waves of AI technologies continue to enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence will help maintain high standards for responsible public sector AI for decades to come.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What is existing guidance for responsible SLTT procurement and use of AI technologies?

Federal agencies have published numerous resources to support responsible AI procurement, including Executive Order 13960, OMB-M-21-06, and the NIST RMF. Some of these resources provide guidance on responsible AI development in organizations broadly, across the public, private, and non-profit sectors. For example, the NIST RMF provides organizations with guidelines to identify, assess, and manage risks in AI systems to promote the deployment of more trustworthy and fair AI systems. Others focus on public sector AI applications. For instance, the OMB Memorandum published by the Office of Management and Budget describes strategies for federal agencies to follow responsible AI procurement and use practices.

Why a federal center? Can’t SLTT governments do this on their own?

Research describes how these resources often require additional skills and knowledge, which makes it challenging for agencies to use them effectively on their own. A federal center of excellence for state and local AI procurement could help agencies learn to use these resources. Adapting these guidelines to specific SLTT agency contexts necessitates careful interpretation, which may in turn require specialized expertise or resources. The creation of this federal center of excellence to guide responsible SLTT procurement on the ground can help bridge this critical gap. Fellows in the center of excellence and SLTT procurement agencies can draw on this existing pool of guidance to build a strong foundation for their practices.

How has this “hub and spoke” model been used before?

The hub and spoke model has been used across a range of applications to support efficient management of resources and services. For instance, in healthcare, providers have used the hub and spoke model to organize their network of services; specialized, intensive services would be located in “hub” healthcare establishments whereas secondary services would be provided in “spoke” establishments, allowing for more efficient and accessible healthcare services. Similar organizational networks have been followed in transportation, retail, and cybersecurity. Microsoft follows a hub and spoke model to govern responsible AI practices and disseminate relevant resources. Microsoft has a single centralized “hub” within the company that houses responsible AI experts—those with expertise on the implementation of the company’s responsible AI goals. These responsible AI experts then train “spokes”—workers residing in product and sales teams across the company, who learn about best practices and support their team in implementing them.

Who would be the experts selected as fellows by the center of excellence? What kind of training would they receive?

During their training, fellows would build a stronger foundation in (1) the on-the-ground challenges and practices that public sector agencies grapple with when developing, procuring, and using AI technologies and (2) the existing AI procurement and use guidelines provided by federal agencies. The content of the training would be drawn from syntheses of prior research on public sector AI procurement and use challenges, as well as existing federal resources available to guide responsible AI development. For example, prior research has explored public sector challenges to supporting algorithmic fairness and accountability and responsible AI design and adoption decisions, amongst other topics.


The experts who would serve as fellows for the federal center of excellence would be individuals with expertise and experience studying the impacts of AI technologies and designing interventions to support more responsible AI development, procurement, and use. Given the interdisciplinary nature of the expertise required for the role, individuals should have an applied, socio-technical background on responsible AI practices, ideally (but not necessarily) for the public sector. The individual would be expected to have the skills needed to share emerging responsible AI practices, strategies, and tacit knowledge with public sector employees developing or procuring AI technologies. This covers a broad range of potential backgrounds.

What are some examples of the skills or competencies fellows might bring to the Center?

For example, a professor in academia who studies how to develop public sector AI systems that are more fair and aligned with community needs may be a good fit. A socio-technical researcher in civil society with direct experience studying or developing new tools to support more responsible AI development, who has intuition over which tools and practices may be more or less effective, may also be a good candidate. A data officer in a state government agency who has direct experience procuring and governing AI technologies in their department, with an ability to readily anticipate AI-related challenges other agencies may face, may also be a good fit. The cohort of fellows should include a balanced mix of individuals coming from government, academia, and civil society.

Strengthening Information Integrity with Provenance for AI-Generated Text Using ‘Fuzzy Provenance’ Solutions

Synthetic text generated by artificial intelligence (AI) can pose significant threats to information integrity. When users accept deceptive AI-generated content—such as large-scale false social media posts by malign foreign actors—as factual, national security is put at risk. One way to help mitigate this danger is by giving users a clear understanding of the provenance of the information they encounter online. 

Here, provenance refers to any verifiable indication of whether text was generated by a human or by AI, for example by using a watermark. However, given the limitations of watermarking AI-generated text, this memo also introduces the concept of fuzzy provenance, which involves identifying exact text matches that appear elsewhere on the internet. As these matches will not always be available, the descriptor “fuzzy” is used. While this information will not always establish authenticity with certainty, it offers users additional clues about the origins of a piece of text.

To ensure platforms can effectively provide this information to users, the National Institute of Standards and Technology (NIST)’s AI Safety Institute should develop guidance on how to display to users both provenance and fuzzy provenance—where available—within no more than one click. To expand the utility of fuzzy provenance, NIST could also issue guidance on how generative AI companies could allow the records of their free AI models to be crawled and indexed by search engines, thereby making potential matches to AI-generated text easier to discover. Tradeoffs surrounding this approach are explored further in the FAQ section.

By creating a reliable, user-friendly framework for surfacing these details, NIST would empower readers to better discern the trustworthiness of the text they encounter, thereby helping to counteract the risks posed by deceptive AI-generated content.

Challenge and Opportunity

Synthetic Text and Information Integrity

In the past two years, generative AI models have become widely accessible, allowing users to produce customized text simply by providing prompts. As a result, there has been a rapid proliferation of “synthetic” text—AI-generated content—across the internet. As NIST’s Generative Artificial Intelligence Profile notes, this means that there is a “[l]owered barrier of entry to generated text that may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large scale dis- and mis-information campaigns.”

Information integrity risks stemming from synthetic text—particularly when generated for non-creative purposes—can pose a serious threat to national security. For example, in July 2024 the Justice Department disrupted Russian generative-AI-enabled disinformation bot farms. These Russian bots produced synthetic text, including in the form of social media posts by fake personas, meant to promote messages aligned with the interests of the Russian government. 

Provenance Methods For Reducing Information Integrity Risks

NIST has an opportunity to provide community guidance to reduce the information integrity risks posed by all types of synthetic content. The main solution currently being considered by NIST for reducing the risks of synthetic content in general is provenance, which refers to whether a piece of content was generated by AI or a human. As described by NIST, provenance is often ascertained by creating a non-fungible watermark, or a cryptographic signature, for a piece of content. The watermark is permanently associated with the piece of content. Where available, provenance information is helpful because knowing the origin of text can help a user know whether to rely on the facts it contains. For example, an AI-generated news report may currently be less trustworthy than a human-written news report because the former is more prone to fabrications.

However, there are currently no methods widely accepted as effective for determining the provenance of synthetic text. As NIST’s report, Reducing Risks Posed by Synthetic Content, details, “[t]he effectiveness of synthetic text detection is subject to ongoing debate” (Sec. 3.2.2.4). Even if a piece of text is originally AI-generated with a watermark (e.g., by generating words with a distinctive statistical pattern), people can easily paraphrase the text (especially via AI) without transferring the original watermark. Text watermarks are also vulnerable to adversarial attacks, with malicious actors able to mimic the watermark signature and make text appear watermarked when it is not.
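
To make the statistical-pattern idea above concrete, the sketch below illustrates one simplified way a “green list” style text watermark could be detected. This is an illustrative assumption, not NIST guidance or any vendor’s actual scheme: real watermarks operate on model tokens with secret keys, whereas this toy version hashes adjacent word pairs. It also shows why paraphrasing defeats detection, since replacing words changes which ones count as “green.”

```python
import hashlib
import math

def is_green(prev_word: str, word: str, green_share: float = 0.5) -> bool:
    """Toy stand-in for a watermark's green-list test: hash the (previous word,
    current word) pair and treat the low bits as a pseudo-random draw. A real
    scheme would partition the model's token vocabulary using a secret key."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    draw = int.from_bytes(digest[:8], "big") / 2**64
    return draw < green_share

def watermark_z_score(text: str, green_share: float = 0.5) -> float:
    """Count how many words fall on their 'green list' and compare with chance.
    Text generated to prefer green words yields a large positive z-score;
    ordinary or heavily paraphrased text should not."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(prev, cur, green_share) for prev, cur in pairs)
    n = len(pairs)
    expected = n * green_share
    std = math.sqrt(n * green_share * (1 - green_share))
    return (hits - expected) / std

if __name__ == "__main__":
    sample = "this is ordinary human-written text with no embedded statistical pattern"
    print(round(watermark_z_score(sample), 2))  # typically small in magnitude for unwatermarked text
```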

Plan of Action

To capture the benefits of provenance, while mitigating some of its weaknesses, NIST should issue guidance on how platforms can make available to users both provenance and “fuzzy provenance” of text. Fuzzy provenance is coined here to refer to exact text matches on the internet, which can sometimes reflect provenance but not necessarily (thus “fuzzy”). Optionally, NIST could also consider issuing guidance on how generative AI companies can make their free models’ records available to be crawled and indexed by search engines, so that fuzzy provenance information would show text matches with generative AI model records. There are tradeoffs to this recommendation, which is why it is optional; see FAQs for further discussion. Making both provenance and fuzzy provenance information available (in no more than one click) will give users more information to help them evaluate how trustworthy a piece of text is and reduce information integrity risks. 

Combined Provenance and Fuzzy Provenance Approach

Figure 1. Mock implementation of combined provenance and fuzzy provenance
The above image captures what an implementation of the combined provenance and fuzzy provenance guidance might include. When a user highlights a piece of text that is sufficiently long, they can click “learn more about this text” to find more information.

There are ways to communicate provenance and fuzzy provenance so that they are both useful and easy to understand. In the concept above showing the provenance of text, for example:

Benefits of the Combined Approach

Showing both provenance and fuzzy provenance information provides users with critical context to evaluate the trustworthiness of a piece of text. Between provenance and fuzzy provenance, users would have access to information about many pieces of high-impact text, especially claims that could be particularly harmful for individuals, groups, or society at large. Making all this information immediately available also reduces friction for users so that they can get this information right where they encounter text.

Provenance information can be helpful to provide to users when it is available. For instance, knowing that a tech support company’s website description was AI-generated may encourage users to check other sources (like reviews) to see if the company is a real entity (and AI was used just to generate the description) or a fake entity entirely, before giving a deposit to hire the company (see user journey 1 in this video for an example).

Where clear provenance information is not available, fuzzy provenance can help fill the gap by providing valuable context to users in several ways:

Fuzzy provenance is also effective because it shows context and gives users autonomy to decide how to interpret that context. Academic studies have found that users tend to be more receptive to additional information they can use in their own critical thinking than to a conclusion shown directly (like a label), which can backfire or be misinterpreted. This is why users may trust contextual methods like crowdsourced information more than provenance labels.

Finally, fuzzy provenance methods are generally feasible at scale, since they can be easily implemented with existing search engine capabilities (via an exact text match search). Furthermore, since fuzzy provenance only relies on exact text matching with other sources on the internet, it works without needing coordination among text-producers or compliance from bad actors. 
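
As a rough illustration of how lightweight the lookup can be, the sketch below builds an exact-phrase (quoted) query URL for a passage a user has highlighted. The endpoint name is a placeholder assumption; a platform would point this at whatever verbatim-match search API it already uses.

```python
import urllib.parse

# Placeholder endpoint: a real deployment would call whatever verbatim/exact-match
# search API the platform already has access to.
SEARCH_ENDPOINT = "https://search.example.com/api"

def fuzzy_provenance_query(highlighted_text: str, max_chars: int = 300) -> str:
    """Build an exact-phrase query URL for a passage a user has highlighted.
    Quoting the passage asks the engine for verbatim matches only, which is
    all that fuzzy provenance relies on."""
    snippet = " ".join(highlighted_text.split())[:max_chars]  # normalize whitespace, cap length
    params = urllib.parse.urlencode({"q": f'"{snippet}"'})
    return f"{SEARCH_ENDPOINT}?{params}"

print(fuzzy_provenance_query("Example sentence a user highlighted on a platform."))
```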

Conclusion

To reduce the information integrity risks posed by synthetic text in a scalable and effective way, the National Institute of Standards and Technology (NIST) should develop community guidance on how platforms hosting text-based digital content can make accessible (in no more than one click) the provenance and “fuzzy provenance” of a piece of text, when available. NIST should also consider issuing guidance on how AI companies could make their free generative AI records available to be crawled by search engines, to amplify the effectiveness of “fuzzy provenance”.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Making free generative AI records available to be crawled by search engines includes tradeoffs, which is why it is an optional recommendation to consider. Below are some questions regarding implementation guidance and tradeoffs, including privacy and proprietary considerations.

Frequently Asked Questions on the optional free generative AI record recommendation
What are some examples of implementation guidance for AI model companies?

Guidance could instruct AI model companies how to make their free generative AI conversation records available to be crawled and indexed by search engines. Similar to ChatGPT logs or Perplexity threads, a unique URL would be created for each conversation, capturing the date it occurred. The key difference is that all free-model conversation records would be made available, but with only the AI outputs of each conversation included, after removing personally identifiable information (PII) (see the privacy question below). Because users can already choose to share conversations with each other (meaning the conversation logs are retained), and conversation logs for major model providers do not currently appear to have an expiration date, this requirement should not impose an additional storage burden on AI model companies.
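
As one hypothetical way a provider could expose such records for crawling, the sketch below emits a standard sitemap entry per conversation. The URL pattern and field choices are assumptions for illustration, not any company’s actual scheme.

```python
from datetime import date
from xml.sax.saxutils import escape

def sitemap_entry(conversation_id: str, conversation_date: date) -> str:
    """Emit one sitemap <url> entry for a free-model conversation record so that
    search engines can discover and index it. The URL pattern is hypothetical."""
    url = f"https://chat.example.com/share/{conversation_id}"
    return (
        "<url>"
        f"<loc>{escape(url)}</loc>"
        f"<lastmod>{conversation_date.isoformat()}</lastmod>"
        "</url>"
    )

print(sitemap_entry("abc123", date(2025, 2, 1)))
# <url><loc>https://chat.example.com/share/abc123</loc><lastmod>2025-02-01</lastmod></url>
```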

What are some examples of implementation guidance for search engines?

Guidance could instruct search engines how to crawl and index these model logs so that queries with exact text matches to the AI outputs would surface the appropriate model logs. This would not be very different from search engines crawling and indexing other types of new URLs, and it should be well within existing search engine capabilities. In terms of storage, since only free-model logs would be crawled and indexed, and most free models rate-limit the number of user messages allowed, storage should also not be a concern. For instance, even with 200 million weekly active users for ChatGPT, the number of conversations in a year would only be on the order of billions, which is well within the scale at which existing search engines already operate to let users “search the web”.
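
A back-of-the-envelope calculation supports the order-of-magnitude claim. The per-user conversation rate below is an illustrative assumption; the 200 million weekly active user figure comes from the text above.

```python
# Rough scale estimate for indexing free-model conversation logs.
weekly_active_users = 200_000_000        # figure cited above
conversations_per_user_per_week = 1      # assumed for illustration; free tiers are rate-limited
weeks_per_year = 52

conversations_per_year = weekly_active_users * conversations_per_user_per_week * weeks_per_year
print(f"{conversations_per_year:,}")     # 10,400,000,000 -> on the order of 10 billion URLs per year
```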

How can we ensure user privacy when making free AI model records available?

  • Output filtering should be applied to the AI outputs to remove any personally identifiable information (PII) present in the model’s responses. However, it might still be possible to infer who the original user was by looking at the AI outputs taken together and reconstructing some of the user prompts. This is a privacy concern that should be further investigated. Possible mitigations include additionally removing any location references below a certain granularity (e.g., removing mentions of neighborhoods but retaining mentions of states) and presenting the AI responses in a conversation in randomized order. A minimal sketch of such output filtering appears after this list.

  • Removals should be made possible by a user-initiated process demonstrating privacy concerns, similar to existing search engine removal protocols.

  • User consent would also be an important consideration here. NIST could propose that free-model users must opt in, or that free-model record crawling/indexing be opt-out by default for users, though either approach (especially requiring opt-in) may greatly compromise the reliability of fuzzy provenance.
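
The sketch below (referenced in the output-filtering bullet above) shows a minimal, assumption-laden version of such scrubbing using simple regular expressions; production systems would need far more robust PII detection, such as trained named-entity recognition models.

```python
import re

# Minimal illustrative scrubber for AI outputs before they are exposed to crawlers.
# The patterns below only catch obvious emails and US-style phone numbers; real
# deployments would need more robust PII detection (e.g., trained NER models).
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL REMOVED]"),
    (re.compile(r"(?<!\d)(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}(?!\d)"), "[PHONE REMOVED]"),
]

def scrub_ai_output(text: str) -> str:
    """Replace obvious PII in a model response with placeholders."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub_ai_output("Contact me at jane.doe@example.com or (412) 555-0139."))
# Contact me at [EMAIL REMOVED] or [PHONE REMOVED].
```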

What proprietary tradeoffs should be considered when making free AI model outputs available to be crawled and indexed by search engines?

  • Training on AI-generated text: AI companies are concerned about accidentally picking up too much AI-generated text on the web and training on it instead of higher-quality human-generated text, thus degrading the quality of their own generative models. However, because the conversation logs would have identifiable domain prefixes (e.g., chatgpt.com, perplexity.ai), it would be easy to exclude them from training data if desired. Indeed, provenance and fuzzy provenance may help AI companies avoid unintentionally training on AI-generated text.

  • Sharing model outputs: On the flip side, AI companies might be concerned that making so many AI-generated model outputs available for competitors to access could help competitors improve their own models. This is a fair concern, though it is partially mitigated by the facts that a) specific user inputs would not be available, only the AI outputs, and b) only free-model outputs would be logged, rather than those of any premium models, providing some proprietary protection. However, it is still possible that competitors could enhance their own responses by training at scale on the structure of AI outputs from other models.

Tending Tomorrow’s Soil: Investing in Learning Ecosystems

“Tending soil.”

That’s how Fred Rogers described Mister Rogers’ Neighborhood, his beloved television program that aired from 1968 to 2001. Grounded in principles gleaned from top learning scientists, the Neighborhood offered a model for how “learning ecosystems” can work in tandem to tend the soil of learning. 

Today, a growing body of evidence suggests that Rogers’ model was not only effective, but that real-life learning ecosystems – networks that include classrooms, living rooms, libraries, museums, and more – may be the most promising approach for preparing learners for tomorrow. As such, cities and regions around the world are constructing thoughtfully designed ecosystems that leverage and connect their communities’ assets, responding to the aptitudes, needs, and dreams of the learners they serve. 

Efforts to study and scale these ecosystems at local, state, and federal levels would position the nation’s students as globally competitive, future-ready learners.

The Challenge

For decades, America’s primary tool for “tending soil” has been its public schools, which are (and will continue to be) the country’s best hope for fulfilling its promise of opportunity. At the same time, the nation’s industrial-era soil has shifted. From the way our communities function to the way our economy works, dramatic social and technological upheavals have remade modern society. This incongruity – between the world as it is and the world that schools were designed for – has blunted the effectiveness of education reforms; heaped systemic, society-wide problems on individual teachers; and shortchanged the students who need the most support.

“Public education in the United States is at a crossroads,” notes a report published by the Alliance for Learning Innovation, Education Reimagined, and Transcend: “to ensure future generations’ success in a globally competitive economy, it must move beyond a one-size-fits-all model towards a new paradigm that prioritizes innovation that holds promise to meet the needs, interests, and aspirations of each and every learner.”

What’s needed is the more holistic paradigm epitomized by Mister Rogers’ Neighborhood: a collaborative ecosystem that sparks engaged, motivated learners by providing the tools, resources, and relationships that every young person deserves.

The Opportunity

With components both public and private, virtual and natural, “learning ecosystems” found in communities around the world reflect today’s connected, interdependent society. These ecosystems are not replacements for schools – rather, they embrace and support all that schools can be, while also tending to the vital links between the many places where kids and families learn: parks, libraries, museums, afterschool programs, businesses, and beyond. The best of these ecosystems function as real-life versions of Mister Rogers’ Neighborhood: places where learning happens everywhere, both in and out of school. Where every learner can turn to people and programs that help them become, as Rogers used to say, “the best of whoever you are.”

Nearly every community contains the components of effective learning ecosystems. The partnerships forged within them can – when properly tended – spark and spread high-impact innovations; support collaboration among formal and informal educators; provide opportunities for young people to solve real-world problems; and create pathways to success in a fast-changing modern economy. By studying and investing in the mechanisms that connect these ecosystems, policymakers can build “neighborhoods” of learning that prepare students for citizenship, work, and life.

Plan of Action

Learning ecosystems can be cultivated at every level. Whether local, state, or federal, interested policymakers should:

Establish a commission on learning ecosystems. Tasked with studying learning ecosystems in the U.S. and abroad, the commission would identify best practices and recommend policy that 1) strengthens an area’s existing learning ecosystems and/or 2) nurtures new connections. Launched at the federal, state, or local level and led by someone with a track record of getting things done, the commission should include representatives from various sectors, including early childhood educators, K-12 teachers and administrators, librarians, researchers, CEOs and business leaders, artists, makers, and leaders from philanthropic and community-based organizations. The commission will help identify existing activities, research, and funding for learning ecosystems and will foster coordination and collaboration to maximize the effectiveness of ecosystems’ resources.

A 2024 report by Knowledge to Power Catalysts notes that these cross-sector commissions are increasingly common at various levels of government, from county councils to city halls. As policymakers establish interagency working groups, departments of children and youth, and networks of human services providers, “such offices at the county or municipal level often play a role in cross-sector collaboratives that engage the nonprofit, faith, philanthropic, and business communities as well.”

Pittsburgh’s Remake Learning ecosystem, for example, is steered by the Remake Learning Council, a blue-ribbon commission of Southwestern Pennsylvania leaders from education, government, business, and the civic sector committed to “working together to support teaching, mentoring, and design – across formal and informal educational settings – that spark creativity in kids, activating them to acquire knowledge and skills necessary for navigating lifelong learning, the workforce, and citizenship.”

Establish a competitive grant program to support pilot projects. These grants could seed new ecosystems and/or support innovation among proven ecosystems. (Several promising ecosystems are operating throughout the country already; however, many are excluded from funding opportunities by narrowly focused RFPs.) This grant program can be administered by the commission to catalyze and strengthen learning ecosystems at the federal, state, or local levels. Such a program could be modeled after:

Host a summit on learning ecosystems. Leveraging the gravitas of a government and/or civic institution such as the White House, a governor’s mansion, or a city hall, bring members of the commission together with learning ecosystem leaders and practitioners, along with cross-sector community leaders. A summit will underscore promising practices, share lessons learned, and highlight monetary and in-kind commitments to support ecosystems. The summit could leverage for learning ecosystems the philanthropic commitments model developed and used by previous presidential administrations to secure private and philanthropic support. Visit remakelearning.org/forge to see an example of one summit’s schedule, activities, and grantmaking opportunities.

Establish an ongoing learning ecosystem grant program for scaling and implementing lessons learned. This grant program could be administered at the federal, state, or local level – by a city government, for example, or by partnerships like the Appalachian Regional Commission. As new learning ecosystems form and existing ones evolve, policymakers should continue to provide grants that support learning ecosystem partnerships between communities that allow innovations in one city or region to take root in another. 

Invest in research, publications, convenings, outreach, and engagement efforts that highlight local ecosystems and make their work more visible, especially for families. The ongoing grant program can include funding for opportunities that elevate the benefits of learning ecosystems. Events such as Remake Learning Days – an annual festival billed as “the world’s largest open house for teaching and learning” and drawing an estimated 300,000 attendees worldwide – build demand for learning ecosystems among parents, caregivers, and community leaders, ensuring grassroots buy-in and lasting change.

This memo was developed in partnership with the Alliance for Learning Innovation, a coalition dedicated to advocating for building a better research and development infrastructure in education for the benefit of all students. 

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
How do learning ecosystems benefit students?

Within a learning ecosystem, students aren’t limited to classrooms, schools, or even their own districts – nor do they have to travel far to find opportunities that light them up. By blurring the lines between “in school” and “out of school,” ecosystems make learning more engaging, more relevant, and even more joyful. Pittsburgh’s Remake Learning ecosystem, for example, connects robotics professionals with classroom teachers to teach coding and STEM. Librarians partner with teaching artists to offer weeklong deep dives into topics attractive to young people. A school district launches a program – say, a drone academy for girls – and opens it up to learners from neighboring districts.


As ecosystems expand to include more members, the partnerships formed within them spark exciting, ever-evolving opportunities for learners.

How do learning ecosystems benefit communities?

Within an ecosystem, learning isn’t just for young people. An ecosystem’s out-of-school components – businesses, universities, makerspaces, and more – bring real-world problems directly to learners, leading to tangible change in communities and a more talented, competitive future workforce. In greater Washington, D.C., for example, teachers partner with cultural institutions to develop curricula based on students’ suggestions for improving the city. In Kansas City, high schoolers partner with entrepreneurs and health care professionals to develop solutions for everything from salmonella poisoning to ectopic pregnancy. And in Pittsburgh, public school students are studying cybersecurity, training for aviation careers, conducting cutting-edge cancer research, and more.

How do learning ecosystems benefit educators?

Learning ecosystems also support educators. In Pittsburgh, educators involved in Remake Learning note that “they feel celebrated and validated in their work,” writes researcher Erin Gatz. Moreover, the ecosystem’s “shared learning and supportive environment were shown to help educators define or reinforce their professional identity.”

How do learning ecosystems benefit local economies?

Learning ecosystems can aid local economies, too. In eastern Kentucky, an ecosystem of school districts, universities, and economic development organizations empowers students to reimagine former coal land for entrepreneurial purposes. And in West Virginia, an ecosystem of student-run companies has helped the state recover from natural disasters.

Where are examples of learning ecosystems already operating in the United States?

Since 2007, Pittsburgh’s Remake Learning has emerged as the most talked-about learning ecosystem in the world. Studied by scholars, recognized by heads of state, and expanding to include more than 700 schools, libraries, museums, and other sites of learning, Remake Learning has – through nearly two decades of stewardship – inspired more than 40 additional learning ecosystems. Meanwhile, the network’s Moonshot Grants are seeding new ecosystems across the nation and around the world.

What inspiration can we draw from globally?

Global demand for learning ecosystems is growing. A 2020 report released by HundrED, a Finland-based nonprofit, profiles 16 of the most promising examples operating in the United States. Likewise, the World Innovation Summit for Education explores nine learning ecosystems operating worldwide: “Across the globe, there is a growing consensus that education demands radical transformation if we want all citizens to become future-ready in the face of a more digitally enabled, uncertain, and fast-changing world,” the summit notes. “Education has the potential to be the greatest enabler of preparing everyone, young and old, for the future, yet supporting learning too often remains an issue for schools alone.”

What about public schools?

Learning ecosystems support collaboration and community among public schools, connecting classrooms, schools, and educators across diverse districts. Within Remake Learning, for example, a cohort of 42 school districts works together – and in partnership with afterschool programs, health care providers, universities, and others – to make Western Pennsylvania a model for the future of learning.


The cohort’s collaborative approach has led to a dazzling array of awards and opportunities for students: A traditional classroom becomes a futuristic flight simulator. A school district opens its doors to therapy dogs and farm animals. Students in dual-credit classes earn college degrees before they’ve even finished high school. Thanks in part to the ecosystem’s efforts, Western Pennsylvania is now recognized as home to the largest cluster of nationally celebrated school districts in the country.

I’m interested in starting or supporting a learning ecosystem in my community. Where do I start?

As demand for learning ecosystems continues to gather momentum, several organizations have released playbooks and white papers designed to guide policymakers, practitioners, and other interested parties. Helpful resources include:


What are some additional resources?

In addition, Remake Learning has released three publications that draw on more than twenty years of “tending soil.” The publications share methods and mindsets for navigating some of the most critical questions that face ecosystems’ stewards:


Protecting Infant Nutrition Security:
Shifting the Paradigm on Breastfeeding to Build a Healthier Future for all Americans

The health and wellbeing of American babies have been put at risk in recent years, and we can do better. Recent events have revealed deep vulnerabilities in our nation’s infant nutritional security. For example, pandemic-induced disruptions in maternity care practices that support the establishment of breastfeeding, the infant formula recall and resulting shortage, and a spate of weather-related natural disasters have demonstrated infrastructure gaps and a lack of resilience to safety and supply chain challenges. All put babies in danger during times of crisis.

Breastfeeding is foundational to lifelong health and wellness, but systemic barriers prevent many families from meeting their breastfeeding goals. The policies and infrastructure surrounding postpartum families often limit their ability to succeed in breastfeeding. Despite important benefits, new data from the CDC shows that while 84.1% of infants start out breastfeeding, these numbers fall dramatically in the weeks after birth, with only 57.5% of infants breastfeeding exclusively at one month of age. Disparities persist across geographic location and other sociodemographic factors, including race/ethnicity, maternal age, and education. Breastfeeding rates in North America are the lowest in the world. Longstanding evidence shows that it is not a lack of desire but rather a lack of support, access, and resources that creates these barriers.

This administration has an opportunity to take a systems approach to increasing support for breastfeeding and making parenting easier for new mothers. Key policy changes to address systemic barriers include providing guidance to states on expanding Medicaid coverage of donor milk, building breastfeeding support and protection into the existing emergency response framework at the Federal Emergency Management Agency, and expressing support for establishing a national paid leave program. 

Policymakers on both sides of the aisle agree that no baby should ever go hungry, as evidenced by the bipartisan passage of recent breastfeeding legislation (detailed below) and widely supported regulations. However, significant barriers remain. This administration has the power to address long-standing inequities and set the stage for the next generation of parents and infants to thrive. Ensuring that every family has the support they need to make the best decisions for their child’s health and wellness benefits the individual, the family, the community, and the economy. 

Challenge and Opportunity

Breastfeeding plays an essential role in establishing good nutrition and healthy weight, reducing the risk of chronic disease and infant mortality, and improving maternal and infant health outcomes. Breastfed children have a decreased risk of obesity, type 1 and 2 diabetes, asthma, and childhood leukemia. Women who breastfeed reduce their risk of specific chronic diseases, including type 2 diabetes, cardiovascular disease, and breast and ovarian cancers. On a relational level, the hormones produced while breastfeeding, like oxytocin, enhance the maternal-infant bond and emotional well-being. The American Academy of Pediatrics recommends infants be exclusively breastfed for approximately six months with continued breastfeeding while introducing complementary foods for two years or as long as mutually desired by the mother and child.  

Despite the well-documented health benefits of breastfeeding, deep inequities in healthcare, community, and employment settings impede success. Systemic barriers disproportionately impact Black, Indigenous, and other communities of color, as well as families in rural and economically distressed areas. These populations already bear the weight of numerous health inequities, including limited access to nutritious foods and higher rates of chronic disease—issues that breastfeeding could help mitigate. 

Breastfeeding Saves Dollars and Makes Sense 

Low breastfeeding rates in the United States cost our nation millions of dollars through higher health system costs, lost productivity, and higher household expenditures. Not breastfeeding is associated with economic losses of about $302 billion annually, or 0.49% of world gross national income. At the national level, improving breastfeeding practices through programs and policies is one of the best investments a country can make, as every dollar invested is estimated to result in a $35 economic return.

In the United States, chronic disease management results in trillions of dollars in annual healthcare costs, which increased breastfeeding rates could help reduce. In the workplace setting, employers see significant cost savings when their workers are able to maintain breastfeeding after returning to work. Increased breastfeeding rates are also associated with reduced environmental impact and associated expenses. Savings can be seen at home as well, as following optimal breastfeeding practices reduces household expenditures. Investments in infant nutrition last a lifetime, paying long-term dividends critical for economic and human development. Economists have completed cost-benefit analyses, finding that investments in nutrition are one of the best value-for-money development actions, laying the groundwork for the success of investments in other sectors.

Ongoing trends in breastfeeding outcomes indicate that there are entrenched policy-level challenges and barriers that need to be addressed to ensure that all infants have an opportunity to benefit from access to human milk. Currently, for too many families, the odds are stacked against them. It’s not a question of individual choice but one of systemic injustice. Families are often forced into feeding decisions that do not reflect their true desires due to a lack of accessible resources, support, and infrastructure.

While the current landscape is rife with challenges, the solutions are known and the potential benefits are tremendous. This administration has the opportunity to realize these benefits and implement a smart and strategic response to the urgent situation that our nation is facing just as the political will is at an all-time high. 

The History of Breastfeeding Policy

In the late 1960s and early 1970s, less than 30 percent of infants were breastfed. The concerted efforts of individuals and organizations, and the emergence of the field of lactation support, have worked to counteract or remove many barriers, and policymakers have sent a clear and consistent message that breastfeeding is bipartisan. This is evident in the range of recent lactation-friendly legislation, including: 

Administrative efforts ranging from the Business Case for Breastfeeding to The Surgeon General’s Call to Action to Support Breastfeeding and the armed services updates on uniform requirements for lactating soldiers demonstrate a clear commitment to breastfeeding support across the decades. 

These policy changes have made a difference. But additional attention and investment, with a particular focus on the birth and early postpartum period, as well as during and after emergencies, is needed to secure the potential health and economic benefits of comprehensive societal support for breastfeeding. This administration can take considerable steps toward improving U.S. health and wellness and protecting infant nutrition security.  

Plan of Action

A range of federal agencies coordinate programs, services, and initiatives impacting the breastfeeding journey for new parents. Expanding and building on existing efforts through the following steps can help address some of today’s most pressing barriers to breastfeeding. 

Each of the recommended actions can be implemented independently and would create meaningful, incremental change for families. However, a comprehensive approach that implements all these recommendations would create the marked shift in the landscape needed to improve breastfeeding initiation and duration rates and establish this administration as a champion for breastfeeding families. 

Agency: Federal Emergency Management Agency (FEMA)
Agency Role: FEMA coordinates within the federal government to make sure America is equipped to prepare for and respond to disasters.
Recommended Action: Require FEMA to participate in the Federal Interagency Breastfeeding Workgroup, a collection of federal agencies that come together to connect and collaborate on breastfeeding issues.
Anticipated Outcome: Increased connection and coordination across agencies.

Agency: Federal Emergency Management Agency (FEMA)
Agency Role: FEMA coordinates within the federal government to make sure America is equipped to prepare for and respond to disasters.
Recommended Action: Update the FEMA Public Assistance Program and Policy Guide to include breastfeeding and lactation as a functional need so that emergency response efforts can include services from lactation support providers.
Anticipated Outcome: Integration of breastfeeding support into emergency response and recovery efforts.

Agency: Office of Management & Budget (OMB)
Agency Role: The OMB oversees the implementation of the President’s vision across the Executive Branch, including through budget development and execution.
Recommended Action: Include funding for the establishment of a national paid family and medical leave program as a priority in the President’s Budget.
Anticipated Outcome: Setting the stage for Congressional action.

Agency: Domestic Policy Council (DPC)
Agency Role: The DPC drives the development and implementation of the President’s domestic policy agenda in the White House and across the Federal government.
Recommended Action: Support the efforts of the bipartisan, bicameral congressional Paid Leave Working Group.
Anticipated Outcome: Setting the stage for Congressional action.
This table summarizes the recommendations, grouped by the federal agency that would be responsible for implementing the change to increase breastfeeding rates in the U.S. for improved health and economic outcomes.

Recommendation 1. Increase access to pasteurized donor human milk by directing the Centers for Medicare & Medicaid Services (CMS) to provide guidance to states on expanding Medicaid coverage. 

Pasteurized donor human milk is lifesaving for vulnerable infants, particularly those born preterm or with serious health complications. Across the United States, milk banks gently pasteurize donated human milk and distribute it to fragile infants in need. This lifesaving liquid gold reduces mortality rates, lowers healthcare costs, and shortens hospital stays. Specifically, the use of donor milk is associated with increased survival rates and lowered rates of infections, sepsis, serious lung disease, and gastrointestinal complications. In 2022, there were 380,548 preterm births in the United States, representing 10.4% of live births, so the potential for health and cost savings is substantial. Data from one study shows that the cost of a neonatal intensive care unit stay for infants at very low birth weight is nearly $220,000 for 56 days. The use of donor human milk can reduce hospital length of stay by 18-50 days by preventing the development of necrotizing enterocolitis in preterm infants. The benefits of human milk extend beyond the inpatient stay, with infants receiving all human milk diets in the NICU experiencing fewer hospital readmissions and better overall long-term outcomes.

Although donor milk has important health implications for vulnerable infants in all communities and can result in significant economic benefit, donor milk is not equitably accessible. While milk banks serve all states, not all communities have easy access to donated human milk. Moreover, many insurers are not required to cover the cost, creating significant barriers to access and contributing to racial and geographic disparities.

To ensure that more babies in need have access to lifesaving donor milk, the administration should work with CMS to expand donor milk coverage under state Medicaid programs. Medicaid covers approximately 40% of all US births and 50% of all early preterm births. Medicaid programs in at least 17 states and the District of Columbia already include coverage of donor milk. The administration can expand access to this precious milk, help reduce health care costs, and address racial and geographic disparities by releasing guidance for the remaining states regarding coverage options in Medicaid.

Recommendation 2. Include infant feeding in Federal Emergency Management Agency (FEMA) emergency planning and response.

Infants and children are among the most vulnerable in an emergency, so it is critical that their unique needs are considered and included in emergency planning and response guidance. Breastfeeding provides clean, optimal nutrition, requires no fuel, water, or electricity, and is available, even in the direst circumstances. Human milk contains antibodies that fight infection, including diarrhea and respiratory infections common among infants in emergency situations. Yet efforts to protect infant and young child feeding in emergencies are sorely lacking, particularly in the immediate aftermath of disasters and emergencies. 

Ensuring access to lactation support and supplies as part of emergency response efforts is essential for protecting the health and safety of infants. Active support and coordination between federal, state, and local governments, the commercial milk formula industry, lactation support providers, and all other relevant actors involved in response to emergencies is needed to ensure safe infant and young child feeding practices and equitable access to support. There are two simple, cost-effective steps that FEMA can take to protect breastfeeding, preserve resources, and thus save additional lives during emergencies.

Recommendation 3. Expand access to paid family & medical leave by including paid leave as a priority in the President’s Budget and supporting the efforts of the bipartisan, bicameral congressional Paid Leave Working Group. 

Employment policies in the United States make breastfeeding harder than it needs to be. The United States is one of the only countries in the world without a national paid family and medical leave program. Many parents return to work quickly after birth, before a strong breastfeeding relationship is established, because they cannot afford to take unpaid leave or because they do not qualify for paid leave programs with their employer or through state or local programs. Nearly 1 in 4 employed mothers return to work within two weeks of childbirth.

Paid family leave programs make it possible for employees to take time for childbirth recovery, bond with their baby, establish feeding routines, and adjust to life with a new child without threatening their family’s economic well-being. This precious time provides the foundation for success, contributing to improved rates of breastfeeding initiation and duration, yet only a small portion of workers are able to access it. There are significant disparities in access to paid leave among racial and ethnic groups, with Black and Hispanic employees less likely than their white non-Hispanic counterparts to have access to paid parental leave. There are similar disparities in breastfeeding outcomes among racial groups.  

Momentum is building to improve the paid family and medical leave landscape in the United States. Thirteen states and the District of Columbia have established mandatory state paid family leave systems. Supporting paid leave has become an important component of candidate campaign plans, and bipartisan support for establishing a national program remains strong among voters. The formation of Bipartisan Paid Family Leave Working Groups in both the House and Senate demonstrates commitment from policymakers on both sides of the aisle. 

By directing the Office of Management and Budget to include funding for paid leave in the President’s Budget recommendation and working collaboratively with the Congressional Paid Leave Working Groups, the administration can advance federal efforts to increase access to paid family and medical leave, improving public health and helping American businesses.  

Conclusion

These three strategies offer the opportunity for the White House to make an immediate and lasting impact by protecting infant nutrition security and addressing disparities in breastfeeding rates, on day one of the Presidential term. A systems approach that utilizes multiple strategies for integrating breastfeeding into existing programs and efforts would help shift the paradigm for new families by addressing long-standing barriers that disproportionately affect marginalized communities—particularly Black, Indigenous, and families of color. A clear and concerted effort from the Administration, as outlined, offers the opportunity to benefit all families and future generations of American babies. 

The administration’s focused and strategic efforts will create a healthier, more supportive world for babies, families, and breastfeeding parents, improve maternal and child health outcomes, and strengthen the economy. This administration has the chance to positively shape the future for generations of American families, ensuring that every baby gets the best possible start in life and that every parent feels empowered and supported.

Now is the time to build on recent momentum and create a world where families have true autonomy in infant feeding decisions. A world where paid family leave allows parents the time to heal, bond, and establish feeding routines; communities provide equitable access to donor milk; and federal, state, and local agencies have formal plans to protect infant feeding during emergencies, ensuring no baby is left vulnerable. Every family deserves to feel empowered and supported in making the best choices for their children, with equitable access to resources and support systems.

This policy memo was written with support from Suzan Ajlouni, Public Health Writing Specialist at the U.S. Breastfeeding Committee. The policy recommendations have been identified through the collective learning, idea sharing, and expertise of USBC members and partners.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Isn’t the choice to breastfeed a personal one?

Rather than being a matter of personal choice, infant feeding practice is informed by circumstance and level (or lack) of support. When roadblocks exist at every turn, families are backed into a decision because the alternatives are not available, attainable, or viable. United States policies and infrastructure were not built with the realities of breastfeeding in mind. Change is needed to ensure that all who choose to breastfeed are able to meet their personal breastfeeding goals, and society at large reaps the beneficial social and economic outcomes.

How much would it cost to establish a national paid family and medical leave program?

The Fiscal Year 2024 President’s Budget proposed to establish a national, comprehensive paid family and medical leave program, providing up to 12 weeks of leave to allow eligible workers to take time off to care for and bond with a new child; care for a seriously ill loved one; heal from their own serious illness; address circumstances arising from a loved one’s military deployment; or find safety from domestic violence, sexual assault, or stalking. The budget recommendation included $325 billion for this program. It’s important to look at this with the return on investment in mind, including improved labor force attachment and increased earnings for women; better outcomes and reduced health care costs for ill, injured, or disabled loved ones; savings to other tax-funded programs, including Medicaid, SNAP, and other forms of public assistance; and national economic growth, jobs growth, and increased economic activity.

How will we know if these efforts are having an impact?

There are a variety of national monitoring and surveillance efforts tracking breastfeeding initiation, duration, and exclusivity rates that will inform how well these actions are working for the American people, including the National Immunization Survey (NIS), Pregnancy Risk Assessment and Monitoring System (PRAMS), Infant Feeding Practices Study, and National Vital Statistics System. The CDC Breastfeeding Report Card is published biannually to bring these key data points together and place them into context. Significant improvements in the data have already been seen across recent decades, with breastfeeding initiation rates increasing from 73.1 percent in 2004 to 84.1 percent in 2021.

Is there enough buy-in from organizations and individuals to support these systemic changes?

The U.S. Breastfeeding Committee is a coalition bringing together approximately 140 organizations from coast to coast representing the grassroots to the treetops – including federal agencies, national, state, tribal, and territorial organizations, and for-profit businesses – that support the USBC mission to create a landscape of breastfeeding support across the United States. Nationwide, a network of hundreds of thousands of grassroots advocates from across the political spectrum support efforts like these. Together, we are committed to ensuring that all families in the U.S. have the support, resources, and accommodations to achieve their breastfeeding goals in the communities where they live, learn, work, and play. The U.S. Breastfeeding Committee and our network stand ready to work with the administration to advance this plan of action.

Supporting Device Reprocessing to Reduce Waste in Health Care

The U.S. healthcare system produces 5 million tons of waste annually, or approximately 29 pounds per hospital bed daily. Roughly 80 percent of the healthcare industry’s carbon footprint comes from the production, transportation, use, and disposal of single-use devices (SUDs), which are pervasive in the hospital. Notably, 95% of the environmental impact of single-use medical products results from the production of those products. 

While the Food and Drug Administration (FDA) oversees new devices being brought to market, it is up to the manufacturer to determine whether a device will be marketed as single-use or multiple-use. Manufacturers have a financial incentive to market devices as “single-use” or “disposable,” since marketing a device as reusable requires expensive cleaning validations.

In order to decrease healthcare waste and environmental impact, the FDA should lead in identifying reusable devices that can be safely reprocessed and in incentivizing manufacturers to test the reprocessing of their devices. This will require the FDA to strengthen its management of single-use and reusable device labeling. Further, the Veterans Health Administration, the nation’s largest healthcare system, should reverse its prohibition on reprocessed SUDs and become a national leader in the reprocessing of medical devices.

Challenge and Opportunity

While healthcare institutions are embracing decarbonization and waste reduction plans, they cannot do this effectively without addressing the enormous impact of single-use devices (SUDs). The majority of research literature concludes that SUDs are associated with higher levels of environmental impact than reusable products. 

FDA regulations governing SUD reprocessing make it extremely challenging for hospitals to reprocess low-risk SUDs, which is inconsistent with the FDA’s “least burdensome provisions.” The FDA requires hospitals or commercial SUD reprocessing facilities to act as the device’s manufacturer, meaning they must follow the FDA’s rules for medical device manufacturers’ requirements and take on the associated liabilities. Hospitals are not keen to take on the liability of a manufacturer, yet commercial reprocessors do not offer many lower-risk devices that can be reprocessed. 

As a result, hospitals and clinics are no longer willing to sterilize SUDs through methods like autoclaving, despite documentation showing that sterilization is safe and precedent showing that similar devices have been safely sterilized and reused for many years without adverse events. Many devices, including pessaries for pelvic organ prolapse and titanium phacoemulsification tips for cataract surgery, can be safely reprocessed in their clinical use. These products, given their risk profile, need not be subject to the FDA’s full medical device manufacturer requirements.  

Further, manufacturers are incentivized to bring SUDs to market more quickly than devices that may be reprocessed. Manufacturers often market devices as single-use solely because the original manufacturer chose not to conduct expensive cleaning and sterilization validations, not because such cleaning and sterilization validations cannot be done. FDA regulations that govern SUDs should be better tailored to each device so that clinicians on the frontlines can provide appropriate and environmentally sustainable health care. 

Reprocessed devices cost 25 to 40% less. Thus, the use of reprocessed SUDs can reduce costs in hospitals significantly — about $465 million in 2023. Per the Association of Medical Device Reprocessors, if the reprocessing practices of the top-performing 10% of hospitals were matched across all hospitals that use reprocessed devices, U.S. hospitals could have saved an additional $2.28 billion that same year. Indeed, enabling and encouraging the use of reprocessed SUDs can yield significant cost reductions without compromising patient care. 

Plan of Action

As the FDA began regulating SUD reprocessing in 2000, it is imperative that the FDA take the lead on creating a clear, streamlined process for clearing or approving reusable devices in order to ensure the safety and efficacy of reprocessed devices. These recommendations would permit healthcare systems to reprocess and reuse medical devices without fear of noncompliance findings from the Joint Commission or the Centers for Medicare and Medicaid Services, which rely on FDA regulations. Further, the nation’s largest healthcare system, the Veterans Health Administration, should become a leader in medical device reprocessing and showcase the standard of practice for sustainable healthcare.

  1. FDA should publish a list of SUDs that have a proven track record of safe reprocessing to empower hospitals to reduce waste, costs, and environmental impact without compromising patient safety. The FDA should change the labels of single-use devices to multi-use when reuse by hospitals is possible and validated via clinical studies, as the “single-use” label has promoted the mistaken belief that SUDs cannot be safely reprocessed. Per the FDA, the single-use label simply means a given device has not undergone the original equipment manufacturer (OEM) validation tests necessary to label a device “reusable.” The label does not mean the device cannot be cleared for reprocessing. 
  2. In order to help governments and healthcare systems prioritize the environmental and cost benefits of reusable devices over SUDs, the FDA should incentivize applications for reusable or commercially reprocessable devices, such as by expediting review. The FDA can also incentivize use of reprocessed devices through payments to hospitals for meeting reprocessing benchmarks. 
  3. The FDA should not subject low-risk devices that can be safely reprocessed for clinical use to full device manufacturer requirements. The FDA should further support healthcare procurement staff by creating an accessible database of devices cleared for reprocessing and alerting healthcare systems about regulated reprocessing options. In doing so, the FDA can reduce the burden on hospitals reprocessing low-risk SUDs and encourage healthcare systems to sterilize SUDs through methods like autoclaving. 
  4. As the only major health system in the U.S. to prohibit the use of reprocessed SUDs, the U.S. Veterans Health Administration should reverse its prohibition as soon as possible. This prohibition likely remains because of outdated determinations of risk, and it comes at major cost to the environment and to Americans. Reversing it would be consistent with the FDA’s conclusions that reprocessed SUDs are safe and effective.  
  5. FDA should recommend that manufacturers publicly report the materials used in the composition of devices so that end users can more easily compare products and determine the environmental impact of devices. As explained by AMDR, some OEM practices discourage or fully prevent the use of reprocessed devices. It is imperative that the FDA vigorously track and impede these practices. Not only will requiring public reporting of device composition help healthcare buyers make more informed decisions, it will also help promote a more circular economy that supports sustainability efforts. 

Conclusion

To decrease costs, waste, and environmental impact, the healthcare sector urgently needs to increase its use of reusable devices. One of the largest barriers is FDA regulation that imposes needlessly stringent requirements on hospitals, hindering the adoption of less wasteful, less costly reprocessed devices.

The FDA’s critical role in medical device labeling, and in clearing or approving more devices as reusable, has downstream implications for many other regulatory and oversight bodies, including the Centers for Medicare & Medicaid Services (CMS), the Association for the Advancement of Medical Instrumentation (AAMI), the Joint Commission, hospitals, health care offices, and health care providers. It is essential for the FDA to step up and take the lead in revising the device reprocessing pipeline. 

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Taking on the World’s Factory: A Path to Contain China on Legacy Chips

Last year the Federation of American Scientists (FAS), Jordan Schneider (of ChinaTalk), Chris Miller (author of Chip War) and Noah Smith (of Noahpinion) hosted a call for ideas to address the U.S. chip shortage and Chinese competition. A handful of ideas were selected based on their feasibility and bipartisan nature. This memo is one of them.

Challenge and Opportunity

The intelligent and autonomous functioning of physical machinery is one of the key societal developments of the 21st century, changing and assisting the way we live our lives. In this context, semiconductors, once a niche good, now form the physical backbone of automated and intelligent systems. The supply chain disruptions of 2020 laid bare the vulnerability of the global economy in the face of a chip shortage, which created scarcity and inflation in everything from smartphones to automobiles. In an even more extreme case, a lack of chips could impact critical infrastructure, such as by squeezing the supply of medical devices necessary for many modern procedures. 

The deployment of partially or fully automated warfighting further means that Artificial Intelligence (AI) systems now have direct and inescapable impacts on national security. With great power conflict looming on the horizon, threats toward and emanating from the semiconductor supply chain have become even more evident. 

In this context, the crucial role of the People’s Republic of China (PRC) in chip production represents a clear and present danger to global security. Although the PRC currently trails in the production of cutting-edge sub-16 nm chips used for the development of AI models, the country’s market dominance in the field of so-called “trailing edge chips” of 28 nm or above has a much wider impact due to their ubiquity in all traditional use cases outside of AI. 

The most important harm of this is clear: by leveraging its control of a keystone international industry, the Chinese Communist Party will be able to exert greater coercive pressure on other nations. In a hypothetical invasion of Taiwan, this could mean credibly threatening the U.S. and other democratic countries with a semiconductor embargo should they intervene. Even more dramatically, given the reliance of modern military manufacturing on digital equipment, in the case of a full-scale war between the People’s Republic of China and the United States, China could produce enormous amounts of materiel while severely capping the ability of the rest of the world to meet its challenge. 

A secondary but significant risk involves the ability of China to build defects or vulnerabilities into its manufactured hardware. Control over the physical components that underlie critical infrastructure, or even military hardware, could allow targeted action to paralyze U.S. society or government in the face of a crisis. While defense and critical infrastructure supply chains represent only a small fraction of all semiconductor-reliant industrial products, mitigation of this harm represents a baseline test of the ability of the United States to screen imports relevant to national security. 

Beyond Subsidies: A Blueprint for Global Manufacturing

Wresting back control of the traditional semiconductor supply chain from China is widely recognized as a prime policy goal for the United States and allied democratic countries. The U.S. has already begun with the passage of the CHIPS and Science Act in 2022, providing subsidies and tax incentives to encourage the creation of new fabrication plants (fabs) in the United States. But a strategic industry cannot survive on subsidies alone. Preferential tax treatment and government consumption may stand up some degree of semiconductor manufacturing, but that capacity cannot rival China’s if the PRC is able to establish itself as the primary chip supplier in both its domestic market and the rest of the world.

Nascent American foundries and the multinational companies that operate them must be able to survive in a competitive international environment without relying on unpredictable future support. They must do this while fighting against PRC-backed chip manufacturers operating with both a strong domestic market and massively developed economies of scale. Given the sheer size of both the Chinese manufacturing base and its domestic market, the U.S. cannot hope to accomplish this goal alone. Only a united coalition of developed and developing countries can hope to compete. 

The good news is that the United States and its partners in Europe and the Indo-Pacific have all the necessary ingredients to realize this vision. Developing countries in South and Southeast Asia and the Pacific have a vast and expanding industrial base, augmented by Special Economic Zones and technical universities. America and its developed partners bring the capital investment and intellectual property necessary to kickstart semiconductor production abroad. 

The goal of a rest-of-world semiconductor alliance will be twofold: to drive down the cost of chips made in the U.S. and allied countries while simultaneously pushing up the cost of purchasing legacy semiconductors produced in China. Only when these two price trajectories intersect will the balance of global trade begin to tip back toward the democratic world. The first two slates of policy recommendations will focus on decreasing the cost of non-China production and increasing the cost of Chinese imports, respectively. 

Finally, even in the case in which non-Chinese-influenced semiconductors become competitive with those made in the PRC, it will likely be impossible to fully exclude Chinese hardware from American and allied markets. Therefore, the final raft of policy recommendations will focus on mitigating the threat of Chinese chips in American and allied markets, including possible risks of inbuilt cyber vulnerability. 

The creation of an autonomous and secure supply chain entirely outside of China is possible. The challenge will be to achieve semiconductor independence in time to prevent China from successfully weaponizing chip dominance in a future war. With clashes escalating in the South China Sea and threats across the Taiwan Strait growing ever more ominous, the clock is ticking. But America’s Indo-Pacific partners are also increasingly convinced of the urgency of cooperation. The policies presented aim to make maximum use of this critical time to build strategic independence and ensure peace. 

Plan of Action

Recommendation #1: Boosting non-China Manufacturing 

The first and most crucial step toward semiconductor sovereignty is to build and strengthen a semiconductor supply chain outside of China. No matter the ability to protect domestic markets from Chinese competition, U.S. industrial productivity relies on cheap and reliable access to chips. Without this, it is impossible to ramp up industrial production in key industries from defense contracting to consumer electronics. 

According to a report released by the CSIS Wadhwani Center for AI and Advanced Technologies, the global semiconductor value chain broadly involves three components. At the top is design, which involves creating Electronic Design Automation (EDA) software, generating IP, and producing manufacturing equipment. Next is fabrication, which entails printing and manufacturing the wafers that are the key ingredient for finished semiconductors. The final stage is assembly, test, and packaging (ATP), which entails packaging wafers into fully-functioning units that are fit for sale and verifying they work as expected. 

Of these three, the United States possesses the greatest competitive advantage in the field of design, where American intellectual property and research prowess drive many of the innovations of the modern semiconductor industry. Electronic Design Automation software, the software that allows engineers to design chips, is dominated by three major firms, of which two, Cadence and Synopsys, are American companies. The third, Mentor Graphics, is a U.S.-based subsidiary of the German industrial company Siemens. U.S. Semiconductor Manufacturing Equipment (SME) is also an important input in the design stage, with U.S.-based companies currently comprising over 40 percent of the global market share. The United States and Japan alone account for more than two-thirds. 

Meanwhile, the PRC has aggressively ramped up wafer production, aiming to make China an integral part of the global supply chain, often stealing foreign intellectual property along the way to ease its production. Recent reported efforts by the PRC to illicitly acquire U.S. SME underscore that China recognizes the strategic importance of both IP and SME as primary inputs to the chip making process. By stealing the products of American research, China further creates an unfair business environment in which law-abiding third countries are unable to keep up with Chinese capacity. 

Semiconductor Lend-Lease: A Plan for the 21st Century 

The only way for the international community to compete is to level the playing field. In order to do so, we propose that the United States encourage and incentivize its companies to license their IP and SME to third countries looking to build wafer capacity. 

Before the United States officially entered into the Second World War, the administration of President Franklin Delano Roosevelt undertook the “Lend-Lease” policy, agreeing to continue to supply allied countries such as Great Britain with weapons and materiel, without immediate expectation of repayment. Recently, Lend-Lease has been resurrected in the modern context of the defense of Ukraine, with the United States and European powers supplying Ukraine with armaments and munitions Ukrainian industry could not produce itself. 

The crucial point of Lend-Lease is that it takes the form of immediate provision of critical outputs, rather than simple monetary donations, which require time and investment to convert into the desired goods. World War II-era Lend-Lease was not based on a long-term economic or development strategy, but rather on the short-term assessment that without American support, the United Kingdom would fall to Nazi occupation. Given the status of semiconductors as a key strategic good, the parallels with a slow-rolling crisis in the South China Sea and the Taiwan Strait become clear. While in the long term, South, East, and Southeast Asia will likely be able to level up with China in the field of semiconductors, the imminent threats of both Chinese wafer dominance and a potential invasion of Taiwan mean that this development must be accelerated. Rather than industrial and munitions production, as in 1941, the crucial ingredients the United States brings to this process today are intellectual property, design tools, and SME. These are thus the tools that should be leased to U.S. partners and allies, particularly in the Indo-Pacific. By allowing dedicated foreign partners to take advantage of the gains of American research, we will allow them to level up with China and truly compete in the international market. 

Although the economics of such a plan are complex, we present a sketch here of how one iteration might look. The United States federal government could negotiate with the “Big Three” EDA firms to purchase transferable licenses for their EDA software. The U.S. could then “Lend-Lease” licenses to major semiconductor producers in partner countries such as Singapore, Malaysia, Vietnam, the Philippines, or even in Latin America. The U.S. could license this software on the condition that products produced by such companies would be made available at discounted prices to the American market, and that these companies disavow further investment from or cooperation with Chinese entities. Partner companies in the Indo-Pacific could also agree to share any research results produced using American IP, making further advancements available to American companies in the global market. 

When growing companies attain a predetermined level of market value, they could offer compensation to the United States in the form of fees or stock options, which would be collected by the United States under the terms of the treaty and awarded to the EDA firms. Similar approaches could be taken toward licensing American IP, or even physically lending SME to countries in need. 

Licensing American research to designated partner countries comes with some risks and challenges. For one, it creates a greater attack surface for Chinese companies hoping to steal software and design processes created in the United States. Preventing such theft is already highly difficult, but the U.S. should extend cooperation in standardizing industrial security practices for strategic industries. 

A recent surge in fab construction in countries such as Singapore and India means that the expansion of the global semiconductor industry is already in motion. The United States can leverage its expertise and research prowess to speed up the growth of wafer production in third countries, while simultaneously countering China’s influence on global supply chains. 

A Semiconductor Reserve? 

The comparison of semiconductors to oil is frequently made and has a key strategic justification: for more than a century, oil was a key input to virtually all industrial processes, from transportation to defense production. Semiconductors now play a similar role, serving as a key ingredient in manufacturing. 

A further ambitious policy to mitigate the harm of Chinese chips is to create a centralized reservoir of semiconductors, akin to the Strategic Petroleum Reserve. Such a reserve would be operated by the Commerce Department and maintain holdings of both leading- and trailing-edge chips, obtained from free dealings on the open market. By taking advantage of bulk pricing and guaranteed, recurring contracts, the government could acquire a large number of semiconductors at reasonable prices, sourced exclusively from American and partner nation foundries. 

In the event of a chip shortage, the United States could sell chips back into the market, allowing key industries to continue to function with a trusted source of secure chips. In the absolute worst case of a geopolitical crisis involving China, a strategic stockpile would create a bulwark for the American defense industry to continue producing armaments during a period of disrupted chip supply. This buffer would be intended to give domestic and allied production time to ramp up and continue supplying security functions. 

Beyond crisis response, direct U.S. participation in the chip market would also support economic development. By making the U.S. a first-order purchaser of semiconductors at an industrial scale, the United States could create a reliable source of demand for fledgling businesses. The United States could serve as a transitory consumer, buying up excess capacity when demand is weak and ensuring that new foundries are both capable of operation and shielded from attempts by China to smother demand. The direct participation of the U.S. in the global semiconductor market would help to kickstart industry in partner countries while providing a further incentive to build collaboration with the United States. 
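To make the reserve's stabilizing role concrete, here is a minimal, purely illustrative Python sketch of one possible buy-and-release rule. The price thresholds, reserve capacity, and function name are assumptions invented for illustration, not parameters proposed in this memo.

```python
# Illustrative sketch only (assumed thresholds and names): a simple price-band
# rule for a strategic semiconductor reserve that buys excess capacity when the
# market is weak and releases chips during a shortage.

def reserve_action(market_price: float, floor: float, ceiling: float,
                   holdings: int, capacity: int) -> str:
    """Decide whether the reserve buys, releases, or holds this period."""
    if market_price <= floor and holdings < capacity:
        return "buy"        # demand is weak: absorb excess allied fab output
    if market_price >= ceiling and holdings > 0:
        return "release"    # shortage conditions: sell chips back into the market
    return "hold"

# Hypothetical legacy-chip price band of $2.00-$3.50 per unit.
print(reserve_action(1.80, floor=2.00, ceiling=3.50, holdings=5_000_000, capacity=10_000_000))  # buy
print(reserve_action(4.10, floor=2.00, ceiling=3.50, holdings=5_000_000, capacity=10_000_000))  # release
```

In practice the thresholds would be set by market analysis and the reserve's procurement contracts; the point of the sketch is only that the reserve acts countercyclically, absorbing supply in gluts and releasing it in shortages.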

Recommendation #2: Fencing in Chinese Semiconductor Exports 

A second step toward semiconductor independence will be containing Chinese exports, with the goal of reducing China’s access to global markets and starving its industrial machine. 

The most direct approach to reducing demand for Chinese semiconductors is the imposition of tariffs. The U.S. consumer market is a potent economic force. By squeezing Chinese manufacturers seeking to compete in the U.S. market, the United States can avoid feeding additional production capacity that might be weaponized in a future conflict. These tariffs can take a variety of forms and justifications, from increased probes into Chinese labor standards and human rights practices to dumping investigations pursued at the WTO. The deep challenge is how to enforce these tariffs once they come into play and how to coordinate them with international partners. 

Broad Tariffs, Deep Impact 

No rule works without an enforcement mechanism, and in the worst case, a strong public stance against Chinese semiconductors that is not effectively implemented may actually weaken U.S. credibility and embolden the Chinese government. Therefore, it is imperative to have unambiguous rules on trade restrictions, with a strong enforcement mechanism to match. 

These measures should not just apply to chips that are bought directly from China but rather include those that are assembled and packaged in third countries to circumvent U.S. tariffs. The maximal interpretation of the tariffs mandate would further include a calculated tariff on products that make use of Chinese semiconductors as an intermediate input. 

In the case of semiconductors made in China but assembled, tested, or packaged in other countries, we suggest expanding the Biden Administration’s 50% tariff on Chinese semiconductors to cover all chips and all consumer or industrial products that include a wafer manufactured in the People’s Republic of China, with the duty assessed on the wafer’s international market rate. That is, if an Indonesian car manufacturer purchases a wafer manufactured in China with a market value of $3,000 and uses it to manufacture a $35,000 car, importing this vehicle to the United States would be subject to an additional tax of $1,500. 
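As a minimal sketch of the arithmetic described above, the following Python snippet computes the duty on a finished good that contains a Chinese-made wafer. The function name and inputs are illustrative assumptions; the rate simply mirrors the 50% figure cited in the text.

```python
# Minimal sketch (illustrative only): duty assessed on the Chinese wafer content
# of an imported product, not on the finished good's full price.

def chinese_wafer_duty(wafer_market_value: float, rate: float = 0.50) -> float:
    """Tariff owed for one PRC-fabricated wafer embedded in an imported product."""
    if wafer_market_value < 0 or rate < 0:
        raise ValueError("values must be non-negative")
    return wafer_market_value * rate

# The example from the text: a $3,000 Chinese wafer inside a $35,000 vehicle.
duty = chinese_wafer_duty(3_000)         # 1,500.0
landed_cost = 35_000 + duty              # 36,500.0
print(f"Additional duty: ${duty:,.0f}; total import cost: ${landed_cost:,.0f}")
```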

While fears abound of the inflationary effects of additional tariffs, the tariffs are necessary for the creation of an incentive structure that properly contains Chinese manufacturing. In the absence of proportional tariffs on chips and products assembled outside China, Chinese fabs will be able to circumvent U.S. trade restrictions by boosting wafer production that then takes advantage of assembly, test, and packaging in third countries. Further, it is imperative for the United States not only to restrict Chinese chip growth but to encourage the development of domestic and foreign non-China chip manufacturers. Imposing tariffs on Chinese chips as an intermediate input is necessary to create a proper competitive environment. Ultimately, the goal is to ensure a diversification of fab locations beyond China that will create lower prices for consumers overall. 

How would tariffs on final goods containing Chinese chips be enforced? The policy issue of sanctioning and restricting an intermediate product is, unfortunately, not new. It is well known that Chinese precursor chemicals, often imported into Mexico, form much of the raw inputs for the deadly fentanyl that is driving the United States’ opioid epidemic. Taking a cue from this example, we further suggest the creation of an internationally maintained database of products manufactured using Chinese semiconductors. As inspiration, the National Institutes of Health / NCATS maintains the Global Substance Registration System, a database that categorizes chemical substances along with their commonly used names, regulatory classifications, and relationships with other chemicals. Such a database could be administered by the Commerce Department’s Bureau of Industry and Security, allowing the personnel who enforce the tariffs to also collect all relevant information in one place. 

Companies importing products into the U.S. would be required to register the make and model of all Chinese chips used in each of their products so that the United States and participating countries could impose corresponding sanctions. Products imported to the U.S. would be subject to random checks involving disassembly in Commerce Department workshops, with failure to report a sanctioned semiconductor component making a company subject to additional tariffs and fines. Manual disassembly is painstaking and difficult, but regular, randomized inspections of imported products are the only way to truly verify their content. 
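To illustrate what the registration requirement might involve in practice, below is a minimal sketch of what one registry record could contain. The field names, structure, and example values are assumptions loosely inspired by the Global Substance Registration System analogy above, not a schema used by any agency.

```python
# Illustrative sketch (assumed schema, not an existing system): one record in a
# hypothetical registry of imported products that contain Chinese-made chips.
from dataclasses import dataclass, field


@dataclass
class ChipComponent:
    make: str                 # chip manufacturer
    model: str                # part number as declared by the importer
    fab_country: str          # country where the wafer was fabricated
    atp_country: str          # country of assembly, test, and packaging


@dataclass
class ImportedProductRecord:
    importer: str
    product_name: str
    hs_code: str                              # tariff classification (hypothetical value below)
    chinese_chips: list[ChipComponent] = field(default_factory=list)

    def declared_chinese_content(self) -> bool:
        """True if the importer has declared any PRC-fabricated chip."""
        return any(c.fab_country == "CN" for c in self.chinese_chips)


record = ImportedProductRecord(
    importer="Example Importer LLC",
    product_name="Industrial controller",
    hs_code="8537.10",
    chinese_chips=[ChipComponent("ExampleFab", "MCU-101", "CN", "MY")],
)
print(record.declared_chinese_content())  # True
```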

The maintenance of such a database would bring follow-on national security benefits, in that the disclosure of any future vulnerability in a Chinese electronic component would allow quick diagnosis of what systems, including critical infrastructure, might be immediately vulnerable. We believe that creating an enforcement infrastructure that coordinates information between the U.S. and partner countries is a necessary first step to ensuring that tariffs are effective. 

Zone Defense: International Cooperation in Semiconductor Tariffs 

At first glance, tariff action by the United States on Chinese-produced goods would appear to be a difficult coordination problem. By voluntarily declining an influx of cheaply-priced goods, American consumers exacerbate an existing trade glut in world semiconductor markets, allowing and incentivizing other nations to purchase these chips in greater volume and at a lower price. 

However, rather than dissuading further sanctions in world markets, tariffs may actually spur further coordination in blocking Chinese imports. The Biden Administration’s imposition of tariffs on Chinese electric vehicles coincided with parallel sanctions imposed by the European Union, India, and Brazil. As Chinese overcapacity in EVs is rejected by U.S. markets, other countries face intensified concerns about the potential for below-price “dumping” of products that could harm domestic industry. 

However, this ad-hoc international cooperation is still in a fragile and tentative stage and must be encouraged in order to create an “everywhere but China” semiconductor supply chain. Further, while countries impose tariffs to protect existing automotive and steel industries, global semiconductor manufacturing is currently concentrated in the Indo-Pacific. Thus, coordinating against China calls on countries not just to impose tariffs to protect existing industries, but to impose “nursery” tariffs that will protect nascent semiconductor development, even in places where a domestic industry does not yet exist. 

A commonsense first step to building an atmosphere of trust is to take actions protecting partner countries from retaliation in the form of Chinese trade restrictions. In response to EU tariffs on Chinese EVs, Beijing has already threatened retaliatory restrictions on chicken feet, pork, and brandy. For a bloc as large as the European Union, these restrictive sanctions can irritate an important political constituency. For a smaller or less economically-powerful country, these measures might be decisive in sending the message that semiconductor tariffs are not worth the risk. 

The United States should negotiate bilateral treaties with partner nations to erect tariffs against Chinese manufacturing, with the agreement that, in the case of Chinese retaliation against predetermined fundamental national industries, the United States government will buy up excess capacity at slightly discounted prices and release it to the American market. This preemptive protection of allied trade will blunt China’s ability to harm U.S. partners and allies. Raising tariffs on imported goods also imposes costs on the Chinese consumer, meaning that in the best case, the decreased effectiveness of these tools will deter the PRC from attempting such measures in the first place. 

Recommendation #3: Mitigating the Threat of Existing Chips 

No matter the success of the previous measures, it will be impossible to keep Chinese products entirely outside the U.S. market. Therefore, a strategy is required for managing the operational risks posed by Chinese chips that have and will exist inside the U.S. domestic sphere. 

Precisely defining the scope of the threat is very important. A narrow definition might allow threats to pass through, while an overly wide definition may expend time and resources over nothing. A recent British effort to exclude Chinese-made cap badges presents a cautionary tale. By choosing a British supplier over an existing Chinese one after the acquisition process was already underway, the UK incurred an additional delay in its military pipeline, not to mention the additional confusion caused by such an administrative pivot. Implanting GPS tracking or listening devices within small pieces of metal made by one company within the Chinese supply chain seems both impractical and far-fetched, though the PRC surely enjoys the chaos and expense such a panic can cause. 

We consider it analogously unlikely that China is currently aiming to insert intentional defects into its semiconductor manufacturing. First, individual wafers are optimized for extremely low production cost, meaning that inserting a carefully designed (and hidden) flaw would introduce additional costs that could compromise the goal of low-cost manufacturing. Any kind of remotely activated “kill switch” would require some kind of wireless receiver, and a receiver of any reasonable strength could not be effectively hidden on a large scale. Second, such a vulnerability would have to be inserted only into wafers that are eventually purchased by the U.S. and its allies. If not, then any attempt to activate a remote exploit could risk compromising uninvolved countries or even the Chinese domestic market, either by accidentally triggering unintended chips or by providing a hardware vulnerability that could be re-used by Western cyber operations. Deliberately planting such vulnerabilities would thus require not just extreme technical precision, but a careful accounting of where vulnerable chips arrive in the supply chain.

Nonetheless, the existence of Chinese chips in the American market can accomplish much without explicitly-designed defects or “kill switches”. Here, a simple lack of transparency may be enough. China currently requires that all software vulnerabilities be reported to the Ministry of Industry and Information Technology, but does not have any corresponding public reporting requirement. This raises the fear that the Chinese government may be “stockpiling” vulnerabilities in Chinese-produced products, which may be used in future cyber operations. Here, China does not need to explicitly build backdoors into its own hardware but may simply decline to publicly disclose vulnerabilities in software in order to attack the rest of the world. 

Shining a Light on Untrusted Hardware 

The likelihood of cooperation between Chinese industry and the CCP exposes a potentially important risk. Chinese software is often deployed atop or alongside Chinese semiconductors and is particularly dangerous in the form of hardware drivers, the “glue code” that ties together software with low-level hardware components. These drivers by default operate with high privileges, and are typically closed-source and thus difficult to examine. We believe that vulnerable drivers may be a key vector of Chinese espionage or cyber threats. In 2019, Microsoft disclosed the existence of a privilege escalation vulnerability found in a Huawei driver. Although Huawei cooperated with Microsoft, it is unclear under the current legal regime whether the discovery of similar vulnerabilities by Huawei would be reported and patched, or whether they would be kept as an asset by the Chinese government. The proliferation of Chinese drivers packaged with cheap hardware thus means that the Chinese Communist Party will have access to a broad, and potentially pre-mapped, attack surface with which to exploit U.S. government services. 

The first policy step here is obvious: banning the use of Chinese chips in U.S. federal government acquisitions. This has already been proposed as a Defense Department regulation set to take effect in 2027. If possible, this date should be moved up to 2026 or earlier. In order to enforce this ban, continuous research should be undertaken to map supply chains that produce U.S. government semiconductors. How to accelerate and enforce this ban is an ongoing policy question that is beyond the scope of this paper. 

However, a deeper question is how to protect the myriad components of critical infrastructure, both formal and informal. The Cybersecurity and Infrastructure Security Agency (CISA) has defined 16 sectors of critical infrastructure whose failure could materially disrupt or endanger the lives of American citizens. The recent discovery of the Volt Typhoon threat group revealed the willingness of the Chinese government to penetrate U.S. critical infrastructure using vulnerable components. 

While some of the 16 CISA sectors, such as Government Services and the Defense Industrial Base, are within the purview of the federal government, many others, such as Healthcare, Food and Agriculture, and Information Technology, are run via complex partnerships between State, Local, Tribal, and Territorial (SLTT) governments and private industry. Although a best effort should be made to insulate these sectors from over-reliance on China, fully quarantining them from Chinese chips is simply unrealistic. Therefore we should explore proactive efforts at mitigation in the case of disruption. 

A first step would be to establish a team at CISA to decompile or reverse-engineer the drivers for Chinese hardware that is known to operate within U.S. critical infrastructure. Like manual disassembly, this is an expensive and arduous process, but it has the advantage of reducing an unknown or otherwise intractable problem to an issue of engineering. In this case, special care should be taken to catalog and prioritize pieces of Chinese hardware that impact the most critical infrastructure systems, such as Programmable Logic Controllers (PLCs) in energy infrastructure and processors in hospital databases. This approach can be coordinated with the threat database described in the previous section to disassemble and profile the drivers of the highest-impact semiconductor products first. If any vulnerabilities are found, warnings can be issued to critical infrastructure providers, and patches issued to the relevant parties. 
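As a rough illustration of how such a team might order its work, here is a minimal Python sketch of a triage heuristic. The scoring weights, fields, and example components are invented for illustration and do not reflect any CISA methodology.

```python
# Minimal sketch (hypothetical scoring, not a CISA methodology): rank Chinese
# hardware components so the highest-impact drivers are reverse-engineered first.
from dataclasses import dataclass


@dataclass
class HardwareComponent:
    vendor: str
    model: str
    sectors: set[str]          # critical-infrastructure sectors where it is deployed
    deployments: int           # estimated number of installations

# Assumed weighting: sectors with direct life-safety or energy consequences count more.
SECTOR_WEIGHT = {"Energy": 3, "Healthcare": 3, "Water": 3}

def triage_score(c: HardwareComponent) -> float:
    sector_score = sum(SECTOR_WEIGHT.get(s, 1) for s in c.sectors)
    return sector_score * c.deployments

inventory = [
    HardwareComponent("VendorA", "PLC-200", {"Energy", "Water"}, 1_200),
    HardwareComponent("VendorB", "DB-Accel", {"Healthcare"}, 300),
]
for c in sorted(inventory, key=triage_score, reverse=True):
    print(f"{c.vendor} {c.model}: score {triage_score(c):.0f}")
```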

Brace for Impact: Building Infrastructure Resiliency 

Even in the case that neither the reduction of Chinese hardware nor the proactive search for driver vulnerabilities is able to prevent a Chinese attack, the United States should be prepared to mitigate the harms of a cyber crisis. 

A further step toward this goal would be the institution of resiliency protocols and drills for designated critical infrastructure providers. The 2017 WannaCry ransomware attack substantially incapacitated the UK National Health Service by locking providers out of Electronic Medical Record (EMR) systems. Mandating routine paper backups of digital medical records is one example of a resiliency strategy that could be deployed to ensure emergency functioning even in the case of a major service disruption. 

A further step to protect critical infrastructure is to mandate regular cyber training for infrastructure providers. CISA could work in cooperation with State, Local, Tribal, and Territorial regulatory bodies to identify critical pieces of infrastructure that could be subject to attack. CISA could develop hypothetical scenarios involving outages of critical Information Technology services, and work with local infrastructure providers, such as hospitals, municipal water services, transit providers, and the like, to create plans for how to continue to operate in the event of a crisis. CISA could also prepare baseline strategies, such as having non-internet connected control systems or offline backups of critical information. Such strategies could be adapted by individual infrastructure providers to best protect their services in the event of an attack. These plans could then be carried out in mock ‘cyber drills’ to exercise preparedness in the event of an incident. 

Ultimately, plans of this kind only prepare for service disruptions and do not address the longer-reaching impacts of breaches of confidentiality or the targeted manipulation of sensitive data. However, as we believe that the likelihood of targeted or sophisticated vulnerabilities in Chinese chips is relatively low, these kinds of brute force attacks are the most likely threat model. Preparing for the most basic and unsophisticated service disruptions is a good first step toward mitigating the harm of any potential cyber attack, including those not directly facilitated by Chinese hardware. This cyber-resiliency planning is therefore a strong general recommendation for protecting Americans from future threats. 

Conclusion

We have presented the issue of international semiconductor competition along three major axes: increasing production outside of China, containing an oversupply of Chinese semiconductors, and mitigating the risks of remaining Chinese chips in the U.S. market. We have proposed three slates of policies, with each corresponding to one of the three challenges:

Boosting non-China semiconductor production 

Containing Chinese exports 

Mitigating the threat of chips in the U.S. market 

We hope that this contribution will advance future discussions on the semiconductor trade and make a measurable impact on bolstering U.S. national security. 

Using Targeted Industrial Policy to Address National Security Implications of Chinese Chips

Last year the Federation of American Scientists (FAS), Jordan Schneider (of ChinaTalk), Chris Miller (author of Chip War) and Noah Smith (of Noahpinion) hosted a call for ideas to address the U.S. chip shortage and Chinese competition. A handful of ideas were selected based on each idea's feasibility and bipartisan nature. This memo is one of them.

In recent years, China has heavily subsidized its legacy chip manufacturing capabilities. Although U.S. sanctions have restricted China’s access to and ability to develop advanced AI chips, they have done nothing to undermine China’s production of “legacy chips,” which are semiconductors built on process nodes of 28nm or larger. It is important to clarify that a “22nm,” “20nm,” “28nm,” or “32nm” lithography process is simply a commercial name for a technology generation and has no direct correlation to actual manufacturing specifications such as gate length or half-pitch. Furthermore, it is important to note that different firms have different specifications when it comes to manufacturing. For instance, Intel’s 22nm lithography process uses a 193nm-wavelength argon fluoride (ArF) laser with a 90nm gate pitch and a 34nm fin height. These specifications vary between fab operators such as Intel and TSMC. The prominence of these chips makes them a critical technological component in applications as diverse as medical devices, fighter jets, computers, and industrial equipment. Since 2014, state-run funds in China have invested more than $40 billion into legacy chip production to meet the country’s goal of 70% chip sufficiency by 2030. Chinese legacy chip dominance—made possible only through the government’s extensive and unfair support—will undermine the position of Western firms and render them less competitive against distorted market dynamics.

Challenge and Opportunity

Growing Chinese capacity and “dumping” will deprive non-Chinese chipmakers of substantial revenue, making it more difficult for these firms to maintain a comparative advantage. China’s profligate industrial policy has damaged global trade equity and threatens to create an asymmetrical market. The ramifications of this economic problem will be felt most in America’s national security rather than by consumers, who will benefit from the low costs of Chinese dumping programs until a hostile monopoly is established. Investors—anticipating an impending global supply glut—are already encouraging U.S. firms to reduce capital expenditures by canceling semiconductor fabs, undermining the nation’s supply chain and self-sufficiency. In some cases, firms have decided to cease manufacturing particular types of chips outright due to profitability concerns and pricing pressures. Granted, the design of chip markets is intentionally opaque, so pricing data is insufficient to fully define the extent of this phenomenon; however, instances such as Taiwan’s withdrawal from certain chip segments shortly after a price war between China and its competitors in late 2023 indicate the severity of this issue. If they continue, similar price disputes could severely undermine the competitiveness of non-Chinese firms, especially considering that Chinese firms are not subject to the same fiscal constraints as their unsubsidized counterparts. In an industry with such high fixed costs, the Chinese state’s subsidization gives these firms a great advantage and imperils U.S. competitiveness and national security.

Were the U.S. to engage in armed conflict with China, reduced industrial capacity could quickly impede the military’s ability to manufacture weapons and other materiel. Critical supply chain disruptions during the COVID-19 pandemic illustrate how the absence of a single chip can hold hostage entire manufacturing processes; if China gains absolute legacy chip manufacturing dominance, these concerns would be further amplified as Chinese firms become able to outright deny American access to critical chips, impose harsh costs through price hikes, or extract diplomatic compromises and quid pro quos. Furthermore, decreased Chinese reliance on Taiwanese semiconductors reduces China’s economic incentive to pursue a diplomatic solution in the Taiwan Strait—making armed conflict in the region more likely. This weakened posture endangers global norms and the balance of power in Asia—undermining American economic and military hegemony in the region.  

China’s legacy chip manufacturing is fundamentally an economic problem with national security consequences. The state ought to intervene in the economy only when markets do not operate efficiently and in cases where the conduct of foreign adversaries creates market distortion. While the authors of this brief do not support carte blanche industrial policy to advance the position of American firms, it is their belief that the Chinese government’s efforts to promote legacy chip manufacturing warrant American intervention to ameliorate the harms it has created. U.S. regulators have forced American companies to grapple with the sourcing problems surrounding Chinese chips; however, the issue with chip control is largely epistemic. It is not clear which firms do and do not use Chinese chips, and even if U.S. regulators knew, there is little political appetite to ban them, as corporations would then have to pass higher costs onto consumers and exacerbate headline inflation. Traditional policy tools for achieving economic objectives—such as sanctions—are therefore largely ineffectual in this circumstance. More innovative solutions are required.  

If its government fully commits to the policy, there is little U.S. domestic or foreign policy can do to prevent China from developing chip independence. While American firms can be incentivized to outcompete their Chinese counterparts, America cannot usurp Chinese political directives to source chips locally. This is true because China lacks the political restraints of Western countries in financially incentivizing production, but also because in the past—under lighter sanctions regimes—China’s Semiconductor Manufacturing International Corporation (SMIC) acquired multiple Advanced Semiconductor Materials Lithography (ASML) Deep Ultraviolet (DUV) machines. Consequently, any policy that seeks to mitigate the perverse impact of Chinese dominance of the legacy chip market must a) boost the competitiveness of American and allied firms in “third markets” such as Indonesia, Vietnam, and Brazil and b) de-risk America’s supply chain from the market distortions and overreliance that Chinese policies have created. China’s growing global share of legacy chip manufacturing threatens to recreate the global chip landscape in a way that will displace U.S. commercial and security interests. Consequently, the United States must undertake both defensive and offensive measures to ensure a coordinated response to Chinese disruption.  

Plan of Action

Considering the above, we propose the United States enact a policy mutually predicated on innovative technological reform and targeted industrial policy to onshore manufacturing capabilities. 

Recommendation 1. Weaponizing electronic design automation 

Policymakers must understand that from a lithography perspective, the United States controls all essential technologies when it comes to the design and manufacturing of integrated circuits. This is a critically overlooked dimension in contemporary policy debates because electronic design automation (EDA) software closes the gap between high-level chip design in software and the lithography system itself. Good design simulates a proposed circuit before manufacturing, plans large integrated circuits (ICs) by “bundling” small subcomponents together, and verifies that the design is connected correctly and will deliver the required performance. Although often overlooked, the photolithography process, as well as the steps required before it, is as complex as designing the chip itself. 

No profit-maximizing manufacturer would print a chip “as designed” because it would suffer certain distortions and degradations throughout the printing process; therefore, EDA software is imperative to mitigate imperfections throughout the photolithography process. In much the same way that software within a home printer automatically screens for paper material (printer paper vs. glossy photo paper) and automatically adjusts the mixture of solvent, resins, and additives to print properly, EDA software learns design kinks and responds dynamically. In the absence of such software, the yield of usable chips would be much lower, making these products less commercially viable. Contemporary public policy discourse focuses only on chips as a commodified product, without recognizing the software ecosystem that is imperative in their design and use. 

Today, there exist only two major American suppliers of EDA software for semiconductor manufacturing: Synopsys and Cadence Design Systems. This reality presents a great opportunity for the United States to assert dominance in the legacy chips space. In hosting all EDA in a U.S.-based cloud—for instance, a data center located in Las Vegas or another secure location—America can force China to purchase the computing power needed for simulation and verification for each chip it designs. This policy would mandate Chinese reliance on U.S. cloud services to run electromagnetic simulations and validate chip designs. Under this proposal, China would only be able to use the latest EDA software if such software is hosted in the U.S., allowing American firms to a) cut off access at will, rendering the technology useless, and b) gain insight into homegrown Chinese designs built on this platform. Since such software would be hosted on a U.S.-based cloud, Chinese users would not download the software, which would greatly mitigate the risk of foreign hacking or intellectual property theft. While the United States cannot control chips outright considering Chinese production, it can control where they are integrated. A machine without instructions is inoperable, and the United States can make China’s semiconductors obsolete.  
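The following is a minimal sketch, under stated assumptions, of the access-control logic such a cloud-hosted EDA service could expose: simulation jobs run only server-side, an account's access can be revoked by policy, and job metadata is retained for audit. All names, fields, and values are hypothetical and do not describe any existing service.

```python
# Illustrative sketch (hypothetical gateway, not an existing product): a
# cloud-hosted EDA job front end in which access is revocable and auditable.
from datetime import datetime, timezone

REVOKED_ACCOUNTS: set[str] = set()        # accounts cut off by policy decision
AUDIT_LOG: list[dict] = []                # record of designs submitted to the platform

def submit_simulation_job(account_id: str, design_hash: str, node_nm: int) -> str:
    """Queue a server-side simulation; the design itself never leaves the cloud."""
    if account_id in REVOKED_ACCOUNTS:
        raise PermissionError("EDA access revoked for this account")
    AUDIT_LOG.append({
        "account": account_id,
        "design": design_hash,             # fingerprint only, retained for audit
        "node_nm": node_nm,
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    })
    return f"job-{len(AUDIT_LOG):06d}"

job_id = submit_simulation_job("customer-001", "sha256:example", 28)
print(job_id)                              # job-000001
REVOKED_ACCOUNTS.add("customer-001")       # access can be withdrawn at will
```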

The emergence of machine learning has introduced substantial design innovation in older lithography technologies. For instance, Synopsys has used new technologies to discern the optimal routes for the wires that link chip circuits, which can factor in all the environmental variables to simulate the patterns a photomask design would project throughout the lithography process. While the 22nm process is not cutting edge, it is legacy only in the sense of its architecture. Advancements in hardware design and software illustrate the dynamism of this facet of the semiconductor supply chain. In extraordinary circumstances, the United States could also curtail usage of such software in the event of a total trade war. Weaponizing this proprietary software could compel China to divulge all source code for auditing purposes, since hardware cannot work without a software element.

The United States must also utilize its allied partnerships to restrict critical replacement components from enabling injurious competition from the Chinese. Software notwithstanding, China currently has the capability to produce 14nm nodes because SMIC acquired multiple ASML DUV machines under lighter Department of Commerce restrictions; however, SMIC heavily relies on chip-making equipment imported from the Netherlands and Japan. While the United States cannot alter the fact of possession, it has the capacity to take limited action against the realization of these tools’ potential by restricting China’s ability to import the replacement parts needed to service these machines, such as the lenses they require to operate. Only the German firm Zeiss has the capability to produce the lenses that ArF lasers require to focus—illustrating the importance of adopting a regulatory outlook that encompasses all verticals within the supply chain. The utility of controlling critical components is further amplified by the fact that American and European firms have limited efficacy in enforcing copyright laws against Chinese entities. For instance, while different ICs are manufactured at the 22nm node, not all run on a common instruction set such as ARM. However, even if such designs run on a copyrighted instruction set, the United States has no power to enforce domestic copyright law in a Chinese jurisdiction. China’s capability to reverse engineer and replicate Western-designed chips further underscores the importance of controlling 1) the EDA landscape and 2) ancillary components in the chip manufacturing process. This reality presents a tremendous yet overlooked opportunity for the United States to reassert control over China’s legacy chip market.  

Recommendation 2. Targeted industrial policy 

In the policy discourse surrounding semiconductor manufacturing, this paper contends that too much emphasis has been placed on the chips themselves. It is important to note that there are some areas in which the United States is not commercially competitive with China, such as the NAND flash memory space. China’s Yangtze Memory Technologies has become a world leader in flash storage and can now manufacture 232-layer 3D NAND on par with the most sophisticated American and Korean firms, such as Western Digital and Samsung, at a lower cost. However, these shortcomings do not preclude America from asserting dominance over the semiconductor market as a whole by leveraging its strength in dynamic random-access memory (DRAM), bolstering nearshore NAND manufacturing, and developing critical mineral processing capabilities. Both DRAM and NAND are essential components for any computationally integrated technology.  

While the U.S. cannot compete on rote manufacturing prowess because of high labor costs, it would be strategically beneficial to build supply chain redundancies with regard to NAND and rare earth metal processing. China currently processes upwards of 90% of the world’s rare earth metals, which are critical to any type of semiconductor chip. While the U.S. possesses strategic reserves for commodities such as oil, it does not have any meaningful reserve when it comes to rare earth metals—making this a critical national security threat. Should China stop processing rare earth metals for the U.S., the price of any type of semiconductor—in any form factor—would increase dramatically. Furthermore, as a strategic matter, the United States would not have accomplished its national security objectives if it built manufacturing capabilities yet lacked the critical inputs to supply this potential. Therefore, any legacy chips proposal must first establish sufficient rare earth metal processing capabilities or a strategic reserve of these critical resources.  

Furthermore, given the advanced status of U.S. technological manufacturing prowess, it makes little sense to outright onshore legacy chip manufacturing capabilities—especially considering high U.S. costs and the substantial diversion of intellectual capital that such efforts would require. Each manufacturer must develop its own manufacturing process from scratch. A modern fab runs 24/7 and has a complicated workflow, with its own techniques and software when it comes to lithography. For instance, since its technicians and scientists are highly skilled, TSMC no longer focuses on older-generation lithography (e.g., 22nm) because doing so would be unprofitable while it cannot fulfill demand for 3nm and 4nm production. The United States is better off developing its comparative advantage by specializing in cutting-edge chip manufacturing capabilities, as well as research and development initiatives; however, while American expertise remains expensive, America has wholly neglected the potential utility of its southern neighbors in shoring up rare earth metals processing. Developing Latin American metals processing—and legacy chip production—capabilities can mitigate national security threats. Hard drive manufacturers have employed a similar nearshoring approach with great success. 

To address both rare earth metals and onshoring concerns, the United States should pursue an economic integration framework with nations in Latin America’s Southern Cone, targeting a partial (or multi-sectoral) free trade agreement with the Southern Common Market (MERCOSUR) bloc. The United States should pursue this policy along two industry fronts: 1) waiving the Common External Tariff for United States petroleum and other fuel exports, which currently represent the largest import group for Latin American members of the bloc, and 2) simultaneously eliminating all trade barriers on American importation of critical minerals (namely arsenic, germanium, and gallium) that are necessary for legacy chip manufacturing. Enacting such an agreement and committing substantial capital to the project over a long-term time horizon would radically increase semiconductor manufacturing capabilities across all verticals of the supply chain. Two mutually inclusive premises underpin this policy’s efficacy: 

Firstly, building economic interdependence with a bloc of Latin American states (as opposed to a single nation) serves to diversify risk for the United States; each nation provides different sets and volumes of critical minerals and has competing foreign policy agendas. This reduces the capacity of these states to exert meaningful and organized diplomatic pressure on the United States, as supply lines can be swiftly re-adjusted within the bloc. Moreover, MERCOSUR countries are major energy importers, specifically with regard to Bolivian natural gas and American petroleum. Under an energy-friendly U.S. administration, the effects of this policy would be especially pronounced: low petroleum costs enable the U.S. to subtly reassert its geopolitical sway within its regional sphere of influence, notably in light of newly politically friendly Argentinian and Paraguayan governments. China has been struggling to ratify its own trade accords with the bloc given industry vulnerability; this initiative would further undermine its geopolitical influence in the region. Refocusing critical mineral production within this regional geography would decrease American reliance on Chinese production. 

Secondly, nearshoring the semiconductor supply chain would reduce transport costs, decrease American vulnerability to intercontinental disruptions, and mitigate geopolitical reliance on China. Reduced extraction costs in Latin America, minimized transportation expenses, and reduced labor costs in Midwestern and Southern U.S. states in particular enable America to maintain export competitiveness as a supplier to ASEAN’s booming technology industry in adjacent sectors, which indicates that China will not automatically fill market distortions. Furthermore, establishing investment arbitration procedures compliant with the General Agreement on Tariffs and Trade’s Dispute Settlement Understanding should accompany the establishment of transcontinental commerce initiatives, thereby designating the World Trade Organization as the exclusive forum for dispute settlement. 

This policy is necessary to prevent corrupt states from backpedaling on established systems, which has historically impeded corporate involvement in the Southern Cone. This international legal security mechanism serves to assure entrepreneurial inputs that will render cooperation with American enterprises mutually attractive. However, partial free trade accords for primary sector materials are not sufficient to revitalize American industry and shift supply lines. To address the demand side, the exertion of downward pressure on pricing, alongside the reduction of geopolitical risk, should be accompanied by the institution of state-subsidized low-interest loans, with available rate resets for approved legacy chip manufacturers, and a special-tier visa for hired personnel working in legacy chip manufacturing. Considering the sensitive national security interests at stake, the U.S. Federal Contractor Registration ought to employ the same awarding mechanisms and security filtering criteria used for federal arms contracts in its company auditing mechanisms. Under this scheme, vetted and historically capable legacy chip manufacturing firms will be exclusively able to take advantage of significant subventions and exceptional ‘wartime’ loans. Two reasons underpin the need for this martial, yet market-oriented industrial policy. 

Firstly, legacy chip production requires highly specialized labor and carries immensely expensive fixed costs given the nature of the accompanying machinery. Without targeted low-interest loans, the significant capital investment required for upgrading and expanding chip manufacturing facilities would be prohibitively high, potentially eroding the competitiveness of American and allied industries in markets that are heavily saturated with Chinese subsidies. Such mechanisms for increased and cheap liquidity also make it easier to import highly specialized talent from China, Taiwan, Germany, the Netherlands, and elsewhere by offering more competitive compensation packages and playing to the attractiveness of life in the United States. This approach would mimic “Operation Paperclip” in the aftermath of the Second World War, executed on a piecemeal basis at the discretion of approved legacy chip suppliers.  

Secondly, the investment fluidity that accompanies significant amounts of accessible capital serves to reduce stasis in the research and development of sixth-generation automotive, multi-use legacy chips (in both autonomous and semi-autonomous systems). Much of this improvement occurs through trial-and-error processes within state-of-the-art facilities under the long-term commitment of manufacturing, research, and operations teams. 

Acknowledging the strategic importance of centralizing, de-risking, and reducing reliance on foreign suppliers will safeguard the economic stability, national defense capabilities, and innovative flair of the United States, restoring the national will and capacity to produce on American shores. The national security ramifications of Chinese legacy chip manufacturing are predominantly downstream of their economic consequences, particularly vis-à-vis the integrity of American defense manufacturing supply chains. In implementing the aforementioned solutions and moving chip manufacturing to closer and friendlier locales, American firms can be well positioned to compete globally against Chinese counterparts and supply the U.S. military with ample chips in the event of armed conflict.  

In 2023, the Wall Street Journal exposed the fragility of American supply chain resilience when it profiled how one manufacturing accident took offline 100% of the United States’ production capability for black powder, a critical component of mortar shells, artillery rounds, and Tomahawk missiles. This incident illustrates how critical a consolidated supply chain can be for national security and the importance of mitigating overreliance on China for critical components. As firms desire lower prices for their chips, ensuring adequate capacity is a significant component of a successful strategy to address China’s growing global share of legacy chip manufacturing. However, there are additional national security concerns for legacy chip manufacturing that supersede its economic significance; mitigating supply chain vulnerabilities is among the most consequential of these considerations.  

Lastly, when there are substantial national security objectives at stake, the state is justified in acting independently of economic considerations; markets are sustained only by the imposition of binding and common rules. Some have argued that the possibility of cyber sabotage and espionage through military applications of Chinese chip technology warrants accelerating the timeline of procurement restrictions. Section 5949 of the National Defense Authorization Act for Fiscal Year 2023 prohibits the procurement of China-sourced chips from 2027 onwards. Furthermore, the Federal Communications Commission has the power to restrict China-linked semiconductors in U.S. critical infrastructure under the Secure Networks Act, and the U.S. Department of Commerce reserves the right to restrict China-sourced semiconductors if they pose a threat to critical communications and information technology infrastructure.  

However, Matt Blaze’s 1994 article “Protocol Failure in the Escrowed Encryption Standard” exposed the shortcomings of supposed hardware backdoors, such as the NSA’s “clipper chip” that they designed in the 1990s to surveil users. In the absence of functional software, a Chinese-designed hardware backdoor into sensitive applications could not function. This scenario would be much like a printer trying to operate without an ink cartridge. Therefore, instead of outright banning inexpensive Chinese chips and putting American firms at a competitive disadvantage, the federal government should require Chinese firms to release source code to firmware and supporting software for the chips they sell to Western companies. This would allow these technologies to be independently built and verified without undermining the competitive position of American industry. The U.S. imposed sanctions against Huawei in 2019 on suspicion of the potential espionage risks that reliance on Chinese hardware poses. While tighter regulation of Chinese semiconductors in sensitive areas seems to be a natural and pragmatic extension of this logic, it is unnecessary and undermines American dynamism.

Conclusion

Considering China’s growing global share of legacy chip manufacturing as a predominantly economic problem with substantial national security consequences, the American foreign policy establishment ought to pursue 1) a new technological outlook that exploits all facets of the integrated chip supply chain—including EDA software and allied replacement component suppliers—and 2) a partial free-trade agreement with MERCOSUR to further industrial policy objectives.  

To curtail Chinese legacy chip dominance, the United States should weaponize its monopoly on electronic design automation software. By effectively forcing Chinese firms to purchase computing services from a U.S.-based cloud, American EDA software firms can audit and monitor Chinese innovations while reserving the ability to deny them service during armed conflict. Restricting allied firms' ability to supply Chinese manufacturers with ancillary components can likewise slow the pace of Chinese legacy chip ascendance.

Furthermore, although China no longer relies on the United States or allied countries for NAND manufacturing, the United States and its allies maintain DRAM superiority. The United States must leverage these capabilities to maintain Chinese reliance on American DRAM and sustain its competitive edge, while considering restricting the export of this technology for Chinese defense applications under extraordinary circumstances. Simultaneously, efforts to nearshore NAND technologies in South America can slow the pace of Chinese legacy chip ascendance, especially if implemented alongside a strategic decision to reduce reliance on Chinese rare earth metals processing.

To preserve national security and reduce costs by nearshoring critical mineral inputs, the United States should adopt a market-oriented industrial policy of rate-reset, state-subsidized low-interest loans for vetted legacy chip manufacturing firms. The synergy among greater competitiveness, capital solvency, and de-risked supply chains would enable U.S. firms to compete against Chinese counterparts in critical "third markets" and reduce the supply chain vulnerabilities that undermine national security. As subsidy-induced Chinese market distortions weigh less on the commercial landscape, the integrity of American defense capabilities will simultaneously improve, especially if federal agencies move to further insulate critical U.S. infrastructure against potential cyber espionage.

An “Open Foundational” Chip Design Standard and Buyers’ Group to Create a Strategic Microelectronics Reserve

Last year the Federation of American Scientists (FAS), Jordan Schneider (of ChinaTalk), Chris Miller (author of Chip War), and Noah Smith (of Noahpinion) hosted a call for ideas to address the U.S. chip shortage and Chinese competition. A handful of ideas were selected based on their feasibility and bipartisan appeal. This memo is one of them.

Semiconductors are not one industry, but thousands. They range from ultra-high-value advanced logic chips like H100s to bulk discrete electronic components and basic integrated circuits (ICs). Leading-edge chips are advanced logic, memory, and interconnect devices manufactured in cutting-edge facilities requiring production processes of awe-inspiring precision. Leading-edge chips confer differential capabilities, and "advanced process nodes" enable the highest-performance computation and the most compact and energy-efficient devices. This bleeding-edge performance derives from the efficiencies enabled by more densely packed circuit elements in a computer chip: smaller transistors can run at lower voltages, and more densely packed ones can switch faster.
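The intuition behind those gains can be made concrete with the classic Dennard scaling relations, shown below as a first-order, textbook idealization rather than a description of any particular node; modern nodes deviate substantially from these relations as leakage grows.

```latex
% Idealized Dennard scaling: shrink all linear dimensions and the supply
% voltage by a factor k > 1 per node generation. Then, to first order:
\begin{align*}
  \text{transistor area} \propto 1/k^{2}
    &\;\Rightarrow\; \text{density} \propto k^{2} \\
  V \propto 1/k,\quad C \propto 1/k
    &\;\Rightarrow\; \text{gate delay} \propto 1/k \;\;(\text{clock } f \propto k) \\
  P_{\text{per transistor}} \propto C V^{2} f \propto 1/k^{2}
    &\;\Rightarrow\; \text{power density} \approx \text{constant}
\end{align*}
```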

Devices manufactured with older process nodes, 65nm and above, form the bulk, by volume, of the devices we use. These include discrete electrical components like diodes or transistors, power semiconductors, and low-value integrated circuits such as bulk microcontrollers (MCUs). I term these inexpensive logic chips, such as MCUs, memory controllers, and clock setters, "commodity ICs." While the keystone components in advanced applications are manufactured at the leading edge, older nodes are the table stakes of electrical systems. These devices supply power, switch voltages, transform currents, command actuators, and sense the environment. We will collectively term them foundational chips, as they provide the platform upon which all electronics rest. And their supply can be a point of failure: the automotive MCU shortage provides a bitter lesson that even the humblest device can throttle production.

Foundational devices themselves do not typically enable differentiating capabilities. In many applications, such as computing or automotive, they simply enable basic functions. These devices are low-cost, low-margin goods made with a comparatively simpler production process. Unfortunately, a straightforward supply does not equate to a secure one. Foundational chips are manufactured by a small number of firms concentrated in China, in part due to long-running industrial policy efforts by the Chinese government and significant production subsidies. The CHIPS and Science Act was mainly about innovation and international competitiveness: reshoring a significant fraction of leading-edge production to the United States in the hope of returning valuable communities of engineering practice (Fuchs & Kirchain, 2010). While these policy goals are vital, foundational chip supply represents a different challenge and must be addressed by other interventions.

The main problem posed by the existing foundational chip supply is resilience. They are manufactured by a few geographically clustered firms and are thus vulnerable to disruption, from geopolitical conflicts (e.g. export controls on these devices) or more quotidian outages such as natural disasters or shipping disruptions. 

There is also concern that foreign governments may install hardware backdoors in chips manufactured within their borders, enabling them to deactivate the deployed stock of chips. While this is a meaningful security consideration, it is less applicable to foundational devices, as their low complexity makes such backdoors more challenging to implement. A DoD analysis found mask and wafer production to be the manufacturing process steps most resilient to adversarial interference (Coleman, 2023, p. 36). There already exist "trusted foundry" electronics manufacturers for critical U.S. defense applications concerned about confidentiality; the policy interventions proposed here seek instead to address vulnerability to conventional supply disruption. This report will first outline the technical and economic features of foundational chip supply that are barriers to a resilient supply, and then propose policy to address these barriers.

Challenge and Opportunity

Technical characteristics of the manufacture and end use of foundational microelectronics make their supply especially vulnerable to disruption. Commodity logic ICs such as MCUs or memory controllers vary in their clock speed, architecture, number of pins, number of inputs/outputs (I/O), mapping of I/O to pins, package material, circuit board connection, and other design features. Some of these features, like operating temperature range, are key drivers of performance in particular applications. However, most custom features in commodity ICs do not confer differential capability or performance advantages to the final product; the pin count of a microcontroller does not determine the safety or performance of a vehicle. Design lock-in combined with this feature variability dramatically reduces the short-run substitutability of these devices: while MCUs exist in a commodity-style market, they are not interchangeable without significant redesign effort. This phenomenon, designs based on specialized components that are not required by the application, is known as over-specification (Smith & Eggert, 2018). It means that while there are numerous semiconductor manufacturing firms, in practice there may be only a single supplier for a specified foundational component.
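To make the substitutability problem concrete, the toy sketch below models how a buyers' group might separate the features an application genuinely requires from incidental ones locked in by an over-specified bill of materials. All part names, attributes, and numbers are hypothetical illustrations, not real components.

```python
# Toy model of over-specification: a design needs only a few attributes, but an
# over-specified bill of materials also pins down incidental ones, shrinking the
# pool of acceptable substitutes. All parts and values are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class MCU:
    name: str
    pins: int
    io_count: int
    temp_range_c: tuple  # (min, max) operating temperature in Celsius
    clock_mhz: int

def meets_real_requirements(candidate: MCU, required: dict) -> bool:
    """A candidate works if it satisfies the features the application truly needs."""
    return (candidate.io_count >= required["min_io"]
            and candidate.temp_range_c[0] <= required["min_temp_c"]
            and candidate.temp_range_c[1] >= required["max_temp_c"]
            and candidate.clock_mhz >= required["min_clock_mhz"])

catalog = [
    MCU("vendor_a_8pin", 8, 6, (-40, 105), 48),
    MCU("vendor_b_12pin", 12, 10, (-40, 125), 64),
    MCU("vendor_c_12pin", 12, 10, (-20, 85), 64),
]

# Features the (hypothetical) automotive application actually needs...
needs = {"min_io": 6, "min_temp_c": -40, "max_temp_c": 105, "min_clock_mhz": 32}
# ...versus an over-specified BOM line that also fixes an exact pin count.
over_specified_pins = 12

functional_substitutes = [m for m in catalog if meets_real_requirements(m, needs)]
locked_in_substitutes = [m for m in functional_substitutes
                         if m.pins == over_specified_pins]
print(len(functional_substitutes), "parts meet the real requirements")    # 2
print(len(locked_in_substitutes), "parts satisfy the over-specified BOM")  # 1
```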

These over-specification risks are exacerbated by a lack of value chain visibility. Firms possess little knowledge of their tier 2+ suppliers, and because this knowledge gap repeats at every tier, even a firm that secures robust access to the components it uses directly may still be exposed to disruption through its suppliers. Value chains are only as strong as their weakest link. Physical characteristics of foundational devices also uncouple them from the leading edge. Many commodity ICs simply do not benefit from classical feature shrinkage; bulk MCUs or low-end PMICs do not improve in performance with transistor density, as their outputs are essentially fixed. Analog devices experience performance penalties at too small a feature scale, with physically larger transistors able to handle higher voltages and produce lower parasitic capacitance. Manufacturing commodity logic ICs on leading-edge logic fabs would be prohibitively expensive and would be actively detrimental to analog device performance. These factors (design over-specification, supply chain opacity, and insulation from leading-edge production) combine to functionally narrow the already limited supply of legacy chips.

Industrial dynamics impede this supply from becoming more robust without policy intervention. Foundational chips, whether power devices or memory controllers, are low-margin, commodity-style products sold in volume. The extreme capital intensity of the industry combined with the low margins on these devices makes supply expansion unattractive for producers, with short-term capital discipline a common argument against supply buildout (Connatser, 2024). The premium firms pay for performance drives significant investment in leading-edge design and production capacity as firms compete for that demand. The commodity environment of foundational devices, in contrast, is challenging to pencil out, as even trailing-edge fabs are highly capital-intensive (Reinhardt, 2022). Chinese production subsidies also impede the expansion of foundational fabs, as they further narrow already low margins. Semiconductor demand is historically cyclical, and producers do not make investment decisions based on short-run demand signals. These factors make foundational device supply challenging to expand: firms must manufacture commodity-style products in capital-intensive facilities, competing with subsidized producers, to meet widely varying demand. Finally, foundational chip supply resilience is a classic positive-externality good: no individual firm captures all, or even most, of the benefit of a more robust supply ecosystem.

Plan of Action

To secure the supply of foundational chips, this memo recommends the development of an "Open Foundational" design standard and buyers' group. One participant in that buyers' group will be the U.S. federal government, which would establish a strategic microelectronics reserve to ensure access to critical chips. This reserve would be initially stocked through a multi-year advance market commitment for Open Foundational devices.

The Foundational Standard would be set by a voluntary consortium of microelectronics users in critical sectors, inspired by the Open Compute Project. It would ideally include firms from critical sectors such as enterprise computing, automotive manufacturing, communications infrastructure, and others. The group would initially convene to identify a set of foundational devices necessary to their sectors (e.g., system-architecture commodity ICs and power devices for computing) and to identify design features that do not significantly impact performance and thus could be standardized. From these, a design standard could be developed. Firms are typically locked to existing devices for their current designs; one cannot place a 12-pin MCU into a board built for 8. Steering committee firms will thus be asked to commit some fraction of future designs to use Open Foundational microelectronics, ideally on a ramping-up basis. The goal of the standard is not to mandate away valuable features; unique application needs should still be met by specialized devices, such as radiation-hardened components in satellites. By adopting a standard platform of commodity chips in future designs, the buyers' group would represent demand of sufficient scale to motivate investment, and supply would be more robust to disruptions once mature.

Government should adopt the standard where feasible, to build greater resilience in critical systems if nothing else. This should be accompanied by a diplomatic effort to have key democratic allies partner in adopting these design practices in their defense applications. The Foundational Standard should seek geographic diversity in suppliers, as manufacturing concentrated anywhere represents a point of failure. The standard also allows firms to de-risk their suppliers as well as themselves: they can stipulate in contracts that their tier-one suppliers adopt the Foundational Standard in their designs, and OEMs who do so can market the associated resilience advantage.

Having developed the open standard through the buyers' group, Congress should authorize the Department of Commerce to purchase and maintain a strategic microelectronics reserve (SMR). Inspired by the Strategic Petroleum Reserve, the microelectronics reserve is intended to provide the backstop foundational hardware for key government and societal operations during a crisis. The composition of the SMR will likely evolve as technologies and applications develop, but at launch, the purchasing authority should commit to a long-term, high-volume purchase of Foundational Standard devices, a policy structure known as an advance market commitment.

Advance market commitments are effective tools to develop supply when there is initial demand uncertainty, clear product specification, and a requirement for market demand to mature (Ransohoff, 2024). The Foundational Standard provides the product specification, and the advance government commitment provides demand for a duration that should exceed both the product development and fab construction lifecycles, on the order of five years or more. This demand should be steady, with regular annual purchases at scale, ensuring producers see consistent demand through the ebbs and flows of a volatile industry. If these efforts are successful, the U.S. government will cultivate a more robust and resilient supply ecosystem both for its own core services and for firms and citizens. The SMR could also serve as a backstop when supply fluctuations do occur, as the Strategic Petroleum Reserve does.

The goal of the SMR is not to fully substitute for existing stockpiling efforts, either by firms or by the government for defense applications. Through the expanded supply base for foundational chips enabled by the SMR, and through the increase in substitutability driven by the Foundational Standard, users can concentrate their stockpiling efforts on the chips that confer differentiated capabilities. As resources are concentrated in more application-specific chips, stockpiling becomes more efficient, enabling more production for the same investment. In the long run, the SMR should likely diversify to include more advanced components such as high-capacity memory and field-programmable processors. This would ensure government access to core computational capabilities in a disaster or conflict scenario. But as all systems are built on a foundation, the SMR should begin with Foundational Standard devices.

There are potential risks to this approach. The most significant is that this model of foundational chips may not accurately reflect physical reality. Interfirm cooperation in setting and adhering to the standard is conditional on these devices not determining performance. If firms perceive foundational chips as providing a competitive advantage to their systems or products, they shall not crucify capability on a cross of standards. Alternatively, each sector may have a basket of foundational devices as we describe, but with little to no overlap sector-to-sector. In this case, the sectors representing the largest demand, such as enterprise computing, may be able to develop their own standards, but without resilience spillovers into other applications. These scenarios should be identifiable early in the standard-setting process, before significant physical investment is made. In such cases, the government should explore using fab lines in the national prototyping facility to flexibly manufacture a variety of foundational chips when needed, by developing adaptive production lines and processes. This functionally shifts the policy goal up the value chain, achieving resilience through flexible manufacture of devices rather than flexible end use.

Value chains may be so opaque that the buyers' group fails to identify a vulnerable chip. Potential mitigation strategies include the Department of Commerce developing an office of supply mapping and applying a tax penalty to firms that fail to report component flows. Existing subsidized foundational chip supply from China may make virtually any greenfield production uncompetitive; in that case, trade restrictions or a counter-subsidy may be required until the network effects of the Foundational Standard enable long-term viability. We do not want the Foundational Standard to lock in technological stagnation; in fact, quite the opposite. Accordingly, there should be a periodic and iterative review of the devices within the standard and their features. The problems of legacy chips are distinct from those at the technical frontier.

Foundational chips are necessary but not sufficient for modern electronic systems. It was not the hundreds-of-dollars system-on-a-chip components that brought automotive production to a halt, but the sixteen-cent microcontroller. The technical advances fueled by leading-edge nodes are vital to our long-term competitiveness, but they too rely on legacy devices. We must, in parallel, fortify the foundation on which our security and dynamism rest.

Blank Checks for Black Boxes: Bring AI Governance to Competitive Grants

The misuse of AI in federally funded projects can endanger public safety and waste taxpayer dollars.

The Trump administration has a pivotal opportunity to spot wasteful spending, promote public trust in AI, and safeguard Americans from unchecked AI decisions. To tackle AI risks in grant spending, grant-making agencies should adopt trustworthy AI practices in their grant competitions and start enforcing them against reckless grantees.

Federal AI spending could soon skyrocket. One ambitious legislative plan from a Senate AI Working Group calls for doubling non-defense AI spending to $32 billion a year by 2026. That funding would grow AI across R&D, cybersecurity, testing infrastructure, and small business support. 

Yet as federal AI investment accelerates, safeguards against snake oil lag behind. Grants can be wasted on AI that doesn’t work. Grants can pay for untested AI with unknown risks. Grants can blur the lines of who is accountable for fixing AI’s mistakes. And grants offer little recourse to those affected by an AI system’s flawed decisions. Such failures risk exacerbating public distrust of AI, discouraging possible beneficial uses. 

Oversight for federal grant spending is lacking.

Watchdogs, meanwhile, play a losing game, chasing errant programs one by one only after harm has been done. Luckily, momentum is building for reform. Policymakers recognize that investing in untrustworthy AI erodes public trust and stifles genuine innovation. Steps policymakers could take include setting clear AI quality standards, training grant judges, monitoring grantees' AI usage, and evaluating outcomes to ensure projects achieve their potential. By establishing oversight practices, agencies can foster high-potential projects for economic competitiveness while protecting the public from harm.

Challenge and Opportunity

Poor AI Oversight Jeopardizes Innovation and Civil Rights

The U.S. government advances public goals in areas like healthcare, research, and social programs by providing various types of federal assistance. This funding can go to state and local governments or directly to organizations, nonprofits, and individuals. When federal agencies award grants, they typically do so expecting less routine involvement than they would with other funding mechanisms, for example cooperative agreements. Not all federal grants look the same—agencies administer mandatory grants, where the authorizing statute determines who receives funding, and competitive grants (or “discretionary grants”), where the agency selects award winners. In competitive grants, agencies have more flexibility to set program-specific conditions and award criteria, which opens opportunities for policymakers to structure how best to direct dollars to innovative projects and mitigate emerging risks. 

These competitive grants fall short on AI oversight. Programmatic policy is set in cross-cutting laws, agency-wide policies, and grant-specific rules; a lack of AI oversight mars all three. To date, no government-wide AI regulation extends to AI grantmaking. Even when President Biden’s 2023 AI Executive Order directed agencies to implement responsible AI practices, the order’s implementing policies exempted grant spending (see footnote 25) entirely from the new safeguards. In this vacuum, the 26 grantmaking agencies are on their own to set agency-wide policies. Few have. Agencies can also set AI rules just for specific funding opportunities. They do not. In fact, in a review of a large set of agency discretionary grant programs, only a handful of funding notices announced a standard for AI quality in a proposed program. (See: One Bad NOFO?) The net result? A policy and implementation gap for the use of AI in grant-funded programs.

Funding mistakes damage agency credibility, stifle innovation, and undermine the support that financial assistance aims to provide to people and communities. Recent controversies highlight how today's lax measures—particularly in setting clear rules for federal financial assistance, monitoring how funds are used, and responding to public feedback—have led to inefficient and rights-trampling results. In just the last few years, some of the problems we have seen include:

Any grant can attract controversy, and these grants are no exception. But the cases above spotlight transparency, monitoring, and participation deficits—the same kinds of AI oversight problems weakening trust in government that policymakers aim to fix in other contexts.

Smart spending depends on careful planning. Without it, programs may struggle to drive innovation or end up funding AI that infringes on people's rights. OMB, agency Inspectors General, and grant managers will need guidance to evaluate what money is going toward AI and how to implement effective oversight. Government will face tradeoffs and challenges in promoting AI innovation in federal grants, particularly due to:

1) The AI Screening Problem. When reviewing applications, agencies might fail to screen out candidates that exaggerate their AI capabilities—or fail to report bunk AI use altogether. Grantmaking requires calculated risks on ideas that might fail. But grant judges who are not experts in AI can make bad bets. Applicants will pitch AI solutions directly to these non-experts, and grant winners, regardless of their original proposal, will likely purchase and deploy AI, creating additional oversight challenges. 

2) The grant-procurement divide. When planning a grant, agencies might set overly burdensome restrictions that dissuade qualified applicants from applying or otherwise take up too much time, getting in the way of grant goals. Grants are meant to be hands-off; fostering breakthroughs while preventing negligence will be a challenging needle to thread.

3) Limited agency capacity. Agencies may be unequipped to monitor grant recipients' use of AI. After awarding funding, agencies can miss when vetted AI breaks down on launch. While agencies audit grantees, those audits typically focus on fraud and financial missteps. In some cases, agencies may not be measuring grantee performance well at all (slides 12-13). Yet regular monitoring, similar to the oversight used in procurement, will be necessary to catch emergent problems that affect AI outcomes. Enforcement, too, could be a cause for concern; agencies claw back funds for procedural issues, but "almost never withhold federal funds when grantees are out of compliance with the substantive requirements of their grant statutes." Even as the funding agency steps away, an inaccurate AI system can persist, embedding risks over a longer period of time.

Plan of Action

Recommendation 1. OMB and agencies should bake in pre-award scrutiny through uniform requirements and clearer guidelines

Recommendation 2. OMB and grant marketplaces should coordinate information sharing between agencies

To support review of AI-related grants, OMB and grantmaking agency staff should pool knowledge on AI’s tricky legal, policy, and technical matters. 

Recommendation 3. Agencies should embrace targeted hiring and talent exchanges for grant review boards

Agencies should have experts in a given AI topic judging grant competitions. To do so requires overcoming talent acquisition challenges. To that end:

Recommendation 4. Agencies should step up post-award monitoring and enforcement

You can’t improve what you don’t measure—especially when it comes to AI. Quantifying, documenting, and enforcing against careless AI uses can be a new task for grantmaking agencies.  Incident reporting will improve the chances that existing cross-cutting regulations, including civil rights laws, can reel back AI gone awry. 

Recommendation 5. Agencies should encourage and fund efforts to investigate and measure AI harms 

Conclusion

Little limits how grant winners can spend federal dollars on AI. With the government poised to massively expand its spending on AI, that should change. 

The federal failure to oversee AI use in grants erodes public trust, civil rights, effective service delivery, and the promise of government-backed innovation. Congressional efforts to remedy these problems, such as starting probes and drafting letters, are important oversight measures, but they come only after the damage is done.

Both the Trump and Biden administrations have recognized that AI is exceptional and needs exceptional scrutiny. Many of the lessons learned from scrutinizing federal agency AI procurement apply to grant competitions. Today’s confluence of public will, interest, and urgency is a rare opportunity to widen the aperture of AI governance to include grantmaking.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.


Frequently Asked Questions
What authorities allow agencies to run grant competitions?

Agencies' enabling statutes are often the authority for grant competitions. The statutory language typically leaves it to agencies to set further specific policies for the competition. Additionally, laws like the DATA Act and the Federal Grant and Cooperative Agreement Act offer definitions and guidance to agencies on the use of federal funds.

What kinds of steps do agencies take in pre-award funding?

Agencies already conduct a great deal of pre-award planning to align grantmaking with Executive Orders. For example, in one survey of grantmakers, a little over half of respondents updated their pre-award processes, such as applications and organization information, to comply with an Executive Order. Grantmakers aligning grant planning with the Trump administration’s future Executive Orders will likely follow similar steps.

Who receives federal grant funding for the development and use of AI?

A wide range of states, local governments, companies, and individuals receive grant competition funds. Spending records, available on USASpending.gov, give some insight into where grant funding goes, though these records, too, can be incomplete.

Fighting Fakes and Liars’ Dividends: We Need To Build a National Digital Content Authentication Technologies Research Ecosystem

The U.S. faces mounting challenges posed by increasingly sophisticated synthetic content: digital media (images, audio, video, and text) that is produced or manipulated, increasingly by generative artificial intelligence (AI). Already, there has been a proliferation in the abuse of generative AI to weaponize synthetic content for harmful purposes, such as financial fraud, political deepfakes, and the nonconsensual creation of intimate materials featuring adults or children. As people become less able to distinguish between what is real and what is fake, it has become easier than ever to be misled by synthetic content, whether by accident or through malicious intent. This makes advancing alternative countermeasures, such as technical solutions, more vital than ever before. To address the growing risks arising from synthetic content misuse, the National Institute of Standards and Technology (NIST) should take the following steps to create and cultivate a robust digital content authentication technologies research ecosystem: 1) establish dedicated university-led national research centers, 2) develop a national synthetic content database, and 3) run and coordinate prize competitions to strengthen technical countermeasures. In turn, these initiatives will require 4) dedicated and sustained Congressional funding. This will enable technical countermeasures to keep closer pace with the rapidly evolving synthetic content threat landscape, maintaining the U.S.'s role as a global leader in responsible, safe, and secure AI.

Challenge and Opportunity

While it is clear that generative AI offers tremendous benefits for scientific research, healthcare, and economic innovation, the technology also poses an accelerating threat to U.S. national interests. Generative AI's ability to produce highly realistic synthetic content has increasingly enabled its harmful abuse and undermined public trust in digital information. Threat actors have already begun to weaponize synthetic content across a widening scope of damaging activities to growing effect. Projected losses from AI-enabled fraud are anticipated to reach $40 billion by 2027, while experts estimate that millions of adults and children have already been targeted by AI-generated or AI-manipulated nonconsensual intimate media or child sexual abuse materials, a figure anticipated to grow rapidly. While the widely feared scenario of manipulative synthetic content compromising the integrity of the 2024 U.S. election did not ultimately materialize, malicious AI-generated content was nonetheless found to have shaped election discourse and bolstered damaging narratives. Equally concerning is the cumulative effect this increasingly widespread abuse is having on the broader erosion of public trust in the authenticity of all digital information. This degradation of trust has not only led to an alarming trend of authentic content being dismissed as "AI-generated," but has also empowered those seeking to discredit the truth, a dynamic known as the "liar's dividend."

From the amusing… to the not-so-benign.

A. In March 2023, a humorous synthetic image of Pope Francis wearing a Balenciaga coat, first posted on Reddit by creator Pablo Xavier, quickly went viral across social media.

B. In May 2023, a synthetic image was deceptively published on X as an authentic photograph of an explosion near the Pentagon. Before being debunked by authorities, the image's widespread circulation online caused significant confusion and even led to a temporary dip in the U.S. stock market.

Research has demonstrated that current generative AI technology can produce synthetic content sufficiently realistic that people can no longer reliably distinguish between AI-generated and authentic media. It is no longer feasible to continue, as we currently do, relying predominantly on human perception to protect against increasingly widespread synthetic content misuse. This new reality only increases the urgency of deploying robust alternative countermeasures to protect the integrity of the information ecosystem. The suite of digital content authentication technologies (DCAT), meaning the techniques, tools, and methods that seek to make the legitimacy of digital media transparent to the observer, offers a promising avenue for addressing this challenge. These technologies encompass a range of solutions, from identification techniques such as machine detection and digital forensics to classification and labeling methods like watermarking or cryptographic signatures. DCAT also encompasses technical approaches that aim to record and preserve the origin of digital media, including content provenance, blockchain, and hashing.
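As a rough illustration of the provenance-and-hashing family of approaches, the sketch below binds metadata to a media file with a content hash and a digital signature, so that later alteration of the file or its claim can be detected. It is a minimal toy, not an implementation of any specific standard such as C2PA; the record fields and names are assumptions made for illustration.

```python
# Minimal sketch: binding provenance metadata to a media file with a SHA-256
# hash and an Ed25519 signature. Illustrative only; real provenance standards
# carry far richer manifests and key-management requirements.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

def make_provenance_record(media_bytes: bytes, creator: str, tool: str,
                           signing_key: Ed25519PrivateKey) -> dict:
    """Hash the media, then sign the hash plus metadata as one claim."""
    claim = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "generator_tool": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": signing_key.sign(payload).hex()}

def verify_provenance_record(media_bytes: bytes, record: dict,
                             public_key: Ed25519PublicKey) -> bool:
    """Re-hash the media and check both the digest and the signature."""
    if hashlib.sha256(media_bytes).hexdigest() != record["claim"]["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # claim was altered or signed by a different key

# Usage: a creator signs an asset at export time; a platform verifies it later.
key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
record = make_provenance_record(image, creator="Example Studio",
                                tool="example-generator-v1", signing_key=key)
assert verify_provenance_record(image, record, key.public_key())
```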

Evolution of Synthetic Media

Screenshot from an AI-manipulated video of President Obama

Published in 2018, this now infamous PSA sought to illustrate the dangers of synthetic content. It shows an AI-manipulated video of President Obama, using narration from a comedy sketch by comedian Jordan Peele.

In 2020, a hobbyist creator employed an open-source generative AI model to 'enhance' the Hollywood CGI version of Princess Leia in the film Rogue One.

The hugely popular TikTok account @deeptomcruise posts parody videos featuring a Tom Cruise imitator face-swapped with the real Tom Cruise's face, including a 2022 video that racked up millions of views.

The 2024 film Here relied extensively on generative AI technology to de-age and face-swap actors in real time as they were being filmed.

Robust DCAT capabilities will be indispensable for defending against the harms posed by synthetic content misuse, as well as bolstering public trust in both information systems and AI development. These technical countermeasures will be critical for alleviating the growing burden on citizens, online platforms, and law enforcement to manually authenticate digital content. Moreover, DCAT will be vital for enforcing emerging legislation, including AI labeling requirements and prohibitions on illegal synthetic content. The importance of developing these capabilities is underscored by the ten bills (see Fig 1) currently under Congressional consideration that, if passed, would require the employment of DCAT-relevant tools, techniques, and methods.

Figure 1. Congressional bills which would require the use of DCAT tools, techniques, and methods.
Bill Name | Senate | House
AI Labelling Act | S.2691 | H.R.6466
Take It Down Act | S.4569 | H.R.8989
DEFIANCE Act | S.3696 | H.R.7569
Preventing Deepfakes of Intimate Images Act | — | H.R.3106
DEEPFAKES Accountability Act | — | H.R.5586
AI Transparency in Elections Act | S.3875 | H.R.8668
Securing Elections From AI Deception Act | — | H.R.8858
Protecting Consumers from Deceptive AI Act | — | H.R.7766
COPIED Act | S.4674 | —
NO FAKES Act | S.4875 | H.R.9551

However, significant challenges remain. DCAT capabilities need to be improved, with many current approaches possessing weaknesses or limitations such as brittleness or security gaps. Moreover, implementing these countermeasures must be carefully managed to avoid unintended consequences in the information ecosystem, such as deploying confusing or ineffective labels to denote the presence of real or fake digital media. As a result, substantial investment in DCAT R&D is needed to develop these technical countermeasures into an effective and reliable defense against synthetic content threats.

The U.S. government has demonstrated its commitment to advancing DCAT to reduce synthetic content risks through recent executive actions and agency initiatives. The 2023 Executive Order on AI (EO 14110) mandated the development of content authentication and tracking tools. Charged by EO 14110 to address these challenges, NIST has taken several steps toward advancing DCAT capabilities. For example, NIST's recently established AI Safety Institute (AISI) takes the lead in championing this work in partnership with NIST's AI Innovation Lab (NAIIL). Key developments include: the dedication of one of the U.S. Artificial Intelligence Safety Institute Consortium's (AISIC) working groups to identifying and advancing DCAT R&D; the publication of NIST AI 100-4, which "examines the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques" regarding current and prospective DCAT capabilities; and the $11 million dedicated to international research on addressing dangers arising from synthetic content, announced at the first convening of the International Network of AI Safety Institutes. Additionally, NIST's Information Technology Laboratory (ITL) has launched the GenAI Challenge Program to evaluate and advance DCAT capabilities. Meanwhile, two pending bills in Congress, the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312) and the Future of Artificial Intelligence Innovation Act (S. 4178), include provisions for DCAT R&D.

Although these critical first steps have been taken, an ambitious and sustained federal effort is necessary to facilitate the advancement of technical countermeasures such as DCAT. This is necessary to more successfully combat the risks posed by synthetic content—both in the immediate and long-term future. To gain and maintain a competitive edge in the ongoing race between deception and detection, it is vital to establish a robust national research ecosystem that fosters agile, comprehensive, and sustained DCAT R&D.

Plan of Action

NIST should undertake three initiatives: 1) establishing dedicated university-based DCAT research centers, 2) curating and maintaining a shared national database of synthetic content for training and evaluation, and 3) running and overseeing regular federal prize competitions to drive innovation on critical DCAT challenges. These programs, which should be spearheaded by AISI and NAIIL, are critical for enabling the creation of a robust and resilient U.S. DCAT research ecosystem. In addition, the 118th Congress should 4) allocate dedicated funding to support these enterprises.

These recommendations are designed not only to accelerate DCAT capabilities in the immediate future, but also to build a strong foundation for long-term DCAT R&D efforts. As generative AI capabilities expand, authentication technologies must keep pace, meaning that developing and deploying effective technical countermeasures will require ongoing, iterative work. Success demands extensive collaboration across technology and research sectors to expand problem coverage, maximize resources, avoid duplication, and accelerate the development of effective solutions. This coordinated approach is essential given the diverse range of technologies and methodologies that must be considered when addressing synthetic content risks.

Recommendation 1. Establish DCAT Research Institutes

NIST should establish a network of dedicated university-based research centers to scale up and foster long-term, fundamental R&D on DCAT. While headquartered at leading universities, these centers would collaborate with academic, civil society, industry, and government partners, serving as nationwide focal points for DCAT research and bringing together a network of cross-sector expertise. Complementing NIST's existing initiatives like the GenAI Challenge, the centers' research priorities would be guided by AISI and NAIIL, with expert input from the AISIC, the International Network of AI Safety Institutes, and other key stakeholders.

A distributed research network offers several strategic advantages. It leverages elite expertise from industry and academia, and having permanent institutions dedicated to DCAT R&D enables the sustained, iterative development of authentication technologies to better keep pace with advancing generative AI capabilities. Meanwhile, central coordination by AISI and NAIIL would also ensure comprehensive coverage of research priorities while minimizing redundant efforts.  Such a structure provides the foundation for a robust, long-term research ecosystem essential for developing effective countermeasures against synthetic content threats.

There are multiple pathways via which dedicated DCAT research centers could be stood up.  One approach is direct NIST funding and oversight, following the model of Carnegie Mellon University’s AI Cooperative Research Center. Alternatively, centers could be established through the National AI Research Institutes Program, similar to the University of Maryland’s Institute for Trustworthy AI in Law & Society, leveraging NSF’s existing partnership with NIST.

The DCAT research agenda could be structured in two ways. Informed by NIST AI 100-4, a vertical approach would assign specific technologies to each center (e.g., digital watermarking, metadata recording, provenance data tracking, or synthetic content detection). Each center would then focus on all aspects of a specific technical capability, including improving the robustness and security of existing countermeasures; developing new techniques to address current limitations; conducting real-world testing and evaluation, especially in cross-platform environments; and studying interactions with other technical safeguards and with non-technical countermeasures like regulations or educational initiatives. Conversely, a horizontal approach might divide research agendas across areas such as the advancement of multiple established DCAT techniques, tools, and methods; the innovation of novel techniques, tools, and methods; the testing and evaluation of combined technical approaches in real-world settings; and the interaction of multiple technical countermeasures with human factors, such as label perception, and with non-technical countermeasures. While either framework provides a strong foundation for advancing DCAT capabilities, given institutional expertise and practical considerations, a hybrid model combining both approaches is likely the most feasible option.

Recommendation 2. Build and Maintain a National Synthetic Content Database

NIST should also build and maintain a national database of synthetic content to advance and accelerate DCAT R&D, similar to existing federal initiatives such as NIST's National Software Reference Library and NSF's AI Research Resource pilot. Current DCAT R&D is severely constrained by limited access to diverse, verified, and up-to-date training and testing data. Many researchers, especially in academia, where a significant portion of DCAT research takes place, lack the resources to build and maintain their own datasets. The result is less accurate and more narrowly applicable authentication tools that struggle to keep pace with rapidly advancing AI capabilities.

A centralized database of synthetic and authentic content would accelerate DCAT R&D in several critical ways. First, it would significantly reduce the burden on research teams of generating or collecting synthetic data for training and evaluation, encouraging less well-resourced groups to conduct research and allowing researchers to focus more on other aspects of R&D. This includes providing much-needed resources for the NIST-facilitated university-based research centers and prize competitions proposed here. Moreover, a shared database would provide more comprehensive coverage of the increasingly varied synthetic content being created today, permitting the development of more effective and robust authentication capabilities. The database would also be useful for establishing standardized evaluation metrics for DCAT capabilities, one of NIST's critical aims for addressing the risks posed by AI technology.

A national database would need to be comprehensive, encompassing samples of both early and state-of-the-art synthetic content. It should include controlled, laboratory-generated datasets alongside verified "in the wild" (real-world) synthetic content, covering both benign and potentially harmful examples. Diversity is also critical to the database's utility: synthetic content should span multiple individual and combined modalities (text, image, audio, video) and feature varied human populations as well as a variety of non-human subject matter. To maintain the database's relevance as generative AI capabilities continue to evolve, it will also need to routinely incorporate novel synthetic content that reflects the latest improvements in generation.
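As one way to picture what that coverage could look like at the level of a single entry, the sketch below outlines a hypothetical record schema for the database. The field names and categories are illustrative assumptions, not a NIST specification.

```python
# Hypothetical record schema for one entry in a national synthetic content
# database. Field names and values are illustrative, not an actual NIST design.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SyntheticContentRecord:
    content_id: str                           # stable identifier for the sample
    modality: str                             # "image", "audio", "video", "text", or "multimodal"
    is_synthetic: bool                        # authentic samples are needed as negatives
    generator: Optional[str] = None           # model/tool family, if known
    manipulation_type: Optional[str] = None   # "fully generated", "face swap", "voice clone", ...
    provenance: str = "lab"                   # "lab" (controlled) vs. "in_the_wild" (verified field sample)
    harm_category: Optional[str] = None       # e.g., "benign" vs. restricted-handling categories
    collected_year: int = 2024                # supports tracking generator-capability drift over time
    subject_tags: list = field(default_factory=list)  # human vs. non-human subjects, demographics, etc.

# Example entry: a verified in-the-wild synthetic image used for detector evaluation.
example = SyntheticContentRecord(
    content_id="sample-000123",
    modality="image",
    is_synthetic=True,
    generator="unknown",
    manipulation_type="fully generated",
    provenance="in_the_wild",
    harm_category="benign",
    collected_year=2023,
    subject_tags=["human", "public figure"],
)
```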

Initially, the database could be built on NIST's GenAI Challenge project work, which includes "evolving benchmark dataset creation," but as it scales up, it should operate as a standalone program with dedicated resources. The database could be grown and maintained through dataset contributions from AISIC members, industry partners, and academic institutions that have either generated synthetic content datasets themselves or, as generative AI technology providers, are able to create the large-scale and diverse datasets required. NIST would also direct targeted dataset acquisition to address specific gaps and evaluation needs.

Recommendation 3. Run Public Prize Competitions on DCAT Challenges

Third, NIST should set up and run a coordinated prize competition program, while also serving as the federal oversight lead for prize competitions run by other agencies. Building on existing models such as DARPA SemaFor's AI FORCE and the FTC's Voice Cloning Challenge, these competitions would address expert-identified priorities informed by the AISIC, the International Network of AI Safety Institutes, and the proposed national DCAT research centers. Competitions are a proven approach to spurring innovation on complex technical challenges, enabling the rapid identification of solutions through diverse engagement. Monetary prizes are especially effective at ensuring engagement: the 2019 Kaggle Deepfake Detection competition, which offered a $1 million prize, fielded twice as many participants as the 2024 competition, which offered no cash prize.

By providing structured challenges and meaningful incentives, public competitions can accelerate the development of critical DCAT capabilities while building a more robust and diverse research community. Such competitions encourage novel technical approaches and rapid testing of new methods, facilitate the inclusion of new or non-traditional participants, and foster collaboration. Their rapid-cycle, narrowly scoped format would also complement the longer-term and broader research conducted by the national DCAT research centers. Centralized federal oversight would also prevent the implementation gaps that have occurred in past approved federal prize competitions. For instance, the 2020 National Defense Authorization Act (NDAA) authorized a $5 million machine detection/deepfakes prize competition (Sec. 5724), and the 2024 NDAA authorized a "Generative AI Detection and Watermark Competition" (Sec. 1543). However, neither prize competition has been carried out, and the Watermark Competition has now been delayed to 2025. Centralized oversight would also ensure that prize competitions are run consistently to address specific technical challenges raised by expert stakeholders, encouraging more rapid development of relevant technical countermeasures.

Examples of possible prize competitions include: machine detection and digital forensic methods to detect partially or fully AI-generated content across single or multimodal content; assessing the robustness, interoperability, and security of watermarking and other labeling methods across modalities; and testing innovations in tamper-evident or tamper-proof content provenance tools and other data origin techniques. Regular assessment and refinement of competition categories will ensure continued relevance as synthetic content capabilities evolve.
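Any detection-track competition will also need a transparent scoring rule applied to held-out, labeled data, for which the proposed national database could serve as the evaluation set. The sketch below shows one plausible setup using standard metrics; the metric choice and code are illustrative assumptions, not a description of how any past or planned competition is scored.

```python
# Minimal sketch of scoring a synthetic-content detection competition.
# Submissions provide a probability that each held-out sample is synthetic;
# the organizer scores them with standard metrics. Metric choice is illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss

def score_submission(y_true: np.ndarray, y_pred_prob: np.ndarray) -> dict:
    """y_true: 1 = synthetic, 0 = authentic; y_pred_prob: predicted P(synthetic)."""
    return {
        "auc": roc_auc_score(y_true, y_pred_prob),   # ranking quality
        "log_loss": log_loss(y_true, y_pred_prob),   # penalizes overconfident errors
    }

# Toy example: a handful of held-out labels and one submission's predictions.
labels = np.array([1, 0, 1, 1, 0, 0])
submission = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1])
print(score_submission(labels, submission))
```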

Recommendation 4. Congressional Funding of DCAT Research and Activities

Finally, the 118th Congress should allocate funding for these three NIST initiatives in order to more effectively establish the foundations of a strong DCAT national research infrastructure. Despite widespread acknowledgement of the vital role of technical countermeasures in addressing synthetic content risks, the DCAT research field remains severely underfunded. Although recent initiatives, such as the $11 million allocated to the International Network of AI Safety Institutes, are a welcome step in the right direction, substantially more investment is needed. Thus far, the overall financing of DCAT R&D has been only a drop in the bucket when compared to the many billions of dollars being dedicated by industry alone to improve generative AI technology.

This stark disparity between investment in generative AI versus DCAT capabilities presents an immediate opportunity for Congressional action. To address the widening capability gap, and to support pending legislation which will be reliant on technical countermeasures such as DCAT, the 118th Congress should establish multi-year appropriations with matching fund requirements. This will encourage private sector investment and permit flexible funding mechanisms to address emerging challenges. This funding should be accompanied by regular reporting requirements to track progress and impact.

One specific action that Congress could take to jumpstart DCAT R&D investment would be to reauthorize and appropriate the budget that was earmarked for the unexecuted machine detection competition it approved in 2020. Despite the 2020 NDAA authorizing $5 million for it, no SAC-D funding was allocated, and the competition never took place. Another action would be for Congress to explicitly allocate prize money for the watermarking competition authorized by the 2024 NDAA, which currently does not have any monetary prize attached to it, to encourage higher levels of participation in the competition when it takes place this year.

Conclusion

The risks posed by synthetic content present an undeniable danger to U.S. national interests and security. Advancing DCAT capabilities is vital for protecting U.S. citizens against both the direct and more diffuse harms resulting from the proliferating misuse of synthetic content. A robust national DCAT research ecosystem is required to accomplish this. Critically, this is not a challenge that can be addressed through one-time solutions or limited investment—it will require continuous work and dedicated resources to ensure technical countermeasures keep pace alongside increasingly sophisticated synthetic content threats. By implementing these recommendations with sustained federal support and investment, the U.S. will be able to more successfully address current and anticipated synthetic content risks, further reinforcing its role as a global leader in responsible AI use.

