Protecting Infant Nutrition Security:
Shifting the Paradigm on Breastfeeding to Build a Healthier Future for all Americans

The health and wellbeing of American babies have been put at risk in recent years, and we can do better. Recent events have revealed deep vulnerabilities in our nation's infant nutritional security: pandemic-induced disruptions in maternity care practices that support the establishment of breastfeeding, the infant formula recall and resulting shortage, and a spate of weather-related natural disasters have all exposed infrastructure gaps and a lack of resilience to safety and supply chain challenges. All of these events put babies in danger during times of crisis.

Breastfeeding is foundational to lifelong health and wellness, but systemic barriers prevent many families from meeting their breastfeeding goals. The policies and infrastructure surrounding postpartum families often limit their ability to succeed in breastfeeding. Despite breastfeeding's important benefits, new data from the CDC show that while 84.1% of infants start out breastfeeding, these numbers fall dramatically in the weeks after birth, with only 57.5% of infants breastfeeding exclusively at one month of age. Disparities persist across geographic location and other sociodemographic factors, including race/ethnicity, maternal age, and education. Breastfeeding rates in North America are among the lowest in the world. Longstanding evidence shows that it is not a lack of desire but rather a lack of support, access, and resources that creates these barriers.

This administration has an opportunity to take a systems approach to increasing support for breastfeeding and making parenting easier for new mothers. Key policy changes to address systemic barriers include providing guidance to states on expanding Medicaid coverage of donor milk, building breastfeeding support and protection into the existing emergency response framework at the Federal Emergency Management Agency, and expressing support for establishing a national paid leave program. 

Policymakers on both sides of the aisle agree that no baby should ever go hungry, as evidenced by the bipartisan passage of recent breastfeeding legislation (detailed below) and widely supported regulations. However, significant barriers remain. This administration has the power to address long-standing inequities and set the stage for the next generation of parents and infants to thrive. Ensuring that every family has the support they need to make the best decisions for their child’s health and wellness benefits the individual, the family, the community, and the economy. 

Challenge and Opportunity

Breastfeeding plays an essential role in establishing good nutrition and healthy weight, reducing the risk of chronic disease and infant mortality, and improving maternal and infant health outcomes. Breastfed children have a decreased risk of obesity, type 1 and 2 diabetes, asthma, and childhood leukemia. Women who breastfeed reduce their risk of specific chronic diseases, including type 2 diabetes, cardiovascular disease, and breast and ovarian cancers. On a relational level, the hormones produced while breastfeeding, like oxytocin, enhance the maternal-infant bond and emotional well-being. The American Academy of Pediatrics recommends that infants be exclusively breastfed for approximately six months, with continued breastfeeding alongside complementary foods for two years or as long as mutually desired by mother and child.

Despite the well-documented health benefits of breastfeeding, deep inequities in healthcare, community, and employment settings impede success. Systemic barriers disproportionately impact Black, Indigenous, and other communities of color, as well as families in rural and economically distressed areas. These populations already bear the weight of numerous health inequities, including limited access to nutritious foods and higher rates of chronic disease—issues that breastfeeding could help mitigate. 

Breastfeeding Saves Dollars and Makes Sense 

Low breastfeeding rates in the United States cost our nation billions of dollars each year through higher health system costs, lost productivity, and higher household expenditures. Globally, not breastfeeding is associated with economic losses of about $302 billion annually, or 0.49% of world gross national income. At the national level, improving breastfeeding practices through programs and policies is one of the best investments a country can make, as every dollar invested is estimated to yield a $35 economic return.

In the United States, chronic disease management results in trillions of dollars in annual healthcare costs, which increased breastfeeding rates could help reduce. In the workplace setting, employers see significant cost savings when their workers are able to maintain breastfeeding after returning to work. Increased breastfeeding rates are also associated with reduced environmental impact and associated expenses. Savings can be seen at home as well, as following optimal breastfeeding practices reduces household expenditures. Investments in infant nutrition last a lifetime, paying long-term dividends critical for economic and human development. Economists have completed cost-benefit analyses, finding that investments in nutrition are one of the best value-for-money development actions, laying the groundwork for the success of investments in other sectors.

Ongoing trends in breastfeeding outcomes indicate that there are entrenched policy-level challenges and barriers that need to be addressed to ensure that all infants have an opportunity to benefit from access to human milk. Currently, for too many families, the odds are stacked against them. It’s not a question of individual choice but one of systemic injustice. Families are often forced into feeding decisions that do not reflect their true desires due to a lack of accessible resources, support, and infrastructure.

While the current landscape is rife with challenges, the solutions are known and the potential benefits are tremendous. This administration has the opportunity to realize these benefits by mounting a smart, strategic response to the urgent situation our nation faces, at a moment when the political will to act is high.

The History of Breastfeeding Policy

In the late 1960s and early 1970s, fewer than 30 percent of infants were breastfed. The concerted efforts of individuals and organizations and the emergence of the field of lactation have counteracted or removed many barriers, and policymakers have sent a clear and consistent message that breastfeeding is bipartisan. This is evident in the range of recent lactation-friendly legislation passed with bipartisan support.

Administrative efforts ranging from the Business Case for Breastfeeding to The Surgeon General’s Call to Action to Support Breastfeeding and the armed services updates on uniform requirements for lactating soldiers demonstrate a clear commitment to breastfeeding support across the decades. 

These policy changes have made a difference. But additional attention and investment, with a particular focus on the birth and early postpartum period, as well as during and after emergencies, is needed to secure the potential health and economic benefits of comprehensive societal support for breastfeeding. This administration can take considerable steps toward improving  U.S. health and wellness and protecting infant nutrition security.  

Plan of Action

A range of federal agencies coordinate programs, services, and initiatives impacting the breastfeeding journey for new parents. Expanding and building on existing efforts through the following steps can help address some of today’s most pressing barriers to breastfeeding. 

Each of the recommended actions can be implemented independently and would create meaningful, incremental change for families. However, a comprehensive approach that implements all these recommendations would create the marked shift in the landscape needed to improve breastfeeding initiation and duration rates and establish this administration as a champion for breastfeeding families. 

| Agency | Agency Role | Recommended Action | Anticipated Outcome |
|---|---|---|---|
| Federal Emergency Management Agency (FEMA) | FEMA coordinates within the federal government to make sure America is equipped to prepare for and respond to disasters. | Require FEMA to participate in the Federal Interagency Breastfeeding Workgroup, a collection of federal agencies that come together to connect and collaborate on breastfeeding issues. | Increased connection and coordination across agencies. |
| Federal Emergency Management Agency (FEMA) | FEMA coordinates within the federal government to make sure America is equipped to prepare for and respond to disasters. | Update the FEMA Public Assistance Program and Policy Guide to include breastfeeding and lactation as a functional need so that emergency response efforts can include services from lactation support providers. | Integration of breastfeeding support into emergency response and recovery efforts. |
| Office of Management & Budget (OMB) | The OMB oversees the implementation of the President's vision across the Executive Branch, including through budget development and execution. | Include funding for the establishment of a national paid family and medical leave program as a priority in the President's Budget. | Setting the stage for Congressional action. |
| Domestic Policy Council (DPC) | The DPC drives the development and implementation of the President's domestic policy agenda in the White House and across the Federal government. | Support the efforts of the bipartisan, bicameral congressional Paid Leave Working Group. | Setting the stage for Congressional action. |
This table summarizes the recommendations, grouped by the federal agency that would be responsible for implementing the change to increase breastfeeding rates in the U.S. for improved health and economic outcomes.

Recommendation 1. Increase access to pasteurized donor human milk by directing the Centers for Medicare & Medicaid Services (CMS) to provide guidance to states on expanding Medicaid coverage. 

Pasteurized donor human milk is lifesaving for vulnerable infants, particularly those born preterm or with serious health complications. Across the United States, milk banks gently pasteurize donated human milk and distribute it to fragile infants in need. This lifesaving liquid gold reduces mortality rates, lowers healthcare costs, and shortens hospital stays. Specifically, the use of donor milk is associated with increased survival rates and lowered rates of infections, sepsis, serious lung disease, and gastrointestinal complications. In 2022, there were 380,548 preterm births in the United States, representing 10.4% of live births, so the potential for health and cost savings is substantial. Data from one study shows that the cost of a neonatal intensive care unit stay for infants at very low birth weight is nearly $220,000 for 56 days. The use of donor human milk can reduce hospital length of stay by 18-50 days by preventing the development of necrotizing enterocolitis in preterm infants. The benefits of human milk extend beyond the inpatient stay, with infants receiving all human milk diets in the NICU experiencing fewer hospital readmissions and better overall long-term outcomes.
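The arithmetic behind these figures can be made explicit. The sketch below is a back-of-envelope illustration only, built from the cited study numbers; it assumes NICU costs scale linearly per day, which real hospital billing does not strictly do.

```python
# Back-of-envelope estimate of per-infant savings from donor milk in the NICU.
# Dollar figures and day counts come from the studies cited above;
# the linear per-day cost is a simplifying assumption.
nicu_stay_cost = 220_000   # USD, very-low-birth-weight NICU stay
nicu_stay_days = 56

cost_per_day = nicu_stay_cost / nicu_stay_days   # roughly $3,929 per day

for days_avoided in (18, 50):   # reported range of length-of-stay reduction
    savings = days_avoided * cost_per_day
    print(f"{days_avoided} fewer days -> ~${savings:,.0f} saved per infant")
# 18 fewer days -> ~$70,714 saved per infant
# 50 fewer days -> ~$196,429 saved per infant
```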

Although donor milk has important health implications for vulnerable infants in all communities and can result in significant economic benefit, donor milk is not equitably accessible. While milk banks serve all states, not all communities have easy access to donated human milk. Moreover, many insurers are not required to cover the cost, creating significant barriers to access and contributing to racial and geographic disparities.

To ensure that more babies in need have access to lifesaving donor milk, the administration should work with CMS to expand donor milk coverage under state Medicaid programs. Medicaid covers approximately 40% of all US births and 50% of all early preterm births. Medicaid programs in at least 17 states and the District of Columbia already include coverage of donor milk. The administration can expand access to this precious milk, help reduce health care costs, and address racial and geographic disparities by releasing guidance for the remaining states regarding coverage options in Medicaid.

Recommendation 2. Include infant feeding in Federal Emergency Management Agency (FEMA) emergency planning and response.

Infants and children are among the most vulnerable in an emergency, so it is critical that their unique needs are considered and included in emergency planning and response guidance. Breastfeeding provides clean, optimal nutrition; requires no fuel, water, or electricity; and remains available even in the direst circumstances. Human milk contains antibodies that fight infection, including the diarrhea and respiratory infections common among infants in emergency situations. Yet efforts to protect infant and young child feeding in emergencies are sorely lacking, particularly in the immediate aftermath of disasters and emergencies.

Ensuring access to lactation support and supplies as part of emergency response efforts is essential for protecting the health and safety of infants. Active support and coordination between federal, state, and local governments, the commercial milk formula industry, lactation support providers, and all other relevant actors involved in response to emergencies is needed to ensure safe infant and young child feeding practices and equitable access to support. There are two simple, cost-effective steps that FEMA can take to protect breastfeeding, preserve resources, and thus save additional lives during emergencies.

Recommendation 3. Expand access to paid family & medical leave by including paid leave as a priority in the President’s Budget and supporting the efforts of the bipartisan, bicameral congressional Paid Leave Working Group. 

Employment policies in the United States make breastfeeding harder than it needs to be. The United States is one of the only countries in the world without a national paid family and medical leave program. Many parents return to work quickly after birth, before a strong breastfeeding relationship is established, because they cannot afford to take unpaid leave or because they do not qualify for paid leave programs with their employer or through state or local programs. Nearly 1 in 4 employed mothers return to work within two weeks of childbirth.

Paid family leave programs make it possible for employees to take time for childbirth recovery, bond with their baby, establish feeding routines, and adjust to life with a new child without threatening their family’s economic well-being. This precious time provides the foundation for success, contributing to improved rates of breastfeeding initiation and duration, yet only a small portion of workers are able to access it. There are significant disparities in access to paid leave among racial and ethnic groups, with Black and Hispanic employees less likely than their white non-Hispanic counterparts to have access to paid parental leave. There are similar disparities in breastfeeding outcomes among racial groups.  

Momentum is building to improve the paid family and medical leave landscape in the United States. Thirteen states and the District of Columbia have established mandatory state paid family leave systems. Supporting paid leave has become an important component of candidate campaign plans, and bipartisan support for establishing a national program remains strong among voters. The formation of Bipartisan Paid Family Leave Working Groups in both the House and Senate demonstrates commitment from policymakers on both sides of the aisle.

By directing the Office of Management and Budget to include funding for paid leave in the President’s Budget recommendation and working collaboratively with the Congressional Paid Leave Working Groups, the administration can advance federal efforts to increase access to paid family and medical leave, improving public health and helping American businesses.  

Conclusion

These three strategies offer the White House the opportunity to make an immediate and lasting impact, beginning on day one of the presidential term, by protecting infant nutrition security and addressing disparities in breastfeeding rates. A systems approach that employs multiple strategies for integrating breastfeeding into existing programs and efforts would help shift the paradigm for new families by addressing long-standing barriers that disproportionately affect marginalized communities, particularly Black, Indigenous, and other families of color. A clear and concerted effort from the administration, as outlined, offers the opportunity to benefit all families and future generations of American babies.

The administration’s focused and strategic efforts will create a healthier, more supportive world for babies, families, and breastfeeding parents, improve maternal and child health outcomes, and strengthen the economy. This administration has the chance to positively shape the future for generations of American families, ensuring that every baby gets the best possible start in life and that every parent feels empowered and supported.

Now is the time to build on recent momentum and create a world where families have true autonomy in infant feeding decisions. A world where paid family leave allows parents the time to heal, bond, and establish feeding routines; communities provide equitable access to donor milk; and federal, state, and local agencies have formal plans to protect infant feeding during emergencies, ensuring no baby is left vulnerable. Every family deserves to feel empowered and supported in making the best choices for their children, with equitable access to resources and support systems.

This policy memo was written with support from Suzan Ajlouni, Public Health Writing Specialist at the U.S. Breastfeeding Committee. The policy recommendations have been identified through the collective learning, idea sharing, and expertise of USBC members and partners.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Isn’t the choice to breastfeed a personal one?

Rather than being a matter of personal choice, infant feeding practice is informed by circumstance and level (or lack) of support. When roadblocks exist at every turn, families are backed into a decision because the alternatives are not available, attainable, or viable. United States policies and infrastructure were not built with the realities of breastfeeding in mind. Change is needed to ensure that all who choose to breastfeed are able to meet their personal breastfeeding goals, and society at large reaps the beneficial social and economic outcomes.

How much would it cost to establish a national paid family and medical leave program?

The Fiscal Year 2024 President’s Budget proposed to establish a national, comprehensive paid family and medical leave program, providing up to 12 weeks of leave to allow eligible workers to take time off to care for and bond with a new child; care for a seriously ill loved one; heal from their own serious illness; address circumstances arising from a loved one’s military deployment; or find safety from domestic violence, sexual assault, or stalking. The budget recommendation included $325 billion for this program. It’s important to look at this with the return on investment in mind, including improved labor force attachment and increased earnings for women; better outcomes and reduced health care costs for ill, injured, or disabled loved ones; savings to other tax-funded programs, including Medicaid, SNAP, and other forms of public assistance; and national economic growth, jobs growth, and increased economic activity.

How will we know if these efforts are having an impact?

There are a variety of national monitoring and surveillance efforts tracking breastfeeding initiation, duration, and exclusivity rates that will show how well these actions are working for the American people, including the National Immunization Survey (NIS), the Pregnancy Risk Assessment Monitoring System (PRAMS), the Infant Feeding Practices Study, and the National Vital Statistics System. The CDC Breastfeeding Report Card, published every two years, brings these key data points together and places them in context. Significant improvements have already been seen across recent decades, with breastfeeding initiation rates increasing from 73.1 percent in 2004 to 84.1 percent in 2021.

Is there enough buy-in from organizations and individuals to support these systemic changes?

The U.S. Breastfeeding Committee is a coalition bringing together approximately 140 organizations from coast to coast representing the grassroots to the treetops – including federal agencies, national, state, tribal, and territorial organizations, and for-profit businesses – that support the USBC mission to create a landscape of breastfeeding support across the United States. Nationwide, a network of hundreds of thousands of grassroots advocates from across the political spectrum support efforts like these. Together, we are committed to ensuring that all families in the U.S. have the support, resources, and accommodations to achieve their breastfeeding goals in the communities where they live, learn, work, and play. The U.S. Breastfeeding Committee and our network stand ready to work with the administration to advance this plan of action.

Supporting Device Reprocessing to Reduce Waste in Health Care

The U.S. healthcare system produces 5 million tons of waste annually, or approximately 29 pounds per hospital bed daily. Roughly 80 percent of the healthcare industry’s carbon footprint comes from the production, transportation, use, and disposal of single-use devices (SUDs), which are pervasive in the hospital. Notably, 95% of the environmental impact of single-use medical products results from the production of those products. 

While the Food and Drug Administration (FDA) oversees new devices being brought to market, it is up to the manufacturer to determine whether a device will be marketed as single-use or multiple-use. Manufacturers have a financial incentive to market devices as "single-use" or "disposable," since marketing a device as reusable requires expensive cleaning validations.

To decrease healthcare waste and environmental impact, the FDA should take the lead in identifying devices that can be safely reprocessed and in incentivizing manufacturers to validate the reprocessing of their devices. This will require the FDA to strengthen its management of single-use and reusable device labeling. Further, the Veterans Health Administration, the nation's largest healthcare system, should reverse its prohibition on reprocessed SUDs and become a national leader in the reprocessing of medical devices.

Challenge and Opportunity

While healthcare institutions are embracing decarbonization and waste reduction plans, they cannot do this effectively without addressing the enormous impact of single-use devices (SUDs). The majority of research literature concludes that SUDs are associated with higher levels of environmental impact than reusable products. 

FDA regulations governing SUD reprocessing make it extremely challenging for hospitals to reprocess low-risk SUDs, which is inconsistent with the FDA's "least burdensome provisions." The FDA requires hospitals or commercial SUD reprocessing facilities to act as the device's manufacturer, meaning they must comply with the FDA's requirements for medical device manufacturers and take on the associated liabilities. Hospitals are not keen to take on the liability of a manufacturer, yet commercial reprocessors offer few of the lower-risk devices that could be reprocessed.

As a result, hospitals and clinics are no longer willing to sterilize SUDs through methods like autoclaving, even when documentation shows that sterilization is safe and precedent shows that similar devices have been safely sterilized and reused for many years without adverse events. Many devices, including pessaries for pelvic organ prolapse and titanium phacoemulsification tips for cataract surgery, can be safely reprocessed for clinical use. Given their risk profile, these products need not be subject to the FDA's full medical device manufacturer requirements.

Further, manufacturers are incentivized to bring SUDs to market more quickly than devices that may be reprocessed. Manufacturers often market devices as single-use solely because the original manufacturer chose not to conduct expensive cleaning and sterilization validations, not because such validations cannot be done. FDA regulations governing SUDs should be better tailored to each device so that clinicians on the frontlines can provide appropriate and environmentally sustainable health care.

Reprocessed devices cost 25 to 40% less than new ones, so their use can reduce hospital costs significantly: about $465 million in 2023. Per the Association of Medical Device Reprocessors (AMDR), if the reprocessing practices of the top 10% of performing hospitals were matched across all hospitals that use reprocessed devices, U.S. hospitals could have saved an additional $2.28 billion that same year. Enabling and encouraging the use of reprocessed SUDs can thus yield significant cost reductions without compromising patient care.
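To see how the per-device discount translates into a budget line, consider the illustrative sketch below. The 25-40% discount range is the one cited above; the device price and annual volume are hypothetical placeholders, not figures from the AMDR data.

```python
# Illustrative savings from substituting reprocessed SUDs for new ones.
# new_price and annual_volume are hypothetical; the 25-40% discount is cited above.
new_price = 100.0        # USD per new single-use device (hypothetical)
annual_volume = 10_000   # devices used per year at one hospital (hypothetical)

for discount in (0.25, 0.40):   # reprocessed devices cost 25-40% less
    savings = new_price * discount * annual_volume
    print(f"{discount:.0%} discount -> ${savings:,.0f} saved per year")
# 25% discount -> $250,000 saved per year
# 40% discount -> $400,000 saved per year
```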

Plan of Action

Because the FDA has regulated SUD reprocessing since 2000, it is imperative that the agency take the lead on creating a clear, streamlined process for clearing or approving reusable devices in order to ensure the safety and efficacy of reprocessed devices. These recommendations would permit healthcare systems to reprocess and reuse medical devices without fear of noncompliance findings from the Joint Commission or the Centers for Medicare and Medicaid Services, which rely on FDA regulations. Further, the nation's largest healthcare system, the Veterans Health Administration, should become a leader in medical device reprocessing and showcase the standard of practice for sustainable healthcare.

  1. The FDA should publish a list of SUDs that have a proven track record of safe reprocessing to empower hospitals to reduce waste, costs, and environmental impact without compromising patient safety. The FDA should change the labels of single-use devices to multi-use when reuse by hospitals is possible and validated via clinical studies, as the "single-use" label has promoted the mistaken belief that SUDs cannot be safely reprocessed. Per the FDA, the single-use label simply means a given device has not undergone the original equipment manufacturer (OEM) validation tests necessary to label a device "reusable." The label does not mean the device cannot be cleared for reprocessing.
  2. To help governments and healthcare systems prioritize the environmental and cost benefits of reusable devices over SUDs, the FDA should incentivize applications for reusable or commercially reprocessable devices, such as by expediting review. The FDA can also incentivize use of reprocessed devices through payments to hospitals for meeting reprocessing benchmarks.
  3. The FDA should not subject low-risk devices that can be safely reprocessed for clinical use to full device manufacturer requirements. The FDA should further support healthcare procurement staff by creating an accessible database of devices cleared for reprocessing and alerting healthcare systems to regulated reprocessing options. In doing so, the FDA can reduce the burden on hospitals of reprocessing low-risk SUDs and encourage healthcare systems to sterilize SUDs through methods like autoclaving.
  4. As the only major health system in the U.S. to prohibit the use of reprocessed SUDs, the Veterans Health Administration should reverse its prohibition as soon as possible. The prohibition likely persists because of outdated risk determinations, and it comes at major cost to the environment and to Americans. Reversing it would be consistent with the FDA's conclusions that reprocessed SUDs are safe and effective.
  5. The FDA should recommend that manufacturers publicly report the materials used in the composition of devices so that end users can more easily compare products and determine their environmental impact. As explained by AMDR, some OEM practices discourage or fully prevent the use of reprocessed devices; the FDA should vigorously track and impede these practices. Requiring public reporting of device composition will not only help healthcare buyers make more informed decisions, it will also promote a more circular economy that supports sustainability efforts.

Conclusion

To decrease costs, waste, and environmental impact, the healthcare sector urgently needs to increase its use of reusable devices. One of the largest barriers is a set of FDA requirements that impose needlessly stringent obligations on hospitals, hindering the adoption of less wasteful, less costly reprocessed devices.

The FDA's critical role in medical device labeling, and in clearing or approving more devices as reusable, has downstream implications and influences many other regulatory and oversight bodies, including the Centers for Medicare & Medicaid Services (CMS), the Association for the Advancement of Medical Instrumentation (AAMI), the Joint Commission, hospitals, healthcare offices, and healthcare providers. It is essential for the FDA to take the lead in revising the device reprocessing pipeline.


Taking on the World’s Factory: A Path to Contain China on Legacy Chips

Last year the Federation of American Scientists (FAS), Jordan Schneider (of ChinaTalk), Chris Miller (author of Chip War), and Noah Smith (of Noahpinion) hosted a call for ideas to address the U.S. chip shortage and Chinese competition. A handful of ideas were selected based on their feasibility and bipartisan nature. This memo is one of them.

Challenge and Opportunity

The intelligent and autonomous functioning of physical machinery is one of the key societal developments of the 21st century, changing and assisting in the way we live our lives. In this context, semiconductors, once a niche good, now form the physical backbone of automated and intelligent systems. The supply chain disruptions of 2020 laid bare the vulnerability of the global economy in the face of a chip shortage, which created scarcity and inflation in everything from smartphones to automobiles. In an even more extreme case, a lack of chips could impact critical infrastructure, such as squeezing the supply of medical devices necessary for many modern procedures. 

The deployment of partially or fully automated warfighting further means that Artificial Intelligence (AI) systems now have direct and inescapable impacts on national security. With great power conflict looming on the horizon, threats toward and emanating from the semiconductor supply chain have become even more evident.

Against this backdrop, the crucial role of the People's Republic of China (PRC) in chip production represents a clear and present danger to global security. Although the PRC currently trails in the production of cutting-edge sub-16 nm chips used for the development of AI models, the country's market dominance in so-called "trailing edge" chips of 28 nm or above has a much wider impact due to their ubiquity in all traditional use cases outside of AI.

The most important harm is clear: by leveraging its control of a keystone international industry, the Chinese Communist Party would be able to exert greater coercive pressure on other nations. In a hypothetical invasion of Taiwan, this could mean credibly threatening the U.S. and other democratic countries with a semiconductor embargo should they intervene. Even more dramatically, given the reliance of modern military manufacturing on digital equipment, in a full-scale war between the People's Republic of China and the United States, China could produce enormous amounts of materiel while severely capping the ability of the rest of the world to meet that challenge.

A secondary but significant risk involves the ability of China to build defects or vulnerabilities into its manufactured hardware. Control over the physical components that underlie critical infrastructure, or even military hardware, could allow targeted action to paralyze U.S. society or government in the face of a crisis. While defense and critical infrastructure supply chains represent only a small fraction of all semiconductor-reliant industrial products, mitigation of this harm represents a baseline test of the ability of the United States to screen imports relevant to national security. 

Beyond Subsidies: A Blueprint for Global Manufacturing

Wresting control of the traditional semiconductor supply chain away from China is widely recognized as a prime policy goal for the United States and allied democratic countries. The U.S. has already begun with the passage of the CHIPS and Science Act in 2022, providing subsidies and tax incentives to encourage the creation of new fabrication plants (fabs) in the United States. But a strategic industry cannot survive on subsidies alone. Preferential tax treatment and government consumption may stand up some degree of semiconductor manufacturing, but they cannot rival China's scale if the PRC establishes itself as the primary chip supplier both in its domestic market and in the rest of the world.

Nascent American foundries and the multinational companies that operate them must be able to survive in a competitive international environment without relying on unpredictable future support. They must do this while fighting against PRC-backed chip manufacturers operating with both a strong domestic market and massively developed economies of scale. Given the sheer size of both the Chinese manufacturing base and its domestic market, the U.S. cannot hope to accomplish this goal alone. Only a united coalition of developed and developing countries can hope to compete. 

The good news is that the United States and its partners in Europe and the Indo-Pacific have all the necessary ingredients to realize this vision. Developing countries in South and Southeast Asia and the Pacific have a vast and expanding industrial base, augmented by Special Economic Zones and technical universities. America and its developed partners bring the capital investment and intellectual property necessary to kickstart semiconductor production abroad. 

The goal of a rest-of-world semiconductor alliance is twofold: to drive down the cost of chips made in the U.S. and allied countries while simultaneously pushing up the cost of legacy semiconductors produced in China. Only when these two costs intersect will the balance of global trade begin to tip back toward the democratic world. The first two slates of policy recommendations below focus on decreasing the cost of non-China production and increasing the cost of Chinese imports, respectively.

Finally, even in the case in which non-Chinese-influenced semiconductors become competitive with those made in the PRC, it will likely be impossible to fully exclude Chinese hardware from American and allied markets. Therefore, the final raft of policy recommendations will focus on mitigating the threat of Chinese chips in American and allied markets, including possible risks of inbuilt cyber vulnerability. 

The creation of an autonomous and secure supply chain entirely outside of China is possible. The challenge will be to achieve semiconductor independence in time to prevent China from successively weaponizing chip dominance in a future war. With clashes escalating in the South China Sea and threats across the Taiwan Strait growing ever more ominous, the clock is ticking. But America’s Indo-Pacific partners are also increasingly convinced of the urgency of cooperation. The policies presented aim to make maximum use of this critical time to build strategic independence and ensure peace. 

Plan of Action

Recommendation #1: Boosting non-China Manufacturing 

The first and most crucial step toward semiconductor sovereignty is to build and strengthen a semiconductor supply chain outside of China. No matter the ability to protect domestic markets from Chinese competition, U.S. industrial productivity relies on cheap and reliable access to chips. Without this, it is impossible to ramp up industrial production in key industries from defense contracting to consumer electronics. 

According to a report released by the CSIS Wadhwani Center for AI and Advanced Technologies, the global semiconductor value chain broadly involves three components. At the top is design, which involves creating Electronic Design Automation (EDA) software, generating IP, and producing manufacturing equipment. Next is fabrication, which entails printing and manufacturing the wafers that are the key ingredient of finished semiconductors. The final stage is assembly, testing, and packaging, which entails packaging wafers into fully functioning units that are fit for sale and verifying that they work as expected.

Of these three, the United States possesses the greatest competitive advantage in the field of design, where American intellectual property and research prowess drive many of the innovations of the modern semiconductor industry. Electronic Design Automation software, the software that allows engineers to design chips, is dominated by three major firms, of which two, Cadence and Synopsys, are American companies. The third, Mentor Graphics, is a U.S.-based subsidiary of the German industrial company Siemens. U.S. Semiconductor Manufacturing Equipment (SME) is also an important input in the design stage, with U.S.-based companies currently comprising over 40 percent of the global market share. The United States and Japan alone account for more than two-thirds. 

Meanwhile, the PRC has aggressively ramped up wafer production, aiming to make China an integral part of the global supply chain, often stealing foreign intellectual property along the way to ease its production. Recent reported efforts by the PRC to illicitly acquire U.S. SME underscore that China recognizes the strategic importance of both IP and SME as primary inputs to the chip making process. By stealing the products of American research, China further creates an unfair business environment in which law-abiding third countries are unable to keep up with Chinese capacity. 

Semiconductor Lend-Lease: A Plan for the 21st Century 

The only way forward for the international community is to level this playing field. To do so, we propose that the United States encourage and incentivize its companies to license their IP and SME to third countries looking to build wafer capacity.

Before the United States officially entered into the Second World War, the administration of President Franklin Delano Roosevelt undertook the “Lend-Lease” policy, agreeing to continue to supply allied countries including Great Britain with weapons and materials, without immediate expectation of repayment. Recently, Lend-Lease has been resurrected in the modern context of the defense of Ukraine, with the United States and European powers supplying Ukraine with armaments and munitions Ukrainian industry could not produce itself. 

The crucial point of Lend-Lease is that it takes the form of immediate donations of critical outputs, rather than simple monetary donations, which require time and investment to convert into the desired goods. World War II-era Lend-Lease was not based on a long-term economic or development strategy, but rather on the short-term assessment that without American support, the United Kingdom would fall to Nazi occupation. Given the status of semiconductors as a key strategic good, the parallels with a slow-rolling crisis in the South China Sea and the Taiwan Strait become clear. While in the long term South, East, and Southeast Asia will likely be able to level up with China in the field of semiconductors, the imminent threats of both Chinese wafer dominance and a potential invasion of Taiwan mean that this development must be accelerated. Whereas the crucial ingredients in 1941 were industrial and munitions production, today they are intellectual property, design tools, and SME. These are the tools that should be leased to U.S. partners and allies, particularly in the Indo-Pacific. By allowing dedicated foreign partners to take advantage of the gains of American research, we will allow them to level up with China and truly compete in the international market.

Although the economics of such a plan are complex, we present a sketch here of how one iteration might look. The United States federal government could negotiate with the “Big Three” EDA firms to purchase transferable licenses for their EDA software. The U.S. could then “Lend-Lease” licenses to major semiconductor producers in partner countries such as Singapore, Malaysia, Vietnam, the Philippines, or even in Latin America. The U.S. could license this software on the condition that products produced by such companies will be made available at discounted prices to the American market, and that companies should disavow further investment from or cooperation with Chinese entities. Partner companies in the Indo-Pacific could further agree to share any further research results produced using American IP, making further advancements available to American companies in the global market. 

When growing companies attain a predetermined level of market value, they can offer compensation to the United States in the form of fees or stock options, which would be collected by the United States under the terms of the treaty and awarded to the EDA firms. Similar approaches can be taken toward licensing American IP, or even physically lending SME to countries in need.

Licensing American research to designated partner countries comes with some risks and challenges. For one, it creates a greater attack surface for Chinese companies hoping to steal software and design processes created in the United States. Preventing such theft is already highly difficult, but the U.S. should extend cooperation in standardizing industrial security practices for strategic industries. 

A recent surge in fab construction in countries such as Singapore and India means that the expansion of the global semiconductor industry is already in motion. The United States can leverage its expertise and research prowess to speed up the growth of wafer production in third countries, while simultaneously countering China’s foreign influence on global supply chains. 

A Semiconductor Reserve? 

The comparison of semiconductors to oil is frequently made and has a key strategic justification: for more than a century, oil was a key input to virtually all industrial processes, from transportation to defense production. Semiconductors now play a similar role, serving as a key ingredient in virtually all manufacturing processes. 

A further ambitious policy to mitigate the harm of Chinese chips is to create a centralized reservoir of semiconductors, akin to the Strategic Petroleum Reserve. Such a reserve would be operated by the Commerce Department and maintain centralized holdings of both leading- and trailing-edge chips, obtained from free dealings on the open market. By taking advantage of bulk pricing and guaranteed, recurring contracts, the government could acquire a large number of semiconductors at reasonable prices, sourced exclusively from American and partner nation foundries. 

In the event of a chip shortage, the United States could sell chips back into the market, allowing key industries to continue to function with a trusted source of secure chips. In the absolute worst case of a geopolitical crisis involving China, a strategic stockpile would create a bulwark for the American defense industry to continue producing armaments during a period of disrupted chip supply. This buffer of time would be intended for domestic and allied production to ramp up and continue to supply security functions. 

Direct participation in the chip market would also further economic development. By making the U.S. a first-order purchaser of semiconductors at an industrial scale, the United States could create a reliable source of demand for fledgling businesses. The United States could serve as a transitory consumer, buying up excess capacity when demand is weak and ensuring that new foundries are both capable of operation and shielded from attempts by China to smother demand. The direct participation of the U.S. in the global semiconductor market would help kickstart industry in partner countries while providing a further incentive to build collaboration with the United States.
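One way to picture the reserve's market role is as a simple buffer-stock rule. The sketch below is a toy model of the mechanism described above; the price thresholds are invented for illustration, and an actual Commerce Department program would be far more complex.

```python
# Toy buffer-stock rule for a strategic semiconductor reserve.
# The 0.8 and 1.5 thresholds are hypothetical illustrations, not policy figures.
def reserve_action(market_price: float, reference_price: float) -> str:
    """Buy excess capacity when chips are cheap; release stock when scarce."""
    if market_price < 0.8 * reference_price:      # weak demand / glut
        return "buy from U.S. and partner foundries (supports new fabs)"
    if market_price > 1.5 * reference_price:      # shortage or supply shock
        return "sell reserve chips into the market (cushions key industries)"
    return "hold"

print(reserve_action(70, 100))    # glut -> buy and store
print(reserve_action(180, 100))   # shortage -> sell back into the market
```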

Recommendation #2: Fencing in Chinese Semiconductor Exports 

A second step toward semiconductor independence will be in containing Chinese exports, with the goal of reducing China’s access to global markets and starving their industrial machine. 

The most direct approach to reducing demand for Chinese semiconductors is the imposition of tariffs. The U.S. consumer market is a potent economic force. By squeezing Chinese manufacturers seeking to compete in the U.S. market, the United States can avoid feeding additional production capacity that might be weaponized in a future conflict. These tariffs can take a variety of forms and justifications, from increased probes into Chinese labor standards and human rights practices to dumping investigations pursued at the WTO. The deep challenge of effective tariffs is how to enforce them once they come into play and how to coordinate them with international partners.

Broad Tariffs, Deep Impact 

No rule works without an enforcement mechanism, and in the worst case, a strong public stance against Chinese semiconductors that is not effectively implemented may actually weaken U.S. credibility and embolden the Chinese government. Therefore, it is imperative to have unambiguous rules on trade restrictions, with a strong enforcement mechanism to match. 

These measures should not just apply to chips that are bought directly from China but rather include those that are assembled and packaged in third countries to circumvent U.S. tariffs. The maximal interpretation of the tariffs mandate would further include a calculated tariff on products that make use of Chinese semiconductors as an intermediate input. 

In the case of semiconductors made in China but assembled, packaged, or tested in other countries, we suggest an expansion of the Biden Administration's 50% tariff on Chinese semiconductors to cover all chips and all consumer or industrial products that incorporate a wafer manufactured in the People's Republic of China, assessed on the wafer's international market rate. That is, if an Indonesian car manufacturer purchases a wafer manufactured in China with a market value of $3,000 and uses it to build a $35,000 car, importing that vehicle into the United States would incur an additional tax of $1,500.
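Stated as a rule, the proposed component-level tariff is straightforward. The sketch below is a minimal illustration of that rule, assuming the duty is assessed per embedded Chinese wafer at its international market rate; the 50% rate and the car example are the ones given above.

```python
# Component-level tariff: 50% of the international market rate of each
# Chinese-made wafer embedded in an imported product, regardless of where
# the final product was assembled, packaged, or tested.
TARIFF_RATE = 0.50

def embedded_chip_tariff(chinese_wafer_values: list[float]) -> float:
    """Duty owed on a product containing Chinese-manufactured wafers."""
    return TARIFF_RATE * sum(chinese_wafer_values)

# The example from the text: a $35,000 Indonesian-built car containing one
# Chinese wafer with a $3,000 market value owes $1,500 at the border.
print(embedded_chip_tariff([3_000.0]))  # 1500.0
```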

While fears abound of the inflationary effects of additional tariffs, they are necessary for the creation of an incentive structure that properly contains Chinese manufacturing. In the absence of proportional tariffs on chips and products assembled outside China, Chinese fabs will be able to circumvent U.S. trade restrictions by boosting wafer production that then takes advantage of Assembly, Testing, and Packaging (ATP) in third countries. Further, it is imperative for the United States to not only restrict Chinese chip growth but to encourage the development of domestic and foreign non-China chip manufacturers. Imposing tariffs on Chinese chips as an intermediate ingredient is necessary to create a proper competitive environment. Ultimately, the goal is to ensure a diversification of fab locations beyond China that will create lower prices for consumers overall. 

How would tariffs on final goods containing Chinese chips be enforced? The policy issue of sanctioning and restricting an intermediate product is, unfortunately, not new. It is well known that Chinese precursor chemicals, often imported into Mexico, form much of the raw inputs for deadly fentanyl that is driving the United States opioid epidemic. Taking a cue from this example, we further suggest the creation of an internationally maintained database of products manufactured using Chinese semiconductors. As inspiration, the National Institutes of Health / NCATS maintains the Global Substance Registration System, a database that categorizes chemical substances, along with their commonly used names, regulatory classification, and relationships with other related chemicals. Such a database could be administered by the Commerce Department’s Bureau of Industry and Security, allowing the personnel who enforce the tariffs to also collect all relevant information in one place. 
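To make the database proposal concrete, a registry entry might look something like the following. This is a hypothetical schema sketched for illustration, loosely modeled on the GSRS idea described above; the actual structure would be set by the Bureau of Industry and Security.

```python
# Hypothetical record format for the proposed chip-content registry,
# loosely modeled on the NIH/NCATS Global Substance Registration System.
from dataclasses import dataclass, field

@dataclass
class ChipComponentRecord:
    chip_make: str          # manufacturer name
    chip_model: str         # part / model number
    wafer_origin: str       # country where the wafer was fabricated
    atp_country: str        # where assembly, testing, and packaging occurred
    market_value_usd: float # international market rate, used for tariff math
    known_vulnerabilities: list[str] = field(default_factory=list)

@dataclass
class ImportedProductRecord:
    importer: str
    product_model: str
    components: list[ChipComponentRecord]

    def tariff_basis(self) -> float:
        """Sum of Chinese-origin wafer values subject to the component tariff."""
        return sum(c.market_value_usd for c in self.components
                   if c.wafer_origin == "CN")
```

A record structure like this would serve both purposes named in the text: computing the component-level duty at import, and, if a vulnerability in a listed chip is later disclosed, quickly identifying which imported products contain it.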

Companies importing products into the U.S. would be required to register the make and model of all Chinese chips used in each of their products so that the United States and participating countries could impose corresponding sanctions. Products imported to the U.S. would be subject to random checks involving disassembly in Commerce Department workshops, with failure to report a sanctioned semiconductor component making a company subject to additional tariffs and fines. Manual disassembly is painstaking and difficult, but regular, randomized inspections of imported products are the only way to truly verify their content. 

The maintenance of such a database would bring follow-on national security benefits, in that the disclosure of any future vulnerability in a Chinese electronic component would allow quick diagnosis of what systems, including critical infrastructure, might be immediately vulnerable. We believe that creating an enforcement infrastructure that coordinates information between the U.S. and partner countries is a necessary first step to ensuring that tariffs are effective. 

Zone Defense: International Cooperation in the Semiconductor Tariffs 

At first glance, tariff action by the United States on Chinese-produced goods would appear to be a difficult coordination problem. By voluntarily declining an influx of cheaply-priced goods, American consumers exacerbate an existing trade glut in world semiconductor markets, allowing and incentivizing other nations to purchase these chips in greater volume and at a lower price. 

However, rather than dissuading further sanctions in world markets, tariffs may actually spur further coordination in blocking Chinese imports. The Biden Administration’s imposition of tariffs on Chinese electric vehicles coincided with parallel sanctions imposed by the European Union, India, and Brazil. As Chinese overcapacity in EVs is rejected by U.S. markets, other countries face intensified concerns about the potential for below-price “dumping” of products that could harm domestic industry. 

However, this ad hoc international cooperation is still fragile and tentative and must be encouraged in order to create an "everywhere but China" semiconductor supply chain. Further, while countries impose tariffs to protect existing automotive and steel industries, global semiconductor manufacturing is currently concentrated in the Indo-Pacific. Coordinating against China therefore calls on countries not just to impose tariffs that protect existing industries, but to impose "nursery" tariffs that protect nascent semiconductor development, even where a domestic industry does not yet exist.

A commonsense first step to building an atmosphere of trust is to take actions protecting partner countries from retaliation in the form of Chinese trade restrictions. In response to EU tariffs on Chinese EVs, Beijing has already threatened retaliatory restrictions on chicken feet, pork, and brandy. For a bloc as large as the European Union, these restrictive sanctions can irritate an important political constituency. For a smaller or less economically powerful country, these measures might be decisive in sending the message that semiconductor tariffs are not worth the risk. 

The United States should negotiate bilateral treaties with partner nations to erect tariffs against Chinese manufacturing, with the agreement that, in the case of Chinese retaliation against predetermined fundamental national industries, the United States government will buy up excess capacity at slightly discounted prices and release it to the American market. This preemptive protection of allied trade will blunt China’s ability to harm U.S. partners and allies. Raising tariffs on imported goods also imposes costs on the Chinese consumer, meaning that in the best case, the decreased effectiveness of these tools will deter the PRC from attempting such measures in the first place. 

Recommendation #3: Mitigating the Threat of Existing Chips 

No matter the success of the previous measures mentioned, it will be impossible to keep Chinese products entirely outside the U.S. market. Therefore, a strategy is required for managing the operational risks posed by Chinese chips that have and will exist inside the U.S. domestic sphere. 

Precisely defining the scope of the threat is very important. Too narrow a definition might allow real threats to pass through, while an overly wide one may expend time and resources over nothing. A recent British effort to exclude Chinese-made cap badges presents a cautionary tale. By choosing a British supplier over an existing Chinese one after the acquisition process was already underway, the UK incurred an additional delay in its military pipeline, not to mention the confusion caused by such an administrative pivot. Implanting GPS trackers or listening devices within small pieces of metal seems both impractical and far-fetched, though the PRC surely enjoys the chaos and expense such a panic can cause.

We consider it analogously unlikely that China is currently aiming to insert intentional defects into its semiconductor manufacturing. First, individual wafers are optimized for their extremely low cost of production, meaning that inserting a carefully designed (and hidden) flaw would introduce additional costs that could compromise the goal of low-cost manufacturing. Any kind of remotely activated “kill switch” would require some kind of wireless receiver–and a receiver of any reasonable strength could not be effectively hidden on a large scale. Second, such a vulnerability would have to be inserted only into wafers that are eventually purchased by the U.S. and its allies. If not, then any attempt to activate a remote exploit could risk compromising uninvolved countries or even the Chinese domestic market, either by accidentally triggering unintended chips or by providing a hardware vulnerability that could be re-used by Western cyber operations. Deliberately planting such vulnerabilities would thus require not just extreme technical precision, but a careful accounting of where vulnerable chips arrive in the supply chain.

Nonetheless, the existence of Chinese chips in the American market can accomplish much without explicitly-designed defects or “kill switches”. Here, a simple lack of transparency may be enough. China currently requires that all software vulnerabilities be reported to the Ministry of Industry and Information Technology, but does not have any corresponding public reporting requirement. This raises the fear that the Chinese government may be “stockpiling” vulnerabilities in Chinese-produced products, which may be used in future cyber operations. Here, China does not need to explicitly build backdoors into its own hardware but may simply decline to publicly disclose vulnerabilities in software in order to attack the rest of the world. 

Shining a Light on Untrusted Hardware 

The likelihood of cooperation between Chinese industry and the CCP exposes a potentially important risk. Chinese software is often deployed atop or alongside Chinese semiconductors and is particularly dangerous in the form of hardware drivers, the “glue code” that ties software to low-level hardware components. These drivers operate with high privileges by default and are typically closed-source, and thus difficult to examine. We believe that vulnerable drivers may be a key vector of Chinese espionage or cyber threats. In 2019, Microsoft disclosed a privilege escalation vulnerability found in a Huawei driver. Although Huawei cooperated with Microsoft, it is unclear under the current legal regime whether similar vulnerabilities discovered by Huawei would be reported and patched, or kept as an asset by the Chinese government. The proliferation of Chinese drivers packaged with cheap hardware thus means that the Chinese Communist Party will have access to a broad, and potentially pre-mapped, attack surface with which to exploit U.S. government services. 
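Driver-level exposure can at least be surveyed with tooling agencies already have. As a minimal sketch, assuming a Windows host and a purely illustrative vendor watchlist (the names below are placeholders, not an official designation list), the built-in driverquery utility can be scripted to flag installed drivers for human review:

    import csv
    import io
    import subprocess

    # Illustrative watchlist; a real review would use an official entity list.
    WATCHLIST = {"huawei", "hisilicon"}

    def suspect_drivers():
        """Enumerate installed Windows drivers and flag watchlist matches."""
        # driverquery ships with Windows; /v adds detail, /fo csv makes the
        # output machine-readable.
        out = subprocess.run(
            ["driverquery", "/v", "/fo", "csv"],
            capture_output=True, text=True, check=True,
        ).stdout
        rows = csv.DictReader(io.StringIO(out))
        # Match the watchlist against any field (module name, path, etc.).
        return [r for r in rows if any(v in str(r).lower() for v in WATCHLIST)]

    if __name__ == "__main__":
        for drv in suspect_drivers():
            print(drv.get("Module Name"), drv.get("Display Name"))

An inventory of this kind does not prove a driver is malicious; it only scopes which closed-source components warrant the deeper analysis described below. 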

The first policy step here is obvious: banning the use of Chinese chips in U.S. federal government acquisitions. Such a ban has already been proposed as a Defense Department regulation set to take effect in 2027; if possible, that date should be moved up to 2026 or earlier. To enforce the ban, continuous research should be undertaken to map the supply chains that produce U.S. government semiconductors; how best to accelerate and enforce it is an ongoing policy question beyond the scope of this paper. 

However, a deeper question is how to protect the myriad components of critical infrastructure, both formal and informal. The Cybersecurity and Infrastructure Security Agency (CISA) has defined 16 sectors of critical infrastructure whose failure could materially disrupt or endanger the lives of American citizens. The recent discovery of the Volt Typhoon threat group revealed the willingness of the Chinese government to penetrate U.S. critical infrastructure using vulnerable components. 

While some of the 16 CISA sectors, such as Government Services and the Defense Industrial Base, are within the purview of the federal government, many others, such as Healthcare, Food and Agriculture, and Information Technology, are run via complex partnerships between State, Local, Tribal, and Territorial (SLTT) governments and private industry. Although best efforts should be made to insulate these sectors from over-reliance on China, fully quarantining them from Chinese chips is simply unrealistic. We should therefore explore proactive efforts at mitigation in the case of disruption. 

A first step would be to establish a team at CISA to decompile or reverse-engineer the drivers for Chinese hardware that is known to operate within U.S. critical infrastructure. Like manual disassembly, this is an expensive and arduous process, but it has the advantage of reducing an unknown or otherwise intractable problem to an issue of engineering. Special care should be taken to catalog and prioritize the pieces of Chinese hardware that touch the most critical infrastructure systems, such as Programmable Logic Controllers (PLCs) in energy infrastructure and processors in hospital databases. This approach can be coordinated with the threat database described in the previous section to disassemble and profile the drivers of the highest-impact semiconductor products first. If any vulnerabilities are found, warnings can be issued to critical infrastructure providers and patches distributed to the relevant parties. 
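Because decompilation capacity is scarce, target selection matters. The sketch below illustrates one way such a triage queue might be scored; the fields, weights, and example entries are hypothetical, not an actual CISA methodology:

    import math
    from dataclasses import dataclass

    @dataclass
    class HardwareTarget:
        # All fields are illustrative; a real catalog would be far richer.
        name: str
        sector_criticality: int  # 1-5, from sector risk assessments
        deployment_count: int    # estimated units in critical infrastructure
        driver_privilege: int    # 1-5; kernel-mode drivers score highest

    def triage_score(t: HardwareTarget) -> float:
        """Rank targets: highly privileged drivers in critical sectors first,
        with a log-dampened deployment count so sheer consumer volume
        does not outrank criticality."""
        return t.sector_criticality * t.driver_privilege * math.log10(t.deployment_count)

    targets = [
        HardwareTarget("energy-sector PLC", 5, 20_000, 5),
        HardwareTarget("hospital database processor", 4, 8_000, 4),
        HardwareTarget("consumer router SoC", 2, 900_000, 3),
    ]
    for t in sorted(targets, key=triage_score, reverse=True):
        print(f"{t.name}: {triage_score(t):.0f}")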

Brace for Impact: Building Infrastructure Resiliency 

Even if neither the reduction of Chinese hardware nor the proactive search for driver vulnerabilities prevents a Chinese attack, the United States should be prepared to mitigate the harms of a cyber crisis. 

A further step toward this goal would be the institution of resiliency protocols and drills for designated critical infrastructure providers. The 2017 WannaCry ransomware attack substantially incapacitated the UK National Health Service by locking providers out of Electronic Medical Record (EMR) systems. Mandating routine paper backups of digital medical records is one example of a resiliency strategy that could be deployed to ensure emergency functioning even in the case of a major service disruption. 

Another protective step is to mandate regular cyber training for critical infrastructure providers. CISA could work in cooperation with State, Local, Tribal, and Territorial regulatory bodies to identify critical pieces of infrastructure. CISA could develop hypothetical scenarios involving outages of critical Information Technology services, and work with local infrastructure providers, such as hospitals, municipal water services, and transit providers, to create plans for continuing to operate in the event of a crisis. CISA could also prepare baseline strategies, such as maintaining non-internet-connected control systems or offline backups of critical information, which individual infrastructure providers could adapt to best protect their services in the event of an attack. These plans could then be exercised in mock ‘cyber drills’ to test preparedness for an incident. 

Ultimately, plans of this kind prepare only for service disruptions; they do not address the longer-term impacts of breaches of confidentiality or the targeted manipulation of sensitive data. However, because we believe the likelihood of targeted or sophisticated vulnerabilities in Chinese chips is relatively low, these brute-force attacks are the most likely threat models. Preparing for the most basic and unsophisticated service disruptions is a good first step toward mitigating the harm of any potential cyber attack, including those not directly facilitated by Chinese hardware. Cyber-resiliency planning is therefore a strong general recommendation for protecting Americans from future threats. 

Conclusion

We have presented the issue of international semiconductor competition along three major axes: increasing production outside of China, containing an oversupply of Chinese semiconductors, and mitigating the risks of the Chinese chips that remain in the U.S. market. For each challenge, we have proposed a slate of policies and guidance on how to proceed, in three complementary categories: 

Boosting non-China semiconductor production 

Containing Chinese exports 

Mitigating the threat of chips in the U.S. market 

We hope that this contribution will advance future discussions on the semiconductor trade and make a measurable impact on bolstering U.S. national security. 

Using Targeted Industrial Policy to Address National Security Implications of Chinese Chips

Last year the Federation of American Scientists (FAS), Jordan Schneider (of ChinaTalk), Chris Miller (author of Chip War), and Noah Smith (of Noahpinion) hosted a call for ideas to address the U.S. chip shortage and Chinese competition. A handful of ideas were selected based on their feasibility and bipartisan nature. This memo is one of them.

In recent years, China has heavily subsidized its legacy chip manufacturing capabilities. Although U.S. sanctions have restricted China’s access to and ability to develop advanced AI chips, they have done nothing to undermine China’s production of “legacy chips,” which are semiconductors built on process nodes of 28nm or larger. It is important to clarify that labels such as “22nm,” “20nm,” “28nm,” and “32nm” are simply commercial names for technology generations; they have no direct correlation to actual manufacturing specifications such as gate length or half-pitch. Different firms also use different manufacturing specifications: Intel’s 22nm lithography process, for instance, uses a 193nm-wavelength argon fluoride (ArF) laser with a 90nm gate pitch and a 34nm fin height, and these specifications vary between fab plants such as Intel’s and TSMC’s. The prominence of these chips makes them a critical technological component in applications as diverse as medical devices, fighter jets, computers, and industrial equipment. Since 2014, state-run funds in China have invested more than $40 billion into legacy chip production to meet the goal of 70% chip self-sufficiency by 2030. Chinese legacy chip dominance—made possible only through the government’s extensive and unfair support—will undermine the position of Western firms and render them less competitive against distorted market dynamics.

Challenge and Opportunity

Growing Chinese capacity and “dumping” will deprive non-Chinese chipmakers of substantial revenue, making it more difficult for these firms to maintain a comparative advantage. China’s profligate industrial policy has damaged global trade equity and threatens to create an asymmetrical market. The ramifications of this economic problem will be felt most acutely in America’s national security rather than by consumers, who will benefit from the low costs of Chinese dumping programs until a hostile monopoly is established. Investors—anticipating an impending global supply glut—are already encouraging U.S. firms to reduce capital expenditures by canceling planned semiconductor fabs, undermining the nation’s supply chain and self-sufficiency. In some cases, firms have decided to cease manufacturing particular types of chips outright due to profitability concerns and pricing pressures. Granted, chip markets are intentionally opaque, so pricing data alone cannot fully define the extent of this phenomenon; however, instances such as Taiwan’s withdrawal from certain chip segments shortly after a late-2023 price war between China and its competitors indicate the severity of the issue. If they continue, similar price disputes could severely undermine the competitiveness of non-Chinese firms, especially since Chinese firms are not subject to the same fiscal constraints as their unsubsidized counterparts. In an industry with such high fixed costs, Chinese state subsidization gives these firms a great advantage and imperils U.S. competitiveness and national security.

Were the U.S. to engage in armed conflict with China, reduced industrial capacity could quickly impede the military’s ability to manufacture weapons and other materials. Critical supply chain disruptions during the COVID-19 pandemic illustrate how the absence of a single chip can hold hostage entire manufacturing processes; if China gains absolute legacy chip manufacturing dominance, these concerns would be further amplified as Chinese firms become able to outright deny American access to critical chips, impose harsh costs through price hikes, or extract diplomatic concessions and quid-pro-quo arrangements. Furthermore, decreased Chinese reliance on Taiwanese semiconductors reduces China’s economic incentive to pursue a diplomatic solution in the Taiwan Strait—making armed conflict in the region more likely. This weakened posture endangers global norms and the balance of power in Asia—undermining American economic and military hegemony in the region.  

China’s legacy chip manufacturing is fundamentally an economic problem with national security consequences. The state ought to interfere in the economy only when markets do not operate efficiently and when the conduct of foreign adversaries creates market distortion. While the authors of this brief do not support carte blanche industrial policy to advance the position of American firms, we believe the Chinese government’s efforts to promote legacy chip manufacturing warrant American intervention to ameliorate the harms it has created. U.S. regulators have forced American companies to grapple with the sourcing problems surrounding Chinese chips; however, the chip control problem is largely epistemic. It is not clear which firms do and do not use Chinese chips, and even if U.S. regulators knew, there is little political appetite to ban them, as corporations would then pass higher costs onto consumers and exacerbate headline inflation. Traditional policy tools for achieving economic objectives—such as sanctions—are therefore largely ineffectual in this circumstance. More innovative solutions are required.  

If its government fully commits to the policy, there is little U.S. domestic or foreign policy can do to prevent China from developing chip independence. While American firms can be incentivized to outcompete their Chinese counterparts, America cannot override Chinese political directives to source chips locally. This is true because China lacks the political restraints of Western countries in financially incentivizing production, but also because in the past—under lighter sanctions regimes—China’s Semiconductor Manufacturing International Corporation (SMIC) acquired multiple ASML (Advanced Semiconductor Materials Lithography) deep ultraviolet (DUV) machines. Consequently, any policy that seeks to mitigate the perverse impact of Chinese dominance of the legacy chip market must a) boost the competitiveness of American and allied firms in “third markets” such as Indonesia, Vietnam, and Brazil, and b) de-risk America’s supply chain from the market distortions and overreliance that Chinese policies have created. China’s growing global share of legacy chip manufacturing threatens to remake the global chip landscape in a way that displaces U.S. commercial and security interests. The United States must therefore undertake both defensive and offensive measures to ensure a coordinated response to Chinese disruption.  

Plan of Action

Considering the above, we propose the United States enact a policy mutually predicated on innovative technological reform and targeted industrial policy to onshore manufacturing capabilities. 

Recommendation 1. Weaponizing electronic design automation 

Policymakers must understand that, from a lithography perspective, the United States controls all essential technologies for the design and manufacture of integrated circuits. This is a critically overlooked dimension in contemporary policy debates, because electronic design automation (EDA) software closes the gap between high-level chip design in software and the lithography system itself. Good design simulates a proposed circuit before manufacturing, plans large integrated circuits (ICs) by “bundling” small subcomponents together, and verifies that the design is connected correctly and will deliver the required performance. Although often overlooked, the photolithography process, together with the steps that precede it, is as complex as the chip design itself. 

No profit-maximizing manufacturer would print a chip “as designed,” because it would suffer distortions and degradations throughout the printing process; EDA software is therefore imperative for mitigating imperfections throughout the photolithography process. In much the same way that software in a home printer automatically detects the paper type (printer paper vs. glossy photo paper) and adjusts the mixture of solvent, resins, and additives to print properly, EDA software learns design kinks and responds dynamically. In the absence of such software, the yield of usable chips would be much lower, making these products less commercially viable. Contemporary public policy discourse focuses only on chips as a commodified product, without recognizing the software ecosystem that is imperative to their design and use. 

Today, there exist only two major suppliers of EDA software for semiconductor manufacturing: Synopsys and Cadence Design Systems. This reality presents a great opportunity for the United States to assert dominance in the legacy chip space. By hosting all EDA in a U.S.-based cloud—for instance, a data center located in Las Vegas or another secure location—America can force China to purchase the computing power needed for simulation and verification of each chip it designs. This policy would mandate Chinese reliance on U.S. cloud services to run electromagnetic simulations and validate chip designs. Under this proposal, China could use the latest EDA software only if it is hosted in the U.S., allowing American firms to a) cut off access at will, rendering the technology useless, and b) gain insight into homegrown Chinese designs built on the platform. Because the software would be hosted on a U.S.-based cloud, Chinese users would never download it, greatly mitigating the risk of foreign hacking or intellectual property theft. While the United States cannot control chips outright given Chinese production, it can control where they are integrated. A machine without instructions is inoperable, and the United States can make China’s semiconductors obsolete.  
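To make the control point concrete, consider a policy gate sitting in front of a U.S.-hosted EDA cluster. This is a minimal sketch under stated assumptions: the entity list, job fields, and audit log are invented for illustration and correspond to no vendor’s actual API:

    from dataclasses import dataclass

    # Hypothetical denied-entity list; real screening would use official lists.
    DENIED_ENTITIES = {"example-sanctioned-fab"}

    @dataclass
    class SimulationJob:
        customer: str
        design_id: str
        node_nm: int

    audit_log = []  # design metadata retained for regulator visibility

    def submit(job: SimulationJob) -> bool:
        """Run a simulation/verification job only for permitted customers."""
        if job.customer in DENIED_ENTITIES:
            return False  # access can be cut off at will, per the proposal
        audit_log.append((job.customer, job.design_id, job.node_nm))
        # ... dispatch to the simulation cluster here ...
        return True

    print(submit(SimulationJob("example-sanctioned-fab", "IC-001", 28)))  # False
    print(submit(SimulationJob("allied-oem", "IC-002", 22)))              # True

The design choice doing the work is that the software never leaves U.S. jurisdiction: both denial of service and visibility into designs follow from hosting alone. 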

The emergence of machine learning has introduced substantial design innovation in older lithography technologies. For instance, Synopsys has used new techniques to discern the optimal routing for the wires that link chip circuits, factoring in environmental variables to simulate the patterns a photomask design would project throughout the lithography process. While the 22nm process is not cutting-edge, it is legacy only in the sense of its architecture. Advancements in hardware design and software illustrate the dynamism of this facet of the semiconductor supply chain. In extraordinary circumstances, the United States could also curtail usage of such software in the event of a total trade war. Weaponizing this proprietary software could compel China to divulge all source code for auditing purposes, since hardware cannot work without a software element.

The United States must also utilize its allied partnerships to restrict critical replacement components from enabling injurious Chinese competition. Software notwithstanding, China currently has the capability to produce 14nm nodes because SMIC acquired multiple ASML DUV machines under lighter Department of Commerce restrictions; however, SMIC relies heavily on chip-making equipment imported from the Netherlands and Japan. While the United States cannot alter the fact of possession, it can take limited action against the realization of these tools’ potential by restricting China’s ability to import the replacement parts needed to service these machines, such as the lenses they require to operate. Only the German firm Zeiss can produce the lenses that ArF lasers require to focus—illustrating the importance of adopting a regulatory outlook that encompasses every vertical within the supply chain. The utility of controlling critical components is further amplified by the fact that American and European firms have limited ability to enforce intellectual property laws against Chinese entities. For instance, while many different ICs are manufactured at the 22nm node, not all run on a common instruction set such as ARM; and even where designs do run on a protected instruction set, the United States has no power to enforce domestic intellectual property law in a Chinese jurisdiction. China’s capability to reverse engineer and replicate Western-designed chips further underscores the importance of controlling 1) the EDA landscape and 2) ancillary components in the chip manufacturing process. This reality presents a tremendous yet overlooked opportunity for the United States to reassert control over China’s legacy chip market.  

Recommendation 2. Targeted industrial policy 

In the policy discourse surrounding semiconductor manufacturing, this paper contends that too much emphasis has been placed on the chips themselves. It is important to note that there are some areas in which the United States is not commercially competitive with China, such as in the NAND flash memory space. China’s Yangtze Memory Technologies has become a world leader in flash storage and can now manufacture a 232-layer 3D NAND on par with the most sophisticated American and Korean firms, such as Western Digital and Samsung, at a lower cost. However, these shortcomings do not preclude America from asserting dominance over the semiconductor market as a whole by leveraging its dynamic random-access memory (DRAM) dominance, bolstering nearshore NAND manufacturing, and developing critical mineral processing capabilities. Both DRAM and NAND are essential components for any computationally integrated technology.  

While the U.S. cannot compete on rote manufacturing prowess because of high labor costs, it would be strategically beneficial to build supply chain redundancies in NAND and rare earth metal processing. China currently processes upwards of 90% of the world’s rare earth metals, which are critical inputs to every type of semiconductor chip. While the U.S. possesses strategic reserves for commodities such as oil, it holds no meaningful reserve of rare earth metals—making this a critical national security threat. Should China stop processing rare earth metals for the U.S., the price of any type of semiconductor—in any form factor—would increase dramatically. Furthermore, as a strategic matter, the United States would not have accomplished its national security objectives if it built manufacturing capacity yet lacked the critical inputs to supply it. Any legacy chip proposal must therefore first establish sufficient rare earth metal processing capabilities or a strategic reserve of these critical resources.  

Furthermore, given the advanced status of U.S. technological manufacturing, it makes little sense to onshore legacy chip manufacturing outright—especially considering high U.S. costs and the substantial diversion of intellectual capital such efforts would require. Each manufacturer must develop its own manufacturing process from scratch: a modern fab runs 24×7 and has a complicated workflow, with its own techniques and software for lithography. TSMC, for instance, no longer focuses on older-generation lithography (e.g., 22nm) because doing so would be unprofitable while its highly skilled technicians and scientists cannot even fulfill existing demand for 3nm and 4nm production. The United States is better off developing its comparative advantage by specializing in cutting-edge chip manufacturing and in research and development; however, while American expertise remains expensive, America has wholly neglected the potential of its southern neighbors for shoring up rare earth metals processing. Developing Latin American metals processing—and legacy chip production—capabilities can mitigate these national security threats. Hard drive manufacturers have employed a similar nearshoring approach with great success. 

To address both rare earth metals and onshoring concerns, the United States should pursue an economic integration framework with nations in Latin America’s Southern Cone, targeting a partialized (or multi-sectoral) free trade agreement with the Southern Common Market (MERCOSUR) bloc. The United States should pursue this policy along two industry fronts, 1) waiving the Common External Tariff for United States’ petroleum and other fuel exports, which currently represent the largest import group for Latin American members of the bloc, and 2) simultaneously eliminating all trade barriers on American importation of critical minerals––namely arsenic, germanium, and gallium––which are necessary for legacy chip manufacturing. Enacting such an agreement and committing substantial capital to the project over a long-term time horizon would radically increase semiconductor manufacturing capabilities across all verticals of the supply chain. Two mutually inclusive premises underpin this policy’s efficacy: 

Firstly, creating economic interdependence with a bloc of Latin American states (as opposed to a single nation) diversifies risk for the United States; each nation provides different sets and volumes of critical minerals and has competing foreign policy agendas. This reduces the capacity of any state to exert meaningful, organized diplomatic pressure on the United States, as supply lines can be swiftly readjusted within the bloc. Moreover, MERCOSUR countries are major energy importers, particularly of Bolivian natural gas and American petroleum. Under an energy-friendly U.S. administration, the effects of this policy would be especially pronounced: low petroleum costs enable the U.S. to subtly reassert its geopolitical sway within its regional sphere of influence, notably in light of newly friendly Argentinian and Paraguayan governments. China has struggled to ratify its own trade accords with the bloc given industry vulnerability; this initiative would further undermine its geopolitical influence in the region. Refocusing critical mineral production within this regional geography would decrease American reliance on Chinese production. 

Secondly, nearshoring the semiconductor supply chain would reduce transport costs, decrease American vulnerability to intercontinental disruptions, and mitigate geopolitical reliance on China. Reduced extraction costs in Latin America, minimized transportation expenses, and lower labor costs in Midwestern and Southern U.S. states in particular would enable America to maintain export competitiveness as a supplier to ASEAN’s booming technology industry in adjacent sectors, an indication that China will not automatically fill market distortions. Furthermore, the establishment of transcontinental commerce initiatives should be accompanied by investment arbitration procedures compliant with the General Agreement on Tariffs and Trade’s Dispute Settlement Understanding, designating the World Trade Organization as the exclusive forum for dispute settlement. 

This policy is necessary to prevent corrupt states from backpedaling on established commitments, which has historically impeded corporate involvement in the Southern Cone. This international legal security mechanism serves to reassure investors and render cooperation with American enterprises mutually attractive. However, partial free trade accords for primary-sector materials are not sufficient to revitalize American industry and shift supply lines. To address the demand side, downward pressure on pricing and reduced geopolitical risk should be accompanied by state-subsidized low-interest loans, with available rate resets for approved legacy chip manufacturers, and a special-tier visa for personnel hired into legacy chip manufacturing. Considering the sensitive national security interests at stake, the U.S. Federal Contractor Registration ought to employ the same awarding mechanisms and security filtering criteria used for federal arms contracts in its company auditing. Under this scheme, only vetted and historically capable legacy chip manufacturing firms would be able to take advantage of significant subventions and exceptional ‘wartime’ loans. Two reasons underpin the need for this martial, yet market-oriented, industrial policy. 

Firstly, legacy chip production requires highly specialized labor and immensely expensive fixed costs given the nature of the accompanying machinery. Without targeted low-interest loans, the significant capital investment required to upgrade and expand chip manufacturing facilities would be prohibitively high–potentially eroding the competitiveness of American and allied industries in markets heavily saturated with Chinese subsidies. Such mechanisms for increased and cheap liquidity also make it easier to recruit highly specialized talent from China, Taiwan, Germany, the Netherlands, and elsewhere by offering more competitive compensation packages and playing on the attractiveness of the American lifestyle. This approach would mimic the postwar “Operation Paperclip,” executed piecemeal at the discretion of approved legacy chip suppliers.  

Secondly, the investment fluidity that accompanies significant amounts of accessible capital reduces stasis in the research and development of sixth-generation automotive, multi-use legacy chips (in both autonomous and semi-autonomous systems). Much of this improvement occurs through trial-and-error processes within state-of-the-art facilities, under the long-term commitment of manufacturing, research, and operations teams. 

Acknowledging the strategic importance of centralizing, de-risking, and reducing reliance on foreign suppliers will safeguard the economic stability, national defense capabilities, and the innovative flair of the United States––restoring the national will and capacity to produce on its own shores. The national security ramifications of Chinese legacy chip manufacturing are predominantly downstream of their economic consequences, particularly vis-à-vis the integrity of American defense manufacturing supply chains. In implementing the aforementioned solutions and moving chip manufacturing to closer and friendlier locales, American firms can be well positioned to compete globally against Chinese counterparts and supply the U.S. military with ample chips in the event of armed conflict.  

In 2023, the Wall Street Journal exposed the fragility of American supply chain resilience when it profiled how one manufacturing accident took offline 100% of the United States’ production capability for black powder—a critical component of mortar shells, artillery rounds, and Tomahawk missiles. This incident illustrates how critical a consolidated supply chain can be for national security and the importance of mitigating overreliance on China for critical components. As firms seek lower prices for their chips, ensuring adequate capacity is a significant component of a successful strategy to address China’s growing global share of legacy chip manufacturing. However, there are additional national security concerns for legacy chip manufacturing that supersede its economic significance–mitigating supply chain vulnerabilities is among the most consequential of these considerations.  

Lastly, when there are substantial national security objectives at stake, the state is justified in acting independently of economic considerations; markets are sustained only by the imposition of binding and common rules. Some have argued that the possibility of cyber sabotage and espionage through military applications of Chinese chip technology warrants accelerating the timeline of procurement restrictions. Section 5949 of the National Defense Authorization Act for Fiscal Year 2023 prohibits the procurement of China-sourced chips from 2027 onwards. Furthermore, the Federal Communications Commission has the power to restrict China-linked semiconductors in U.S. critical infrastructure under the Secure Networks Act, and the U.S. Department of Commerce reserves the right to restrict China-sourced semiconductors if they pose a threat to critical communications and information technology infrastructure.  

However, Matt Blaze’s 1994 article “Protocol Failure in the Escrowed Encryption Standard” exposed the shortcomings of supposed hardware backdoors such as the NSA’s “Clipper chip,” designed in the 1990s to surveil users. In the absence of functional software, a Chinese-designed hardware backdoor into sensitive applications could not function, much like a printer trying to operate without an ink cartridge. Therefore, instead of outright banning inexpensive Chinese chips and putting American firms at a competitive disadvantage, the federal government should require Chinese firms to release the source code for the firmware and supporting software of the chips they sell to Western companies. This would allow these technologies to be independently built and verified without undermining the competitive position of American industry. The U.S. imposed sanctions against Huawei in 2019 over the potential espionage risks that reliance on Chinese hardware poses. While tighter regulation of Chinese semiconductors in sensitive areas may seem a natural and pragmatic extension of this logic, it is unnecessary and undermines American dynamism.

Conclusion

Considering China’s growing global share of legacy chip manufacturing as a predominantly economic problem with substantial national security consequences, the American foreign policy establishment ought to pursue 1) a new technological outlook that exploits all facets of the integrated chip supply chain—including EDA software and allied replacement component suppliers—and 2) a partial free-trade agreement with MERCOSUR to further industrial policy objectives.  

To curtail Chinese legacy chip dominance, the United States should weaponize its monopoly on electronic design automation software. By effectively forcing Chinese firms to purchase computing services from a U.S.-based cloud, American EDA firms can audit and monitor Chinese innovations while reserving the ability to deny service during armed conflict. Restricting allied firms’ ability to supply Chinese manufacturers with ancillary components can likewise slow the pace of Chinese legacy chip ascendance.  

Furthermore, although China no longer relies on the United States or allied countries for NAND manufacturing, the United States and its allies maintain DRAM superiority. The United States must leverage these capabilities to maintain Chinese reliance on its DRAM prowess and sustain its competitive edge, while considering restricting the export of this technology for Chinese defense applications under extraordinary circumstances. Simultaneously, efforts to nearshore NAND technologies in South America can slow the pace of Chinese legacy chip ascendance, especially if implemented alongside a strategic decision to reduce reliance on Chinese rare earth metals processing.  

In nearshoring critical mineral inputs to preserve national security and reduce costs, the United States should adopt a market-oriented industrial policy of rate-reset, state-subsidized low-interest loans for vetted legacy chip manufacturing firms. Synergy between greater competitiveness, capital solvency, and de-risked supply chains would enable U.S. firms to compete against Chinese counterparts in critical “third markets” and reduce supply chain vulnerabilities that undermine national security. As subsidy-induced Chinese market distortions weigh less on the commercial landscape, the integrity of American defense capabilities will improve, especially if bureaucratic agencies move to further insulate critical U.S. infrastructure against potential cyber espionage.

An “Open Foundational” Chip Design Standard and Buyers’ Group to Create a Strategic Microelectronics Reserve

Last year the Federation of American Scientists (FAS), Jordan Schneider (of ChinaTalk), Chris Miller (author of Chip War), and Noah Smith (of Noahpinion) hosted a call for ideas to address the U.S. chip shortage and Chinese competition. A handful of ideas were selected based on their feasibility and bipartisan nature. This memo is one of them.

Semiconductors are not one industry, but thousands. They range from ultra-high-value advanced logic chips like H100s to bulk discrete electronic components and basic integrated circuits (ICs). Leading-edge chips are advanced logic, memory, and interconnect devices manufactured in cutting-edge facilities requiring production processes of awe-inspiring precision. Leading-edge chips confer differential capabilities, and “advanced process nodes” enable the highest-performance computation and the most compact and energy-efficient devices. This bleeding-edge performance derives from the efficiencies enabled by more densely packed circuit elements in a computer chip. Smaller transistors require lower voltages, and more finely packed ones can compute faster. 

Devices manufactured with older process nodes, 65nm and above, form the bulk by volume of the devices we use. These include discrete electrical components like diodes or transistors, power semiconductors, and low-value integrated circuits such as bulk microcontrollers (MCUs). We term these inexpensive logic chips, such as MCUs, memory controllers, and clock setters, “commodity ICs”. While the keystone components in advanced applications are manufactured at the leading edge, older nodes are the table stakes of electrical systems. These devices supply power, switch voltages, transform currents, command actuators, and sense the environment. We will collectively term them foundational chips, as they provide the platform upon which all electronics rest. And their supply can be a point of failure: the automotive MCU shortage provides a bitter lesson that even the humblest device can throttle production. 

Foundational devices themselves do not typically enable differentiating capabilities. In many applications, such as computing or automotive, they simply enable basic functions. These devices are low-cost, low-margin goods, made with a comparatively simpler production process. Unfortunately, a straightforward supply does not equate to a secure one. Foundational chips are manufactured by a small number of firms concentrated in China, due in part to long-running industrial policy efforts by the Chinese government, including significant production subsidies. The CHIPS and Science Act was mainly about innovation and international competitiveness: reshoring a significant fraction of leading-edge production to the United States in the hope of returning valuable communities of engineering practice (Fuchs & Kirchain, 2010). While these policy goals are vital, foundational chip supply represents a different challenge and must be addressed by other interventions. 

The main problem posed by the existing foundational chip supply is resilience. They are manufactured by a few geographically clustered firms and are thus vulnerable to disruption, from geopolitical conflicts (e.g. export controls on these devices) or more quotidian outages such as natural disasters or shipping disruptions. 

There is also concern that foreign governments may install hardware backdoors in chips manufactured within their borders, enabling them to deactivate the deployed stock of chips. While this is a meaningful security consideration, it is less applicable to foundational devices, as their low complexity makes such backdoors more challenging. A DoD analysis found mask and wafer production to be the manufacturing process steps most resilient to adversarial interference (Coleman, 2023, p. 36). There already exist “trusted foundry” electronics manufacturers for critical U.S. defense applications concerned about confidentiality; the policy interventions proposed here seek to address vulnerability to a conventional supply disruption. This report will first outline the technical and economic features of foundational chip supply that are barriers to a resilient supply, and then propose policy to address those barriers. 

Challenge and Opportunity

Technical characteristics of the manufacture and end-use of foundational microelectronics make supply especially vulnerable to disruption. Commodity logic ICs such as MCUs or memory controllers vary in their clock speed, architecture, number of pins, number of inputs/outputs (I/O), mapping of I/O to pins, package material, circuit board connection, and other design features. Some of these features, like operating temperature range, are key drivers of performance in particular applications. However, most custom features in commodity ICs do not confer differential capability or performance advantages on the final product: the pin count of a microcontroller does not determine the safety or performance of a vehicle. Design lock-in combined with this feature variability dramatically reduces the short-run substitutability of these devices; while MCUs exist in a commodity-style market, they are not interchangeable without significant redesign efforts. This phenomenon, designs based on specialized components that are not required in the application, is known as over-specification (Smith & Eggert, 2018). It means that while there are numerous semiconductor manufacturing firms, in practice there may be only a single supplier for a specified foundational component. 
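A toy matching exercise makes the substitutability problem concrete. In the sketch below, every part and parameter is invented, and matching is exact-match for simplicity; the point is that each unneeded constraint a design adds shrinks the pool of acceptable substitutes:

    CATALOG = [
        {"part": "MCU-A", "pins": 8,  "temp_max": 85,  "arch": "ARM"},
        {"part": "MCU-B", "pins": 8,  "temp_max": 125, "arch": "ARM"},
        {"part": "MCU-C", "pins": 12, "temp_max": 125, "arch": "ARM"},
        {"part": "MCU-D", "pins": 8,  "temp_max": 125, "arch": "RISC-V"},
    ]

    def substitutes(requirements: dict) -> list[str]:
        """Return catalog parts meeting every specified requirement."""
        return [p["part"] for p in CATALOG
                if all(p[k] == v for k, v in requirements.items())]

    # The application truly needs only an 8-pin package rated to 125 C...
    print(substitutes({"pins": 8, "temp_max": 125}))
    # ['MCU-B', 'MCU-D']

    # ...but also specifying the architecture, without need, halves the options.
    print(substitutes({"pins": 8, "temp_max": 125, "arch": "ARM"}))
    # ['MCU-B']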

These over-specification risks are exacerbated by a lack of value chain visibility. Firms possess little knowledge of their tier 2+ suppliers. The fractal symmetry of this knowledge gap means that even if an individual firm secures robust access to the components it directly uses, it may still be exposed to disruption through its suppliers. Value chains are only as strong as their weakest link. Physical characteristics of foundational devices also uncouple them from the leading edge. Many commodity ICs simply do not benefit from classical feature shrinkage; bulk MCUs or low-end PMICs don’t improve in performance with transistor density, as their outputs are essentially fixed. Analog devices suffer performance penalties at too small a feature scale, with physically larger transistors able to handle higher voltages and produce lower parasitic capacitance. Manufacturing commodity logic ICs in leading-edge logic fabs would be prohibitively expensive and actively detrimental to analog device performance. These factors (design over-specification, supply chain opacity, and insulation from leading-edge production) combine to functionally narrow the already thin supply of legacy chips. 

Industrial dynamics impede this supply from becoming more robust without policy intervention. Foundational chips, whether power devices or memory controllers, are low-margin commodity-style products sold in volume. The extreme capital intensity of the industry, combined with the low margins on these devices, makes supply expansion unattractive for producers, with short-term capital discipline a common argument against supply buildout (Connatser, 2024). The premium firms pay for performance drives significant investment in leading-edge design and production capacity as firms compete for that demand. The commodity environment of foundational devices, in contrast, is challenging to pencil out, as even trailing-edge fabs are highly capital-intensive (Reinhardt, 2022). Chinese production subsidies further impede the expansion of foundational fabs by narrowing already low margins. Semiconductor demand is historically cyclical, and producers don’t make investment decisions based on short-run demand signals. These factors make foundational device supply challenging to expand: firms manufacture commodity-style products in capital-intensive facilities, competing with subsidized producers, to meet widely varying demands. Finally, foundational chip supply resilience is a classic positive-externality good: no individual firm captures all, or even most, of the benefit of a more robust supply ecosystem. 

Plan of Action

To secure the supply of foundational chips, this memo recommends the development of an “Open Foundational” design standard and buyers’ group. One participant in that buyers’ group will be the U.S. federal government, which would establish a strategic microelectronics reserve to ensure access to critical chips. This reserve would initially be stocked through a multi-year advanced market commitment for Open Foundational devices. 

The foundational standard would be developed by a voluntary consortium of microelectronics users in critical sectors, inspired by the Open Compute Project. It would ideally contain firms from critical sectors such as enterprise computing, automotive manufacturing, and communications infrastructure. The group would initially convene to identify a set of foundational devices necessary to their sectors (e.g., system-architecture commodity ICs and power devices for computing) and to identify design features that don’t significantly impact performance and thus could be standardized. From these, a design standard could be developed. Firms are typically locked to existing devices for their current designs; one can’t place a 12-pin MCU into a board built for 8. Steering-committee firms will thus be asked to commit some fraction of future designs to use Open Foundational microelectronics, ideally on a ramping-up basis. The goal of the standard is not to mandate away valuable features: unique application needs should still be met by specialized devices, such as rad-hardened components in satellites. By adopting a standard platform of commodity chips in future designs, the buyers’ group would represent demand of sufficient scale to motivate investment, and supply would be more robust to disruptions once mature. 
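As a minimal sketch of what such a standard might pin down, consider a profile that fixes the interchange-relevant parameters of a commodity MCU while leaving application-specific ones free. The fields and values here are hypothetical, not a published specification:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class OpenFoundationalMCU:
        pins: int                      # standardized: fixed package/pinout
        io_count: int                  # standardized: common I/O mapping
        supply_voltage: float          # standardized: volts
        temp_range_c: tuple[int, int]  # variant-specific, e.g. automotive

    STANDARD_PROFILE = (8, 6, 3.3)  # the frozen interchange parameters

    def conforms(part: OpenFoundationalMCU) -> bool:
        """Any two conforming parts are drop-in substitutes for each other."""
        return (part.pins, part.io_count, part.supply_voltage) == STANDARD_PROFILE

    industrial = OpenFoundationalMCU(8, 6, 3.3, (-40, 85))
    automotive = OpenFoundationalMCU(8, 6, 3.3, (-40, 125))
    print(conforms(industrial), conforms(automotive))  # True True

The design choice is to standardize only the parameters that drive lock-in, so that conforming parts from different suppliers remain interchangeable without mandating away genuinely valuable variants. 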

Government should adopt the standard where feasible, to build greater resilience in critical systems if nothing else. This should be accompanied by a diplomatic effort for key democratic allies to partner in adopting these design practices in their defense applications. The foundational standard should seek geographic diversity in suppliers, as manufacturing concentrated anywhere represents a point of failure. The foundational standard also allows firms to de-risk their suppliers as well as themselves. They can stipulate in contracts that their tier-one suppliers need to adopt Foundational Standards in their designs, and OEMs who do so can market the associated resilience advantage. 

Having developed the open standard through the buyers’ group, Congress should authorize the Department of Commerce to purchase a strategic microelectronics reserve (SMR). Inspired by the strategic petroleum reserve, the microelectronics reserve is intended to provide the backstop foundational hardware for key government and societal operations during a crisis. The composition of the SMR will likely evolve as technologies and applications develop, but at launch the purchasing authority should commit to a long-term, high-volume purchase of Foundational Standard devices, a policy structure known as an advanced market commitment. 

Advanced market commitments are effective tools to develop supply when there is initial demand uncertainty, clear product specification, and a requirement for market demand to mature (Ransohoff, 2024). The foundational standard provides the product specification, and the advanced government commitment provides demand for a duration that should exceed both the product development and fab construction lifecycles, on the order of five years or more. This demand should be steady, with regular annual purchases at scale, ensuring producers see consistent demand through the ebbs and flows of a volatile industry. If these efforts are successful, the U.S. government will cultivate a more robust and resilient supply ecosystem, both for its own core services and for firms and citizens. The SMR could also serve as a backstop when supply fluctuations do occur, as the strategic petroleum reserve does.
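A back-of-the-envelope sizing shows the mechanics. Every figure below is a placeholder assumption for illustration; the memo proposes no specific prices or volumes:

    # Hypothetical advanced market commitment for Foundational Standard devices.
    unit_price = 0.50      # dollars per device (assumed)
    annual_volume = 200e6  # devices per year (assumed)
    years = 7              # chosen to exceed development + fab construction

    commitment = unit_price * annual_volume * years
    print(f"${unit_price * annual_volume / 1e6:.0f}M/yr for {years} years "
          f"= ${commitment / 1e9:.2f}B total")
    # $100M/yr for 7 years = $0.70B total

The structural point is the duration: the commitment must outlast the fab construction cycle so that producers can finance new capacity against it. 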

The goal of the SMR is not to fully substitute for existing stockpiling efforts, either by firms or by the government for defense applications. Through the expanded supply base for foundational chips enabled by the SMR, and through the increase in substitutability driven by the Foundational Standard, users can concentrate their stockpiling efforts on the chips which confer differentiated capabilities. As resources can be concentrated in more application-specific chips, stockpiling becomes more efficient, enabling more production for the same investment. In the long run, the SMR should likely diversify to include more advanced components such as high-capacity memory, and field-programmable processors. This would ensure government access to core computational capabilities in a disaster or conflict scenario. But as all systems are built on a foundation, the SMR should begin with Foundational Standard devices. 

There are potential risks to this approach. The most significant is that this model of foundational chips does not accurately reflect physical reality. Interfirm cooperation in setting and adhering to the standards is conditional on these devices not determining performance. If firms perceive foundational chips as providing a competitive advantage to their systems or products, they shall not crucify capability on a cross of standards. Alternatively, each sector may have a basket of foundational devices as we describe, but with little to no overlap sector-to-sector. In that case, the sectors representing the largest demand, such as enterprise computing, may be able to develop their own standards, but without resilience spillovers into other applications. These scenarios should be identifiable early in the standard-setting process, before significant physical investment is made. In such cases, the government should explore using fab lines in the national prototyping facility to flexibly manufacture a variety of foundational chips when needed, by developing adaptive production lines and processes. This functionally shifts the policy goal up the value chain, achieving resilience through flexible manufacture of devices rather than flexible end-use.

Value chains may be so opaque that the buyers’ group fails to identify a vulnerable chip. Potential mitigations include the Department of Commerce developing an office of supply mapping and applying a tax penalty to firms that fail to report component flows. Existing subsidized foundational chip supply from China may also make virtually any greenfield production uncompetitive; in that case, trade restrictions or a counter-subsidy may be required until the network effects of the Foundational Standard enable long-term viability. We do not want the Foundational Standard to lock in technological stagnation; in fact, the opposite. Accordingly, there should be periodic, iterative review of the devices within the standard and their features. The problems of legacy chips are distinct from those at the technical frontier.  

Foundational chips are necessary but not sufficient for modern electronic systems. It was not the System-on-a-Chip components costing hundreds of dollars that brought automotive production to a halt, but the sixteen-cent microcontroller. The technical advances fueled by leading-edge nodes are vital to our long-term competitiveness, but they too rely on legacy devices. We must in parallel fortify the foundation on which our security and dynamism rest.

Blank Checks for Black Boxes: Bring AI Governance to Competitive Grants

The misuse of AI in federally-funded projects can risk public safety and waste taxpayer dollars.

The Trump administration has a pivotal opportunity to spot wasteful spending, promote public trust in AI, and safeguard Americans from unchecked AI decisions. To tackle AI risks in grant spending, grant-making agencies should adopt trustworthy AI practices in their grant competitions and start enforcing them against reckless grantees.

Federal AI spending could soon skyrocket. One ambitious legislative plan from a Senate AI Working Group calls for doubling non-defense AI spending to $32 billion a year by 2026. That funding would grow AI investment across R&D, cybersecurity, testing infrastructure, and small business support. 

Yet as federal AI investment accelerates, safeguards against snake oil lag behind. Grants can be wasted on AI that doesn’t work. Grants can pay for untested AI with unknown risks. Grants can blur the lines of who is accountable for fixing AI’s mistakes. And grants offer little recourse to those affected by an AI system’s flawed decisions. Such failures risk exacerbating public distrust of AI, discouraging possible beneficial uses. 

Oversight for federal grant spending is lacking. 

Watchdogs, meanwhile, play a losing game, chasing errant programs one by one only after harm has been done. Luckily, momentum is building for reform. Policymakers recognize that investing in untrustworthy AI erodes public trust and stifles genuine innovation. Steps policymakers could take include setting clear AI quality standards, training grant judges, monitoring grantees’ AI usage, and evaluating outcomes to ensure projects achieve their potential. By establishing oversight practices, agencies can foster high-potential projects for economic competitiveness while protecting the public from harm. 

Challenge and Opportunity

Poor AI Oversight Jeopardizes Innovation and Civil Rights

The U.S. government advances public goals in areas like healthcare, research, and social programs by providing various types of federal assistance. This funding can go to state and local governments or directly to organizations, nonprofits, and individuals. When federal agencies award grants, they typically do so expecting less routine involvement than with other funding mechanisms, for example, cooperative agreements. Not all federal grants look the same—agencies administer mandatory grants, where the authorizing statute determines who receives funding, and competitive grants (or “discretionary grants”), where the agency selects award winners. In competitive grants, agencies have more flexibility to set program-specific conditions and award criteria, which opens opportunities for policymakers to direct dollars to innovative projects and mitigate emerging risks. 

These competitive grants fall short on AI oversight. Programmatic policy is set in cross-cutting laws, agency-wide policies, and grant-specific rules; a lack of AI oversight mars all three. To date, no government-wide AI regulation extends to AI grantmaking. Even when President Biden’s 2023 AI Executive Order directed agencies to implement responsible AI practices, the order’s implementing policies exempted grant spending (see footnote 25) entirely from the new safeguards. In this vacuum, the 26 grantmaking agencies are on their own to set agency-wide policies. Few have. Agencies can also set AI rules just for specific funding opportunities. They do not. In fact, in a review of a large set of agency discretionary grant programs, only a handful of funding notices announced a standard for AI quality in a proposed program. (See: One Bad NOFO?) The net result? A policy and implementation gap for the use of AI in grant-funded programs.

Funding mistakes damage agency credibility, stifle innovation, and undermine the support that financial assistance aims to provide to people and communities. Recent controversies highlight how today’s lax measures—particularly in setting clear rules for federal financial assistance, monitoring how funds are used, and responding to public feedback—have led to inefficient and rights-trampling results. In just the last few years, some of the problems we have seen include:

Any grant can attract controversy, and these grants are no exception. But the cases above spotlight transparency, monitoring, and participation deficits—the same kinds of AI oversight problems weakening trust in government that policymakers aim to fix in other contexts.

Smart spending depends on careful planning. Without it, programs may struggle to drive innovation or end up funding AI that infringes on people’s rights. OMB, agency Inspectors General, and grant managers will need guidance to evaluate what money is going toward AI and how to implement effective oversight. Government will face tradeoffs and challenges promoting AI innovation in federal grants, particularly due to:

1) The AI Screening Problem. When reviewing applications, agencies might fail to screen out candidates that exaggerate their AI capabilities—or fail to report bunk AI use altogether. Grantmaking requires calculated risks on ideas that might fail. But grant judges who are not experts in AI can make bad bets. Applicants will pitch AI solutions directly to these non-experts, and grant winners, regardless of their original proposal, will likely purchase and deploy AI, creating additional oversight challenges. 

2) The grant-procurement divide. When planning a grant, agencies might set overly burdensome restrictions that dissuade qualified applicants from applying or otherwise consume too much time, getting in the way of grant goals. Grants are meant to be hands-off; fostering breakthroughs while preventing negligence will be a challenging needle to thread. 

3) Limited agency capacity. Agencies may be unequipped to monitor grant recipients’ use of AI. After awarding funding, agencies can miss when vetted AI breaks down on launch. While agencies audit grantees, those audits typically focus on fraud and financial missteps. In some cases, agencies may not be measuring grantee performance well at all (slides 12-13). Yet regular monitoring, similar to the oversight used in procurement, will be necessary to catch emergent problems that affect AI outcomes. Enforcement, too, could be cause for concern; agencies claw back funds for procedural issues, but “almost never withhold federal funds when grantees are out of compliance with the substantive requirements of their grant statutes.” Even as the funding agency steps away, an inaccurate AI system can persist, embedding risks over a longer period of time.

Plan of Action

Recommendation 1. OMB and agencies should bake in pre-award scrutiny through uniform requirements and clearer guidelines

Recommendation 2. OMB and grant marketplaces should coordinate information sharing between agencies

To support review of AI-related grants, OMB and grantmaking agency staff should pool knowledge on AI’s tricky legal, policy, and technical matters. 

Recommendation 3. Agencies should embrace targeted hiring and talent exchanges for grant review boards

Agencies should have experts in a given AI topic judging grant competitions. Doing so requires overcoming talent-acquisition challenges. To that end:

Recommendation 4. Agencies should step up post-award monitoring and enforcement

You can’t improve what you don’t measure—especially when it comes to AI. Quantifying, documenting, and enforcing against careless AI uses will be a new task for many grantmaking agencies. Incident reporting will improve the chances that existing cross-cutting regulations, including civil rights laws, can rein in AI gone awry.

Recommendation 5. Agencies should encourage and fund efforts to investigate and measure AI harms 

Conclusion

Little limits how grant winners can spend federal dollars on AI. With the government poised to massively expand its spending on AI, that should change. 

The federal failure to oversee AI use in grants erodes public trust, civil rights, effective service delivery, and the promise of government-backed innovation. Congressional efforts to remedy these problems (starting probes, drafting letters) are important oversight measures, but they only come after the damage is done.

Both the Trump and Biden administrations have recognized that AI is exceptional and needs exceptional scrutiny. Many of the lessons learned from scrutinizing federal agency AI procurement apply to grant competitions. Today’s confluence of public will, interest, and urgency is a rare opportunity to widen the aperture of AI governance to include grantmaking.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What authorities allow agencies to run grant competitions?

Agencies’ enabling statutes are often the authority for grant competitions, with the statutory language leaving it to agencies to set further program-specific policies. Additionally, laws like the DATA Act and the Federal Grant and Cooperative Agreement Act offer definitions and guidance to agencies in the use of federal funds.

What kinds of steps do agencies take in pre-award planning?

Agencies already conduct a great deal of pre-award planning to align grantmaking with Executive Orders. For example, in one survey of grantmakers, a little over half of respondents updated their pre-award processes, such as applications and organization information, to comply with an Executive Order. Grantmakers aligning grant planning with the Trump administration’s future Executive Orders will likely follow similar steps.

Who receives federal grant funding for the development and use of AI?

A wide range of states, local governments, companies, and individuals receive grant competition funds. Spending records, available on USASpending.gov, give some insight into where grant funding goes, though these records, too, can be incomplete.
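For readers who want to explore these records programmatically, USASpending.gov also exposes a public API. The minimal sketch below queries it for grant awards that mention artificial intelligence; the endpoint path, filter fields, and award-type codes are assumptions based on the public API documentation and should be verified before use, and the keyword filter is only a crude proxy, since no structured reporting of grantee AI use currently exists.

```python
# Illustrative sketch: querying USASpending.gov for grant awards that mention
# artificial intelligence. Endpoint and payload shapes are assumptions based
# on the public API docs (https://api.usaspending.gov) and should be checked;
# the keyword filter is a crude proxy, since grantees do not report AI use.
import requests

URL = "https://api.usaspending.gov/api/v2/search/spending_by_award/"

payload = {
    "filters": {
        # Assumed codes 02-05 cover the main grant award types.
        "award_type_codes": ["02", "03", "04", "05"],
        "keywords": ["artificial intelligence"],
    },
    "fields": ["Award ID", "Recipient Name", "Award Amount", "Awarding Agency"],
    "limit": 25,
    "page": 1,
}

response = requests.post(URL, json=payload, timeout=30)
response.raise_for_status()
for award in response.json().get("results", []):
    print(award.get("Recipient Name"), "|", award.get("Award Amount"))
```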

Fighting Fakes and Liars’ Dividends: We Need To Build a National Digital Content Authentication Technologies Research Ecosystem

The U.S. faces mounting challenges posed by increasingly sophisticated synthetic content: digital media (images, audio, video, and text) produced or manipulated by generative artificial intelligence (AI). Already, there has been a proliferation in the abuse of generative AI technology to weaponize synthetic content for harmful purposes, such as financial fraud, political deepfakes, and the nonconsensual creation of intimate materials featuring adults or children. As people become less able to distinguish between what is real and what is fake, it has become easier than ever to be misled by synthetic content, whether by accident or with malicious intent. This makes advancing alternative countermeasures, such as technical solutions, more vital than ever before. To address the growing risks arising from synthetic content misuse, the National Institute of Standards and Technology (NIST) should take the following steps to create and cultivate a robust digital content authentication technologies research ecosystem: 1) establish dedicated university-led national research centers, 2) develop a national synthetic content database, and 3) run and coordinate prize competitions to strengthen technical countermeasures. In turn, these initiatives will require 4) dedicated and sustained Congressional funding. This will enable technical countermeasures to keep closer pace with the rapidly evolving synthetic content threat landscape, maintaining the U.S.’s role as a global leader in responsible, safe, and secure AI.

Challenge and Opportunity

While it is clear that generative AI offers tremendous benefits, such as for scientific research, healthcare, and economic innovation, the technology also poses an accelerating threat to U.S. national interests. Generative AI’s ability to produce highly realistic synthetic content has increasingly enabled its harmful abuse and undermined public trust in digital information. Threat actors have already begun to weaponize synthetic content across a widening scope of damaging activities to growing effect. Projected losses from AI-enabled fraud are anticipated to reach $40 billion by 2027, while experts estimate that millions of adults and children have already been targeted by AI-generated or AI-manipulated nonconsensual intimate media or child sexual abuse materials, a figure that is anticipated to grow rapidly. While the widely feared scenario of manipulative synthetic content compromising the integrity of the 2024 U.S. election did not ultimately materialize, malicious AI-generated content was nonetheless found to have shaped election discourse and bolstered damaging narratives. Equally concerning is the cumulative effect this increasingly widespread abuse is having on the broader erosion of public trust in the authenticity of all digital information. This degradation of trust has not only led to an alarming trend of authentic content being increasingly dismissed as ‘AI-generated’, but has also empowered those seeking to discredit the truth, a phenomenon known as the “liar’s dividend”.

From the amusing… to the not-so-benign.

A. In March 2023, a humorous synthetic image of Pope Francis wearing a Balenciaga coat, first posted on Reddit by creator Pablo Xavier, quickly went viral across social media.

B. In May 2023, this synthetic image was duplicitously published on X as an authentic photograph of an explosion near the Pentagon. Before being debunked by authorities, the image’s widespread circulation online caused significant confusion and even led to a temporary dip in the U.S. stock market.

Research has demonstrated that current generative AI technology can produce synthetic content sufficiently realistic that people are no longer able to reliably distinguish between AI-generated and authentic media. It is no longer feasible to continue, as we currently do, to rely predominantly on human perception to protect against the threat arising from increasingly widespread synthetic content misuse. This new reality only increases the urgency of deploying robust alternative countermeasures to protect the integrity of the information ecosystem. The suite of digital content authentication technologies (DCAT), that is, the techniques, tools, and methods that seek to make the legitimacy of digital media transparent to the observer, offers a promising avenue for addressing this challenge. These technologies encompass a range of solutions, from identification techniques such as machine detection and digital forensics to classification and labeling methods like watermarking or cryptographic signatures. DCAT also encompasses technical approaches that aim to record and preserve the origin of digital media, including content provenance, blockchain, and hashing.
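To make these categories concrete, here is a minimal sketch, using only Python’s standard library, of two of the simplest building blocks named above: hashing and keyed cryptographic signing. It is illustrative only; production provenance standards such as C2PA rely on full public-key infrastructure and signed metadata manifests rather than a shared secret.

```python
# Minimal sketch of two DCAT building blocks: content hashing (does this file
# match a known original?) and keyed signing (did a trusted party attest to
# it?). Purely illustrative; real provenance systems use public-key
# signatures, not a shared demo key like this.
import hashlib
import hmac

def content_fingerprint(media_bytes: bytes) -> str:
    """A stable fingerprint: any single-bit edit changes the digest."""
    return hashlib.sha256(media_bytes).hexdigest()

def sign_fingerprint(fingerprint: str, secret_key: bytes) -> str:
    """A keyed attestation over the fingerprint (HMAC stands in for a signature)."""
    return hmac.new(secret_key, fingerprint.encode(), hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, claimed_sig: str, secret_key: bytes) -> bool:
    """Recompute the attestation and compare in constant time."""
    expected = sign_fingerprint(content_fingerprint(media_bytes), secret_key)
    return hmac.compare_digest(expected, claimed_sig)

key = b"demo-key-not-for-production"
original = b"...raw image bytes..."
sig = sign_fingerprint(content_fingerprint(original), key)
print(verify(original, sig, key))         # True: untouched content verifies
print(verify(original + b"x", sig, key))  # False: any edit breaks verification
```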

Evolution of Synthetic Media

Screenshot from an AI-manipulated video of President Obama

Published in 2018, this now infamous PSA sought to illustrate the dangers of synthetic content. It shows an AI-manipulated video of President Obama, using narration from a comedy sketch by comedian Jordan Peele.

In 2020, a hobbyist creator employed an open-source generative AI model to ‘enhance’ the Hollywood CGI version of Princess Leia in the film Rogue One.

The hugely popular TikTok account @deeptomcruise posts parody videos featuring a Tom Cruise imitator face-swapped with the real Tom Cruise’s face, including this 2022 video, racking up millions of views.

The 2024 film Here relied extensively on generative AI technology to de-age and face-swap actors in real-time as they were being filmed.

Robust DCAT capabilities will be indispensable for defending against the harms posed by synthetic content misuse, as well as bolstering public trust in both information systems and AI development. These technical countermeasures will be critical for alleviating the growing burden on citizens, online platforms, and law enforcement to manually authenticate digital content. Moreover, DCAT will be vital for enforcing emerging legislation, including AI labeling requirements and prohibitions on illegal synthetic content. The importance of developing these capabilities is underscored by the ten bills (see Fig 1) currently under Congressional consideration that, if passed, would require the employment of DCAT-relevant tools, techniques, and methods.

Figure 1. Congressional bills which would require the use of DCAT tools, techniques, and methods.
Bill Name | Senate | House
AI Labelling Act | S.2691 | H.R.6466
Take It Down Act | S.4569 | H.R.8989
DEFIANCE Act | S.3696 | H.R.7569
Preventing Deepfakes of Intimate Images Act |  | H.R.3106
DEEPFAKES Accountability Act |  | H.R.5586
AI Transparency in Elections Act | S.3875 | H.R.8668
Securing Elections From AI Deception Act |  | H.R.8858
Protecting Consumers from Deceptive AI Act |  | H.R.7766
COPIED Act | S.4674 | 
NO FAKES Act | S.4875 | H.R.9551

However, significant challenges remain. DCAT capabilities need to be improved, with many currently possessing weaknesses or limitations such as brittleness or security gaps. Moreover, implementing these countermeasures must be carefully managed to avoid unintended consequences in the information ecosystem, like deploying confusing or ineffective labeling to denote the presence of real or fake digital media. As a result, substantial investment is needed in DCAT R&D to develop these technical countermeasures into an effective and reliable defense against synthetic content threats.
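The brittleness problem is easy to demonstrate. The sketch below embeds a toy least-significant-bit watermark, one of the simplest labeling schemes, and shows it being wiped out by mild quantization, a stand-in for the recompression images undergo when shared online; the pixel values and watermark bits are invented for illustration.

```python
# Minimal sketch of watermark brittleness: a least-significant-bit (LSB)
# watermark survives an exact copy but is destroyed by even mild quantization
# (a stand-in for the recompression images undergo when shared online).
def embed_lsb(pixels: list[int], bits: list[int]) -> list[int]:
    """Overwrite each pixel's lowest bit with one watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels: list[int]) -> list[int]:
    return [p & 1 for p in pixels]

def quantize(pixels: list[int], step: int = 4) -> list[int]:
    """Crude lossy 'recompression': snap each pixel to a coarser grid."""
    return [round(p / step) * step for p in pixels]

pixels = [52, 199, 87, 140, 33, 250, 18, 96]   # invented pixel values
mark = [1, 0, 1, 1, 0, 0, 1, 0]                # invented watermark bits

marked = embed_lsb(pixels, mark)
print(extract_lsb(marked) == mark)             # True: survives an exact copy
print(extract_lsb(quantize(marked)) == mark)   # False: wiped out by quantization
```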

The U.S. government has demonstrated its commitment to advancing DCAT to reduce synthetic content risks through recent executive actions and agency initiatives. The 2023 Executive Order on AI (EO 14110) mandated the development of content authentication and tracking tools. Charged by EO 14110 to address these challenges, NIST has taken several steps toward advancing DCAT capabilities. For example, NIST’s recently established AI Safety Institute (AISI) takes the lead in championing this work in partnership with NIST’s AI Innovation Lab (NAIIL). Key developments include: the dedication of one of the U.S. Artificial Intelligence Safety Institute Consortium’s (AISIC) working groups to identifying and advancing DCAT R&D; the publication of NIST AI 100-4, which “examines the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques” regarding current and prospective DCAT capabilities; and the $11 million dedicated to international research on addressing dangers arising from synthetic content, announced at the first convening of the International Network of AI Safety Institutes. Additionally, NIST’s Information Technology Laboratory (ITL) has launched the GenAI Challenge Program to evaluate and advance DCAT capabilities. Meanwhile, two pending bills in Congress, the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312) and the Future of Artificial Intelligence Innovation Act (S. 4178), include provisions for DCAT R&D.

Although these critical first steps have been taken, an ambitious and sustained federal effort is necessary to facilitate the advancement of technical countermeasures such as DCAT. This is necessary to more successfully combat the risks posed by synthetic content—both in the immediate and long-term future. To gain and maintain a competitive edge in the ongoing race between deception and detection, it is vital to establish a robust national research ecosystem that fosters agile, comprehensive, and sustained DCAT R&D.

Plan of Action

NIST should engage in three initiatives: 1) establishing dedicated university-based DCAT research centers, 2) curating and maintaining a shared national database of synthetic content for training and evaluation, and 3) running and overseeing regular federal prize competitions to drive innovation on critical DCAT challenges. These programs, which should be spearheaded by AISI and NAIIL, are critical for enabling the creation of a robust and resilient U.S. DCAT research ecosystem. In addition, the 118th Congress should 4) allocate dedicated funding to support these enterprises.

These recommendations are designed not only to accelerate DCAT capabilities in the immediate future, but also to build a strong foundation for long-term DCAT R&D efforts. As generative AI capabilities expand, authentication technologies must keep pace, meaning that developing and deploying effective technical countermeasures will require ongoing, iterative work. Success demands extensive collaboration across technology and research sectors to expand problem coverage, maximize resources, avoid duplication, and accelerate the development of effective solutions. This coordinated approach is essential given the diverse range of technologies and methodologies that must be considered when addressing synthetic content risks.

Recommendation 1. Establish DCAT Research Institutes

NIST should establish a network of dedicated university-based research centers to scale up and foster long-term, fundamental R&D on DCAT. While headquartered at leading universities, these centers would collaborate with academic, civil society, industry, and government partners, serving as nationwide focal points for DCAT research and bringing together a network of cross-sector expertise. Complementing NIST’s existing initiatives like the GenAI Challenge, the centers’ research priorities would be guided by AISI and NAIIL, with expert input from the AISIC, the International Network of AI Safety Institutes, and other key stakeholders.

A distributed research network offers several strategic advantages. It leverages elite expertise from industry and academia, and having permanent institutions dedicated to DCAT R&D enables the sustained, iterative development of authentication technologies to better keep pace with advancing generative AI capabilities. Meanwhile, central coordination by AISI and NAIIL would also ensure comprehensive coverage of research priorities while minimizing redundant efforts.  Such a structure provides the foundation for a robust, long-term research ecosystem essential for developing effective countermeasures against synthetic content threats.

There are multiple pathways via which dedicated DCAT research centers could be stood up.  One approach is direct NIST funding and oversight, following the model of Carnegie Mellon University’s AI Cooperative Research Center. Alternatively, centers could be established through the National AI Research Institutes Program, similar to the University of Maryland’s Institute for Trustworthy AI in Law & Society, leveraging NSF’s existing partnership with NIST.

The DCAT research agenda could be structured in two ways. Informed by NIST AI 100-4, a vertical approach would assign a specific technology to each center (e.g., digital watermarking, metadata recording, provenance data tracking, or synthetic content detection). Each center would focus on all aspects of its assigned technical capability, including: improving the robustness and security of existing countermeasures; developing new techniques to address current limitations; conducting real-world testing and evaluation, especially in cross-platform environments; and studying interactions with other technical safeguards and with non-technical countermeasures like regulations or educational initiatives. Conversely, a horizontal approach would divide research agendas across areas such as: advancement of established DCAT techniques, tools, and methods; innovation of novel techniques, tools, and methods; testing and evaluation of combined technical approaches in real-world settings; and examination of how multiple technical countermeasures interact with human factors, such as label perception, and with non-technical countermeasures. While either framework provides a strong foundation for advancing DCAT capabilities, given institutional expertise and practical considerations, a hybrid model combining both approaches is likely the most feasible option.

Recommendation 2. Build and Maintain a National Synthetic Content Database

NIST should also build and maintain a national synthetic content database to advance and accelerate DCAT R&D, similar to existing federal initiatives such as NIST’s National Software Reference Library and NSF’s AI Research Resource pilot. Current DCAT R&D is severely constrained by limited access to diverse, verified, and up-to-date training and testing data. Many researchers, especially in academia, where a significant portion of DCAT research takes place, lack the resources to build and maintain their own datasets. The result is less accurate and more narrowly applicable authentication tools that struggle to keep pace with rapidly advancing AI capabilities.

A centralized database of synthetic and authentic content would accelerate DCAT R&D in several critical ways. First, it would significantly reduce the burden on research teams of generating or collecting synthetic data for training and evaluation, opening the field to less well-resourced groups and allowing researchers to focus on other aspects of R&D. This includes providing much-needed resources for the NIST-facilitated university-based research centers and prize competitions proposed here. Moreover, a shared database could provide more comprehensive coverage of the increasingly varied synthetic content being created today, permitting the development of more effective and robust authentication capabilities. The database would also be useful for establishing standardized evaluation metrics for DCAT capabilities, one of NIST’s critical aims for addressing the risks posed by AI technology.

A national database would need to be comprehensive, encompassing samples of both early and state-of-the-art synthetic content. It should include controlled, laboratory-generated datasets alongside verified “in the wild” (real-world) synthetic content, covering both benign and potentially harmful examples. Equally critical to the database’s utility is its diversity: the synthetic content should span multiple individual and combined modalities (text, image, audio, video) and feature varied human populations as well as a variety of non-human subject matter. Maintaining the database’s relevance as generative AI capabilities evolve will also require routinely incorporating novel synthetic content that reflects ongoing improvements in generation quality.
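To illustrate what these comprehensiveness and diversity requirements might look like at the record level, the sketch below defines one hypothetical metadata schema for database entries; every field name is an illustrative assumption, not a NIST specification.

```python
# Hypothetical metadata schema for entries in a national synthetic content
# database, reflecting the coverage requirements described above. All field
# names are illustrative assumptions, not a NIST specification.
from dataclasses import dataclass, field
from enum import Enum

class Modality(Enum):
    TEXT = "text"
    IMAGE = "image"
    AUDIO = "audio"
    VIDEO = "video"

@dataclass
class ContentRecord:
    content_id: str
    sha256: str                        # fingerprint of the media file itself
    modalities: list[Modality]         # single or combined modalities
    is_synthetic: bool                 # paired authentic content matters too
    generator: str | None = None       # model name/version; None if authentic
    source: str = "lab"                # "lab" (controlled) or "wild" (verified in situ)
    potentially_harmful: bool = False  # benign vs. potentially harmful examples
    subjects: list[str] = field(default_factory=list)  # human populations, other subjects
    collected_year: int = 2025         # supports tracking generator improvements

record = ContentRecord(
    content_id="ex-0001",
    sha256="0" * 64,  # placeholder digest
    modalities=[Modality.IMAGE],
    is_synthetic=True,
    generator="open-source diffusion model v1",
    source="wild",
)
```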

Initially, the database could be built on NIST’s GenAI Challenge project work, which includes “evolving benchmark dataset creation”, but as it scales up, it should operate as a standalone program with dedicated resources. The database could be grown and maintained through dataset contributions from AISIC members, industry partners, and academic institutions that have either generated synthetic content datasets themselves or that, as generative AI technology providers, can create the large-scale and diverse datasets required. NIST would also direct targeted dataset acquisition to address specific gaps and evaluation needs.

Recommendation 3. Run Public Prize Competitions on DCAT Challenges

Third, NIST should set up and run a coordinated prize competition program, while also serving as the federal oversight lead for prize competitions run by other agencies. Building on existing models such as DARPA SemaFor’s AI FORCE and the FTC’s Voice Cloning Challenge, the competitions would address expert-identified priorities as informed by the AISIC, the International Network of AI Safety Institutes, and the proposed national DCAT research centers. Competitions are a proven approach to spurring innovation on complex technical challenges, enabling the rapid identification of solutions through diverse engagement. Monetary prizes are especially effective at ensuring engagement: the 2019 Kaggle Deepfake Detection competition, which had a prize of $1 million, fielded twice as many participants as the 2024 competition, which offered no cash prize.
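To ground what such a competition actually scores, the sketch below implements binary log loss, the ranking metric used in the Kaggle Deepfake Detection Challenge; the example entries are invented. The point is that the metric rewards calibrated probabilities and severely punishes confident mistakes, which is what a detection competition should incentivize.

```python
# Minimal sketch of a detection-competition scoring harness. Binary log loss
# (used by the Kaggle Deepfake Detection Challenge) rewards well-calibrated
# probabilities, not just correct hard labels.
import math

def log_loss(y_true: list[int], y_prob: list[float], eps: float = 1e-15) -> float:
    """Mean negative log-likelihood; lower is better."""
    total = 0.0
    for label, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(label * math.log(p) + (1 - label) * math.log(1.0 - p))
    return total / len(y_true)

# Invented example: 1 = synthetic, 0 = authentic.
labels = [1, 0, 1, 1, 0]
calibrated_entry = [0.95, 0.10, 0.90, 0.85, 0.05]
overconfident_entry = [1.00, 0.00, 1.00, 0.00, 0.00]  # one confident miss is ruinous

print(f"calibrated entry:    {log_loss(labels, calibrated_entry):.3f}")
print(f"overconfident entry: {log_loss(labels, overconfident_entry):.3f}")
```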

By providing structured challenges and meaningful incentives, public competitions can accelerate the development of critical DCAT capabilities while building a more robust and diverse research community. Such competitions encourage novel technical approaches and rapid testing of new methods, facilitate the inclusion of new or non-traditional participants, and foster collaborations. The more rapid-cycle and narrowly scoped competitions would also complement the longer-term and broader research conducted by the national DCAT research centers. Centralized federal oversight would also prevent the implementation gaps that have occurred in past approved federal prize competitions. For instance, the 2020 National Defense Authorization Act (NDAA) authorized a $5 million machine detection/deepfakes prize competition (Sec. 5724), and the 2024 NDAA authorized a “Generative AI Detection and Watermark Competition” (Sec. 1543). However, neither prize competition has been carried out, and the Watermark Competition has now been delayed to 2025. Centralized oversight would also ensure that prize competitions are run consistently to address specific technical challenges raised by expert stakeholders, encouraging more rapid development of relevant technical countermeasures.

Examples of possible prize competitions include: machine detection and digital forensic methods for detecting partially or fully AI-generated content across single or multimodal media; assessments of the robustness, interoperability, and security of watermarking and other labeling methods across modalities; and tests of innovations in tamper-evident or tamper-proof content provenance tools and other data origin techniques. Regular assessment and refinement of competition categories will ensure continued relevance as synthetic content capabilities evolve.

Recommendation 4. Congressional Funding of DCAT Research and Activities

Finally, the 118th Congress should allocate funding for these three NIST initiatives in order to more effectively establish the foundations of a strong DCAT national research infrastructure. Despite widespread acknowledgement of the vital role of technical countermeasures in addressing synthetic content risks, the DCAT research field remains severely underfunded. Although recent initiatives, such as the $11 million allocated to the International Network of AI Safety Institutes, are a welcome step in the right direction, substantially more investment is needed. Thus far, the overall financing of DCAT R&D has been only a drop in the bucket when compared to the many billions of dollars being dedicated by industry alone to improve generative AI technology.

This stark disparity between investment in generative AI versus DCAT capabilities presents an immediate opportunity for Congressional action. To address the widening capability gap, and to support pending legislation which will be reliant on technical countermeasures such as DCAT, the 118th Congress should establish multi-year appropriations with matching fund requirements. This will encourage private sector investment and permit flexible funding mechanisms to address emerging challenges. This funding should be accompanied by regular reporting requirements to track progress and impact.

One specific action Congress could take to jumpstart DCAT R&D investment would be to reauthorize and appropriate the budget earmarked for the unexecuted machine detection competition it approved in 2020. Despite the 2020 NDAA authorizing $5 million for it, no Senate Appropriations Committee-Defense (SAC-D) funding was allocated, and the competition never took place. Another action would be for Congress to explicitly allocate prize money for the watermarking competition authorized by the 2024 NDAA, which currently has no monetary prize attached, to encourage higher levels of participation when the competition takes place this year.

Conclusion

The risks posed by synthetic content present an undeniable danger to U.S. national interests and security. Advancing DCAT capabilities is vital for protecting U.S. citizens against both the direct and more diffuse harms resulting from the proliferating misuse of synthetic content. A robust national DCAT research ecosystem is required to accomplish this. Critically, this is not a challenge that can be addressed through one-time solutions or limited investment—it will require continuous work and dedicated resources to ensure technical countermeasures keep pace alongside increasingly sophisticated synthetic content threats. By implementing these recommendations with sustained federal support and investment, the U.S. will be able to more successfully address current and anticipated synthetic content risks, further reinforcing its role as a global leader in responsible AI use.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Supporting Federal Decision Making through Participatory Technology Assessment

The incoming administration needs a robust, adaptable and scalable participatory assessment capacity to address complex issues at the intersections of science, technology, and society. As such, the next administration should establish a special unit within the Science and Technology Policy Institute (STPI)—an existing federally funded research and development center (FFRDC)—to provide evidence-based, just-in-time, and fit-for-purpose capacity for Participatory Technology Assessment (pTA) to the White House Office of Science and Technology Policy and across executive branch agencies.

Robust participatory and multi-stakeholder engagement supports responsible decision making where neither science nor existing policy provide clear guidance. pTA is an established and evidence-based process to assess public values, manage sociotechnical uncertainties, integrate living and lived knowledge, and bridge democratic gaps on contested and complex science and society issues. By tapping into broader community expertise and experiences, pTA identifies plausible alternatives and solutions that may be overlooked by experts and advocates.

pTA provides critical and informed public input that is currently missing in technocratic policy- and decision-making processes. Policies and decisions will have greater legitimacy, transparency, and accountability as a result of enhanced use of pTA. When systematically integrated into research and development (R&D) processes, pTA can be used for anticipatory governance—that is, assessing socio-technical futures, engaging communities, stakeholders and publics, and  directing decisions, policies, and investments toward desirable outcomes.

A pTA unit within STPI will help build and maintain a shared repository of knowledge and experience of the state of the art and innovative applications across government, and provide pTA as a design, development, implementation, integration and training service for the executive branch regarding emerging scientific and technological issues and questions. By integrating public and expert value assessments, the next administration can ensure that federal science and technology decisions provide the greatest benefit to society.

Challenge and Opportunity

Science and technology (S&T) policy problems always involve issues of public values—such as concerns for safety, prosperity, and justice—alongside issues of fact. However, few systematic and institutional processes meaningfully integrate values from informed public engagement alongside expert consultation. Existing public-engagement mechanisms such as public comment periods, opinion surveys, and town halls have devolved into little more than “checkbox” exercises. In recent years, the transition to online commenting, intended to improve access and participation, has amplified the negatives. It has “also inadvertently opened the floodgates to mass comment campaigns, misattributed comments, and computer-generated comments, potentially making it harder for agencies to extract the information needed to inform decision making and undermining the legitimacy of the rulemaking process. Many researchers have found that a large percentage of the comments received in mass comment responses are not highly substantive, but rather contain general statements of support or opposition. Commenters are an entirely self selected group, and there is no reason to believe that they are in any way representative of the larger public. … Relatedly, the group of commenters may represent a relatively privileged group, with less advantaged members of the public less likely to engage in this form of political participation.”

Moreover, existing engagement mechanisms tend to be dominated by a small number of experts and organized interest groups: people and institutions who generally have established pathways to influence policy anyway. 

Existing engagement mechanisms leave out the voices of people who may lack the time, awareness, and/or resources to respond to the Federal Register, such as the roofer, the hair stylist, or the bus driver. This means that important public values—widely held ideas about the rights and benefits that ought to guide policy making in a democratic system—go overlooked. For S&T policy, a failure to assess and integrate public values may result in R&D and complementary investments that produce market successes with limited public value, such as treatments for cancer that most patients cannot afford, or in public failures when there is no immediately available technical or market response, as in the early stages of a global pandemic. Failure to integrate public values may also mean that little to no attention gets paid to key areas of societal need, such as developing low-cost tools and approaches for mitigating lead and other contaminants in water supplies, or to designing effective policy responses, such as behavioral and logistical actions to contain viral infections and deliver vaccinations to resistant populations.

In its 2023 Letter to the President, the President’s Council of Advisors on Science and Technology (PCAST) observed that, “As a nation, we must strive to develop public policies that are informed by scientific understandings and community values. Achieving this goal will require both access to accurate and trusted scientific information and the ability to create dialogue and participatory engagement with the American people.” The PCAST letter recommends issuing “a clarion call to Federal agencies to make science and technology communication and public engagement a core component of their mission and strategy.” It also recommended the establishment of “a new office to support Federal agencies in their continuing efforts to develop and build participatory public engagement and effective science and technology communications.”

Institutionalizing pTA within the Federal Government would provide federal agencies access to the tools and resources they need to apply pTA to existing and emerging complex S&T challenges, enabling experts, publics, and decision makers to tackle pressing issues together. pTA can be applied toward resolving long-standing issues, as well as to anticipate and address questions around emerging or novel S&T issues.

pTA for Long-Standing S&T Issues

Storage and siting of disposal sites for nuclear waste is an example of the type of ongoing, intractable problem for which pTA is ideally suited. Billions of dollars have been invested to develop a government-managed site for storing nuclear waste in the United States, yet essentially no progress has been made. Entangled political and environmental concerns, such as the risks of leaving nuclear waste in a potentially unsafe state for the long term, have stalled progress. There is also genuine uncertainty and expert disagreement surrounding the safety and efficacy of various storage alternatives. Our nation’s inability to address the issue of nuclear waste has long impacted the development of new and alternative nuclear power plants and thus has contributed to slowing the adoption of nuclear energy.

There are rarely unencumbered or obvious optimal solutions to long-standing S&T issues like nuclear-waste disposal. But a nuanced and informed dialogue among a diverse public, experts, and decision makers—precisely the type of dialogue enabled through pTA—can help break chronic stalemates and address misaligned or nonexistent incentives. By bringing people together to discuss options and to learn about the benefits and risks of different possible solutions, pTA enables stakeholders to better understand each other’s perspectives. Deliberative engagements like pTA often generate empathy, encouraging participants to collaborate and develop recommendations based on shared exploration of values. pTA is designed to facilitate timely, adequate, and pragmatic choices in the context of uncertainty, conflicting goals, and various real-world constraints. This builds transparency and trust across diverse stakeholders while helping move past gridlock.

pTA for Emerging and Novel Issues

pTA is also useful for anticipating controversies and governing emerging S&T challenges, such as the ethical dimensions of gene editing, artificial intelligence, or nuclear energy adoption. pTA helps grow institutional knowledge and expertise about complex topics, as well as about public attitudes and concerns salient to those topics, at scale. For example, challenges associated with COVID-19 vaccines presented several opportunities to deploy pTA. Public trust in the government’s pandemic response was uneven at best. Many Americans reported specific concerns about receiving a COVID-19 vaccine. Public opinion polls delivered mixed messages regarding willingness to receive a COVID-19 vaccine, but polls can overlook other historically significant concerns and socio-political developments in rapidly changing environments. Demands for expediency in vaccine development complicated the situation when normal safeguards and oversights were relaxed. Apparent pressure to deliver a vaccine as soon as possible raised public concern that vaccine safety was not being adequately vetted. Logistical and ethical questions about vaccine rollout also abounded: who should get vaccinated first, at what cost, and alongside what other public health measures? The nation needed a portfolio of differentiated and locally robust strategies for vaccine deployment. pTA could have helped officials anticipate equity challenges and trust deficits related to vaccine use and inform messaging and means of delivery, supporting effective and socially robust rollout strategies for different communities across the country.

pTA is an Established Practice

pTA has a history of use in the European Union and more recently in the United States. Inspired partly by the former U.S. Office of Technology Assessment (OTA), many European nations and the European Parliament operate their own technology assessment (TA) agencies. European TA took a distinctive turn from the OTA in further democratizing science and technology decision-making by developing and implementing a variety of effective and economical practices involving citizen participation (or pTA). Recent European Parliamentary Technology Assessment reports have taken on issues of assistive technologies, future of work, future of mobility, and climate-change innovation.

In the United States, a group of researchers, educators, and policy practitioners established the Expert and Citizen Assessment of Science and Technology (ECAST) network in 2010 to develop a distinctive 21st-century model of TA. Over the course of a decade, ECAST developed an innovative and reflexive participatory technology assessment (pTA) method to support democratic decision-making in different technical, social, and political contexts. After a demonstration project providing citizen input to the United Nations Convention on Biological Diversity in collaboration with the Danish Board of Technology, ECAST worked with the National Aeronautics and Space Administration (NASA) on the agency’s Asteroid Initiative. NASA-sponsored pTA activities about asteroid missions revealed important concerns about mitigating asteroid impacts alongside decision support for specific NASA missions. Public audiences prioritized a U.S. role in planetary defense from asteroid impacts. These results were communicated to NASA administrators and informed the development of NASA’s Planetary Defense Coordination Office, demonstrating how pTA can identify novel public concerns to inform decision making.

This NASA pTA paved the way for pTA projects with the Department of Energy on nuclear-waste disposal and with the National Oceanic and Atmospheric Administration on community resilience. ECAST’s portfolio also includes projects on climate intervention research, the future of automated vehicles, gene editing, clean energy demonstration projects, and interim storage of spent nuclear fuel. These and other pTA projects have been supported by more than six million dollars of public and philanthropic funding over the past ten years. Strong funding support in recent years highlights a growing demand for public engagement in science and technology decision-making.

However, the current scale of investment in pTA projects is vastly outstripped by the number of agencies and policy decisions that stand to benefit from pTA, with demand for applications across use cases ranging from public education and policy decisions to public value mapping and process and institutional innovation. ECAST’s capacity to partner with federal agencies is constrained by existing administrative rules and procedures on the federal side and by resource and capacity limitations on the network side. Any external entity like ECAST will encounter difficulties in building institutional memory and in developing cooperative-agreement mechanisms across agencies with different missions as well as within agencies with different divisions. Integrating public engagement as a standard component of decision making will require aligning the interests of sponsoring agencies, publics, and pTA practitioners within the context of broad and shifting political environments. An FFRDC office dedicated to pTA would provide the embedded infrastructure, staffing, and processes necessary to achieve these challenging tasks. A dedicated home for pTA within the executive branch would also enable systematic research, evaluation, and training related to pTA methods and practices, as well as better integration of pTA tools into decision making involving public education, research, innovation, and policy actions.

Plan of Action

The next administration should support and conduct pTA across the Federal Government by expanding the scope of the Science and Technology Policy Institute (STPI) to include a special unit with a separate operating budget dedicated specifically to pTA. STPI is an existing federally funded research and development center (FFRDC) that already conducts research on emerging technological challenges for the Federal Government. STPI is strategically associated with the White House Office of Science and Technology Policy (OSTP). Integrating pTA across federal agencies aligns with STPI’s mission to provide technical and analytical support to agency sponsors on the assessment of critical and emerging technologies.

A dedicated pTA unit within STPI would (1) provide expertise and resources to conduct pTA for federal agencies and (2) document and archive broader public expertise captured through pTA. Much publicly valuable knowledge generated from one area of S&T is applicable to and usable in other areas. As part of an FFRDC associated with the executive branch, STPI’s pTA unit could collaborate with universities to help disseminate best practices across all executive agencies.

We envision that STPI’s pTA unit would conduct activities related to the general theory and practice of pTA as well as partner with other federal agencies to integrate pTA into projects large and small. Small-scale projects, such as a series of public focus groups, expert consultations, or general topic research could be conducted directly by the pTA unit’s staff. Larger projects, such as a series of in-person or online deliberative engagements, workshops, and subsequent analysis and evaluation, would require additional funding and support from the requesting agencies. The STPI pTA unit could also establish longer-term partnerships with universities and science centers (as in the ECAST network), thereby enabling the federal government to leverage and learn from pTA exercises sponsored by non-federal entities.

The new STPI pTA unit would be funded in part through projects requested by other federal agencies. An agency would fund the pTA unit to design, plan, conduct, assess, and analyze a pTA effort on a project relevant to the agency. This model would enable the unit to distribute costs across the executive branch and would ensure that the unit has access to subject-matter experts (i.e., agency staff) needed to conduct an informed pTA effort. Housing the unit within STPI would contribute to OSTP’s larger portfolio of science and technology policy analysis, open innovation and citizen science, and a robust civic infrastructure.

Cost and Capacities

Adding a pTA unit to STPI would increase federal capacity to conduct pTA, utilizing existing pathways and budget lines to support additional staff and infrastructure for pTA capabilities. Establishing a semi-independent office for pTA within STPI would make it possible for the executive branch to share support staff and other costs. We anticipate that $3.5–5 million per year would be needed to support the core team of researchers, practitioners, leadership, small-scale projects, and operations within STPI for the pTA unit. This funding would require congressional approval.

The STPI pTA unit and its staff would be dedicated to housing and maintaining a critical infrastructure for pTA projects, including practical know-how, robust relationships with partner organizations (e.g., science centers, museums, or other public venues for hosting deliberative pTA forums), and analytic capabilities. This unit would not wholly be responsible for any given pTA effort. Rather, sponsoring agencies should provide resources and direction to support individual pTA projects.

We expect that the STPI pTA unit would initially be able to conduct two or three pTA projects per year, with the unit’s capacity and agility expanding over time to meet growing demand from federal agencies. In the fifth year of the unit (the typical length of an FFRDC contract), the presidential administration should consider whether there is sufficient agency demand for pTA—and whether the STPI pTA unit has sufficiently demonstrated proof-of-concept—to merit establishment of a new and independent FFRDC or other government entity fully dedicated to pTA.

Operations

The process for initiating, implementing and finalizing a pTA project would resemble the following:

Pre:

During:

Post:

Conclusion

Participatory Technology Assessment (pTA) is an established suite of tools and processes for eliciting and documenting informed public values and opinions to contribute to decision making around complex issues at the intersections of science, technology, and society.

However, its creative adaptation and innovative use by federal agencies in recent years demonstrate its utility beyond providing decision support: from increasing scientific literacy and social acceptability to defusing tensions and improving mutual trust. By creating capacity for pTA within STPI, the incoming administration will bolster its ability to address longstanding and emerging issues that lie at the intersection of scientific progress and societal well-being, where progress depends on aligning scientific, market, and public values. Such capacity and capabilities will be crucial to improving the legitimacy, transparency, and accountability of decisions regarding how we navigate and tackle the most intractable problems facing our society, now and for years to come.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
Expert panels are the best way to address complex S&T issues. Why should S&T assessments focus on involving the public and public values?

Experts can help map potential policy and R&D options and their implications. However, there will always be an element of judgment when it comes to deciding among options. This stage is often more driven by ethical and social concerns than by technical assessments. For instance, leaders may need to figure out a fair and just process to govern hazardous-waste disposal, or weigh the implications of using genetically modified organisms to control diseases, or siting clean energy research and demonstration projects in resistant or disadvantaged communities. Involving the public in decision-making can help counter challenges associated with expert judgment (for example, “groupthink”) while bringing in perspectives, values, and considerations that experts may overlook or discount.

How do we know that members of the public are sufficiently informed to be able to contribute to a decision?

pTA incorporates a variety of measures to inform discussion, such as background materials distributed to participants and multimedia tools to provide relevant information about the issue. The content of background materials is developed by experts and stakeholders prior to a pTA event to give the public the information they need to thoughtfully engage with the topic at hand. Evaluation tools, such as those from the informal science-education community, can be used to assess how effective background materials are at preparing the public for an informed discussion, and to identify ineffective materials that may need revision or supplementation. Evaluations of several past pTA efforts have 1) shown consistent learning among public participants and 2) documented robust processes for the creation, testing, and refinement of pTA activities that foster informed discussions among pTA participants.

Will doing pTA enhance the communications missions of federal agencies?

pTA can result in products and information, such as reports and data on public values, that are relevant and useful for the communication missions of agencies. However, pTA should avoid becoming a tool for strategic communications or a procedural “checkbox” activity for public engagement. Locating the Federal Government’s dedicated pTA unit within an FFRDC will ensure that pTA is informed by and accountable to a broader community of pTA experts and stakeholders who are independent of any mission agency.

Why does the Federal Government need in-house capacity to conduct pTA?

The work of universities, science centers, and nonpartisan think tanks has greatly expanded the tools and approaches available for using pTA to inform decision-making. Many past and current pTA efforts have been driven by such nongovernmental institutions, and have proven agile, collaborative, and low-cost. These efforts, while successful, have limited or diffuse ties to federal decision making.


Embedding pTA within the federal government would help agencies overcome the opportunity and time cost of integrating public input into tight decision-making timelines. ECAST’s work with federal agencies has shown the need for a stable bureaucratic infrastructure surrounding pTA at the federal level to build organizational memory, create a federal community of practice, and productively institutionalize pTA into federal decision-making.


Importantly, pTA is a nonpartisan method that can help reduce tensions and find shared values. Involving a diversity of perspectives through pTA engagements can help stakeholders move beyond impasse and conflict. pTA engagements emphasize recruiting and involving Americans from all walks of life, including those historically excluded from policymaking.

How would a pTA unit within STPI complement existing technology assessment capacity? How would it differ from that existing capacity?

Currently, the Government Accountability Office’s Science, Technology Assessment, and Analytics team (STAA) conducts technology assessments for Congress. Technology Assessment (TA) is designed to enhance understanding of the implications of new technologies or existing S&T issues. The STAA certainly has the capacity to undertake pTA studies on key S&T issues if and when requested by Congress. However, the distinctive form of pTA developed by ECAST and exemplified in ECAST’s work with NASA, NOAA, and DOE follows a knowledge co-production model in which agency program managers work with pTA practitioners to co-design, co-develop, and integrate pTA into their decision-making processes. STAA, as a component of the legislative branch, is not well positioned to work alongside executive agencies in this way. The proposed pTA unit within STPI would make the proven ECAST model available to all executive agencies, nicely complementing the analytical TA capacity that STAA offers the federal legislature.

Why should the government establish a pTA unit within an FFRDC instead of using executive orders to conduct pTA or requiring agencies to undertake pTA?

Executive orders could support one-off pTA projects and require agencies to conduct pTA. However, establishing a pTA unit within an FFRDC like STPI would provide additional benefits that would lead to a more robust pTA capacity. 


FFRDCs are a special class of research institutions owned by the federal government but operated by contractors, including universities, nonprofits, and industrial firms. The primary purpose of FFRDCs is to pursue research and development that cannot be effectively provided by the government or other sectors operating on their own. FFRDCs also enable the government to recruit and retain diverse experts without government hiring and pay constraints, providing the government with a specialized, agile workforce to respond to agency needs and societal challenges.
Creating a pTA unit in an FFRDC would provide an institutional home for general pTA know-how and capacity: a resource that all agencies could tap into. The pTA unit would be staffed by a small but highly trained team well-versed in the knowledge and practice of pTA. The unit would not preclude individual agencies from undertaking pTA on their own, but would provide a “help center” to help agencies figure out where to start and how to overcome roadblocks. pTA unit staff could also offer workshops and other opportunities to train personnel in other agencies on ways to incorporate the public perspective into their activities.


Other potential homes for a dedicated federal pTA unit include the Government Accountability Office (GAO) or the National Academies of Sciences, Engineering, and Medicine. However, GAO’s association with Congress would weaken the unit’s connections to agencies. The National Academies historically conduct assessments driven purely by expert consensus, which may compromise the ability of National Academies-hosted pTA to include and/or emphasize broader public values.

How will the government evaluate the performance and outcomes of pTA efforts?

Evaluating a pTA effort means answering four questions:


First, did the pTA effort engage a diverse public not otherwise engaged in S&T policy formulation? pTA practitioners generally do not seek statistically representative samples of participants (unlike, for instance, practitioners of mass opinion polling). Instead, pTA practitioners focus on including a diverse group of participants, with particular attention paid to groups who are generally not engaged in S&T policy formulation.


Second, was the pTA process informed and deliberative? This question is generally answered through strategies borrowed from the informal science-learning community, such as “pre- and post-“ surveys of self-reported learning. Qualitative analysis of the participant responses and discussions can evaluate if and how background information was used in pTA exercises. Involving decision makers and stakeholders in the evaluation process—for example, through sharing initial evaluation results—helps build the credibility of participant responses, particularly when decision makers or agencies are skeptical of the ability of lay citizens to provide informed opinions.
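As one concrete possibility, pre/post survey scores are often summarized with a normalized gain statistic: the fraction of the available improvement a participant actually achieved. The sketch below is a minimal illustration with invented scores, not a prescribed pTA evaluation method.

```python
# Minimal sketch: summarizing pre/post survey results with normalized gain,
# (post - pre) / (max - pre). Scores are invented for illustration; this is
# one common summary statistic, not a prescribed pTA evaluation method.
def normalized_gain(pre: float, post: float, max_score: float) -> float:
    """Fraction of the available improvement a participant actually achieved."""
    if max_score == pre:
        return 0.0  # no room to improve; define gain as zero
    return (post - pre) / (max_score - pre)

# Invented paired scores on a 10-point knowledge survey.
pre_scores = [4, 6, 3, 7, 5]
post_scores = [7, 8, 6, 9, 8]

gains = [normalized_gain(pre, post, 10) for pre, post in zip(pre_scores, post_scores)]
print(f"mean normalized gain: {sum(gains) / len(gains):.2f}")
```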


Third, did pTA generate useful and actionable outputs for the agency and, if applicable, stakeholders? pTA practitioners use qualitative tools for assessing public opinions and values alongside quantitative tools, such as surveys. A combination of qualitative and quantitative analysis helps to evaluate not just what public participants prefer regarding a given issue but why they hold that preference and how they justify those preferences. To ensure such information is useful to agencies and decision makers, pTA practitioners involve decision makers at various points in the analysis process (for example, to probe participant responses regarding a particular concern). Interviews with decision makers and other stakeholders can also assess the utility of pTA results.


Fourth, what impact did pTA have on participants, decisions and decision-making processes, decision makers, and organizational culture? This question can be answered through interviews with decision makers and stakeholders, surveys of pTA participants, and impact assessments.
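
As a simple illustration of the survey component of such an evaluation, the sketch below computes self-reported learning gains from hypothetical “pre- and post-” responses of the kind described under the second question. The participant IDs, scale, and scores are invented for illustration; real pTA evaluations pair this kind of quantitative summary with qualitative analysis.

```python
# A minimal sketch, assuming hypothetical survey data: each entry records a
# participant's self-reported knowledge of the topic (1-5 scale) before and
# after a pTA deliberation. IDs, scale, and scores are illustrative only.
from statistics import mean

responses = [
    {"participant": "P01", "pre": 2, "post": 4},
    {"participant": "P02", "pre": 3, "post": 4},
    {"participant": "P03", "pre": 1, "post": 3},
    {"participant": "P04", "pre": 4, "post": 4},
]

# Paired difference per participant, then simple summary statistics.
gains = [r["post"] - r["pre"] for r in responses]
print(f"Mean self-reported learning gain: {mean(gains):+.2f} points")
print(f"Share reporting any gain: {sum(g > 0 for g in gains) / len(gains):.0%}")
```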

How will the government evaluate the performance and outcomes of a dedicated pTA unit? How has pTA been evaluated previously?

Evaluation of a pTA unit within an existing FFRDC would likely involve similar questions as above: questions focused on the impact of the unit on decisions, decision-making processes, and the culture and attitudes of agency staff who worked with the pTA unit. An external evaluator, such as the Government Accountability Office or the National Academies, could be tasked with carrying out such an evaluation.

How publicly accessible should the work of a pTA unit be? Should pTA results and processes be made public?

pTA results and processes should generally be made public, provided doing so poses minimal risk to pTA participants (in line with federal regulations protecting research participants). Publishing results and processes ensures that stakeholders, other members of government (e.g., Congress), and broader audiences can view and interpret the public values explored during a pTA effort. Further, making results and processes publicly available serves as a form of accountability, ensuring that pTA efforts are high quality.

Unpacking Hiring: Toward a Regional Federal Talent Strategy

Government, like all institutions, runs on people. We need more people with the right skills and expertise for the many critical roles that public agencies are hiring for today. Yet hiring talent into the federal government is a longstanding challenge. The next Administration should unpack hiring strategy from headquarters and launch a series of large-scale, cross-agency recruitment and hiring surges throughout the country, reflecting the reality that 85% of federal employees work outside the Beltway. With a collaborative, cross-agency lens and a commitment to engaging jobseekers where they live, the government can enhance its ability to attract talent while underscoring to Americans that the federal government is not a distant authority but a stakeholder in their communities that offers credible opportunities to serve.

Challenge and Opportunity

The federal government’s hiring needs—already severe across many mission-critical occupations—are likely to remain acute as federal retirements continue, the labor market remains tight, and mission needs continue to grow. Unfortunately, federal hiring is misaligned with how most people approach job seeking. Most Americans search for employment in a geographically bounded way, a trend that has accelerated since the labor market disruptions of the COVID-19 pandemic. In contrast, federal agencies tend to engage jobseekers one agency at a time while recruiting across a wide variety of professions.

The result is that the federal government hires agency by agency while casting a wide geographic net, which limits its ability to build deep, direct relationships with talent providers and duplicates searches for similar roles across agencies. Instead, the next Administration should align with jobseekers’ expectations by recruiting across agencies within each geography.

By embracing a new approach, the government can begin to develop a more coordinated cross-agency employer profile within regions with significant federal presence, while still leveraging its scale by aggregating hiring needs across agencies. This approach would build upon the important hiring reforms advanced under the Biden-Harris Administration, including cross-agency pooled hiring, renewed attention to the jobseeker hiring experience, and new investments to unlock the federal government’s regional presence through elevation of the Federal Executive Board (FEB) program. FEBs are cross-agency councils of senior appointees and civil servants in regions of significant federal presence across the country. They are empowered to identify areas for cross-agency cooperation and are uniquely positioned to pool talent needs and represent the federal government in communities across the country.

Plan of Action

The next Administration should embrace a cross-agency, regionally focused recruitment strategy and bring federal career opportunities closer to Americans through a series of 2-3 large-scale, cross-agency recruitment and hiring pilots in geographies outside of Washington, DC. To be effective, this effort will need both sponsorship from senior leaders at the center of government and ownership from frontline leaders who can build relationships on the ground.

Recommendation 1. Provide Strategic Direction from the Center of Government 

The Office of Personnel Management (OPM) and the Office of Management and Budget (OMB) should launch a small team, composed of leaders in recruitment, personnel policy, and workforce data, to identify promising localities for coordinated regional hiring surges. The team should leverage centralized workforce data, or data from agencies’ Human Capital Operating Plans, to identify prospective hiring needs for government-wide and agency-specific mission-critical occupations (MCOs) in each FEB region, while ensuring that agency and sub-agency workforce plans consistently specify where future hiring will occur. The team might also consider seasonal or cyclical cross-agency hiring needs for inclusion in the pilot to facilitate year-to-year experimentation and analysis. With this information, it should engage the FEB Center of Operations and jointly select 2-3 FEB regions outside the capital where there are significant overlapping needs in MCOs.
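
As a rough illustration of this data step, the sketch below aggregates projected hires by FEB region and MCO to surface regions where multiple agencies share overlapping needs. The region names, agencies, occupations, and counts are invented placeholders, not real workforce data.

```python
# Illustrative sketch: aggregate projected hires by FEB region and
# mission-critical occupation (MCO), then surface region/MCO pairs where at
# least two agencies share a need. All names and counts are hypothetical.
from collections import defaultdict

# (FEB region, agency, MCO, projected hires) drawn from workforce plans.
projected_needs = [
    ("Columbus", "Agency A", "HR Specialist", 40),
    ("Columbus", "Agency B", "HR Specialist", 35),
    ("Columbus", "Agency C", "Data Scientist", 45),
    ("Denver",   "Agency A", "Engineer",       60),
    ("Denver",   "Agency B", "Data Scientist", 20),
]

by_region_mco = defaultdict(lambda: {"hires": 0, "agencies": set()})
for region, agency, mco, hires in projected_needs:
    cell = by_region_mco[(region, mco)]
    cell["hires"] += hires
    cell["agencies"].add(agency)

# Report region/MCO pairs with overlapping needs across agencies.
for (region, mco), cell in sorted(by_region_mco.items()):
    if len(cell["agencies"]) >= 2:
        print(f"{region}: {mco}, {cell['hires']} projected hires "
              f"across {len(cell['agencies'])} agencies")
```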

As this pilot moves forward, it is imperative that OMB and OPM empower on-the-ground federal leaders to drive surge hiring and equip them with flexible hiring authorities where needed. 

Recommendation 2. Empower Frontline Leadership from the FEBs

FEB field staff are well positioned to play a coordinating role to help drive surges, starting by convening agency leadership in their regions to validate hiring needs and make amendments as necessary. Together, they should set a reasonable, measurable goal for surge hiring in the coming year that reflects both total need and headline MCOs (e.g., “in the next 12 months, federal agencies in greater Columbus will hire 750 new employees, including 75 HR Specialists, 45 Data Scientists, and 110 Engineers”). 

To begin to develop a regional talent strategy, the FEB should form a small task force drawn from standout hiring managers and HR professionals, and then begin to develop a stakeholder map of key educational institutions and civic partners with access to talent pools in the region, sharing existing relationships and building new ones. The FEB should bring these external partners together to socialize shared needs and listen to their impressions of federal career opportunities in the region.

With these insights, the project team should publicly announce the number and types of roles needed and prepare sharp public-facing collateral that foregrounds headline MCOs and raises the profile of local federal agencies. In support, OPM should launch regional USAJOBS skins (e.g., “Columbus.USAJOBS.gov”) to make it easy to explore available positions. The team should conduct sustained, targeted outreach to local educational institutions aligned with hiring needs, so all federal agencies are on graduates’ and administrators’ radar.

These activities should build toward one or more signature large, in-person, cross-agency recruitment and hiring fairs, perhaps headlined by a high-profile Administration leader. Candidates should be able to come to an event, learn what it means to hold a job in their discipline in federal service, and apply live for roles at multiple agencies, all while exploring what else the federal government has to offer and building tangible relationships with federal recruiters. Ahead of the event, the project team should work with agencies to align their hiring cycles so the maximum number of jobs are open at the time of the event, potentially launching a pooled hiring action to coincide. The project team should capture all interested jobseekers from the event to seed the new Talent Campaigns function in USAStaffing, which enables agencies to bucket tranches of qualified jobseekers for future sourcing.

Recommendation 3. Replicate and Celebrate

Following each regional surge, the center of government and frontline teams should collaborate to distill key learnings and conclude the sprint engagement by developing a playbook for regional recruitment surges. Especially successful surges will also present an opportunity to spotlight excellence in recruitment and hiring, which is rarely celebrated. 

The center of government team should also identify geographies with effective relationships between agencies and talent providers for key roles, and leverage the growing use of remote work and location-negotiable positions to site certain roles in “friendly” labor markets.

Conclusion

Regional, cross-agency hiring surges are an opportunity for federal agencies to fill high-need roles across the country in a manner that is proactive and collaborative rather than reactive and competitive. They would facilitate a new level of information sharing between the frontline and the center of government and inform agency strategic planning efforts, allowing headquarters to better understand the realities of recruitment and hiring on the ground. They would also enable OPM and OMB to reach, engage, and empower frontline HR specialists and hiring managers, who are sufficiently numerous and fragmented that they are difficult to reach in the normal course of business.

Finally, engaging regionally will emphasize that most of the federal workforce resides outside of Washington, D.C., and build understanding and respect for the work of federal public servants in communities across the nation.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable, and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Extreme Heat and Wildfire Smoke: Consequences for Communities

More Extreme Weather Leads to More Public Health Emergencies

Extreme heat and wildfire smoke both pose significant and worsening public health threats in the United States. Extreme heat causes the premature deaths of an estimated 10,000 people in the U.S. each year, while more frequent and widespread wildfire smoke exposure has set back decades of progress on air quality in many states. Importantly, these two hazards are related: extreme heat can worsen and prolong wildfire risk, which can increase smoke exposure. 

Extreme heat and wildfire smoke events are each becoming more frequent and severe, but it is often overlooked that they frequently occur in the same place at the same time. Emerging research suggests that the combined impact of these hazards may be worse than the sum of their individual impacts. These combined impacts have the potential to put additional pressure on already overburdened healthcare systems, public budgets, and vulnerable communities. Failing to account for them could leave communities unprepared for these extreme weather events in 2025 and beyond.

To ensure resilience and improve public health outcomes for all, policymakers should consider the intersection of wildfire smoke and extreme heat at all levels of government. Our understanding of how extreme heat and wildfire smoke compound is still nascent, which limits national and local capacity to plan ahead. Researchers and policymakers should invest in understanding how extreme heat and wildfire smoke compound and use this knowledge to design synergistic solutions that enhance infrastructure resilience and ultimately save lives. 

Intersecting Health Impacts of Extremely Hot, Smoky Days

Wildfire smoke and extreme heat can each be deadly. As mentioned, exposure to extreme heat causes the premature deaths of an estimated 10,000 people in the U.S. a year. Long-term exposure to extreme heat can also worsen chronic conditions like kidney disease, diabetes, hypertension, and asthma. Exposure to the primary component of wildfire smoke, fine particulate matter (PM2.5), contributes to an additional estimated 16,000 American deaths annually. Wildfire smoke causes and exacerbates a range of respiratory and cardiovascular conditions, such as asthma attacks and heart failure, increasing the risk of early death.

New research suggests that the compounding health impacts of heat and smoke co-exposure could be even worse. For example, a recent analysis found that the co-occurrence of extreme heat and wildfire smoke in California leads to more hospitalizations for cardiopulmonary problems than heat days or smoke days alone.
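
As a simplified illustration of how analysts flag such co-occurrences, the sketch below identifies “compound” days from hypothetical daily maximum temperature and PM2.5 readings. The thresholds and values are invented for illustration; real studies use location-specific definitions of extreme heat and smoke days.

```python
# A minimal sketch, assuming hypothetical daily readings and illustrative
# thresholds; real analyses use location-specific definitions of heat and
# smoke days.
HEAT_THRESHOLD_F = 95      # assumed extreme-heat threshold (daily max, F)
SMOKE_THRESHOLD_PM25 = 35  # assumed smoke-day threshold (24-hr PM2.5, ug/m3)

daily = [
    {"date": "2020-08-14", "tmax_f": 98,  "pm25": 12},  # heat only
    {"date": "2020-08-15", "tmax_f": 101, "pm25": 55},  # heat + smoke
    {"date": "2020-08-16", "tmax_f": 94,  "pm25": 80},  # smoke only
    {"date": "2020-08-17", "tmax_f": 97,  "pm25": 60},  # heat + smoke
]

compound_days = [
    d["date"]
    for d in daily
    if d["tmax_f"] >= HEAT_THRESHOLD_F and d["pm25"] >= SMOKE_THRESHOLD_PM25
]
print("Compound heat-smoke days:", compound_days)
```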

Extreme heat also contributes to the formation of ground-level ozone. Like wildfire smoke, ground-level ozone can cause respiratory problems and exacerbate pre-existing conditions. This has already happened at scale: during the 2020 wildfire season, more than 68% of the western U.S. – about 43 million people – were affected in a single day by both ground-level ozone extremes and fine particulate matter from wildfire smoke.

Impacts on Populations Most Vulnerable to Combined Heat and Smoke

While extreme heat and wildfire smoke can pose health risks to everyone, some groups are more vulnerable: because they are more likely to be exposed, because they suffer more severe health consequences when exposed, or both. Below, we highlight groups that are most vulnerable to extreme heat and smoke and therefore may be vulnerable to the compound impacts of these hazards. More research is needed to understand how the compound impacts will affect the health of these populations.

Housing-Vulnerable and Housing-Insecure People

Access to air conditioning at home and work, tree canopy cover, buildings with efficient smoke filtration, heat insulation, and cooling capacity, and access to clean air centers during smoke events are all important protective factors against the effects of extreme heat and/or wildfire smoke. People lacking these types of infrastructure are at higher risk for the health effects of these two hazards as a result of increased exposure. In California, for example, communities with lower incomes and higher population density experience a greater likelihood of negative health impacts from hazards like wildfire smoke and extreme heat.

Outdoor Workers

Representing about 33% of the national workforce, outdoor workers — such as farmworkers, firefighters, and construction workers — experience much higher rates of exposure to environmental hazards, including wildfire smoke and extreme heat, than other workers. Farmworkers are particularly vulnerable even among outdoor workers: they face a 35 times greater risk of death from heat exposure than other outdoor workers. Additionally, outdoor workers are often lower-income, making it harder to afford protections and seek necessary medical care. Twenty percent of agricultural worker families live below the national poverty line.

Wildfire smoke exposure is estimated to have caused $125 billion annually in lost wages from 2007 to 2019, and extreme heat exposure is estimated to cause $100 billion in wage losses each year. Without changes to policy and practice, these numbers are only expected to rise. These income losses may exacerbate inequities in poverty rates and economic mobility, which shape overall health outcomes.

Pregnant Mothers and Infants

Extreme heat and wildfire smoke also pose a significant threat to the health of pregnant mothers and their babies. For instance, preterm birth is more likely during periods of higher temperatures and during wildfire smoke events. This correlation is significantly stronger among people who were simultaneously exposed to extreme heat and wildfire smoke PM2.5.

Preterm birth carries an array of risks for both the pregnant mother and the baby and is the leading cause of infant mortality. Babies born prematurely are more likely to face a range of serious health complications as well as long-term developmental challenges. For the parent, having a preterm baby can bring significant mental health and financial challenges.

Children

Wildfire smoke and extreme heat both have significant impacts on children’s health, development, and learning. Children are uniquely vulnerable to heat because their bodies do not regulate temperature as efficiently as adults’, making it harder to cool down and putting their bodies under stress. Children are also more vulnerable to air pollution from wildfire smoke because they inhale more air relative to their weight than adults and because their bodies and brains are still developing. PM2.5 exposure from wildfires has been linked to neuropsychological effects, such as ADHD, autism, impaired school performance, and decreased memory.

When schools remain open during extreme weather events like heat and smoke, student learning suffers. Research has found that each 1℉ increase in temperature leads to a 1% decrease in annual academic achievement. However, when schools close due to wildfire smoke or heat events, children lose crucial learning time and families must secure alternative childcare.
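
As a back-of-envelope illustration of the cited relationship, the sketch below applies the roughly 1%-per-1℉ estimate to a few hypothetical warming scenarios; the scenarios are illustrative assumptions, not projections from the underlying research.

```python
# Back-of-envelope illustration of the cited estimate: roughly a 1% drop in
# annual academic achievement per 1 F of warming. The scenarios below are
# hypothetical examples, not projections from the underlying research.
ACHIEVEMENT_LOSS_PER_DEG_F = 0.01  # ~1% per 1 F, per the research cited above

for delta_f in (1, 2, 3):
    loss = delta_f * ACHIEVEMENT_LOSS_PER_DEG_F
    print(f"A school year hotter by {delta_f} F -> ~{loss:.0%} lower "
          "annual academic achievement")
```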

Low-income students are more likely to be in schools without adequate air conditioning because their districts have fewer funds available for school improvement projects. This barrier has only been partially remedied in recent years through federal investments.

Older Adults

Older adults are more likely to have multiple chronic conditions, many of which increase vulnerability to extreme heat, wildfire smoke, and their combined effects. Older adults are also more likely to take regular medications, such as beta blockers for heart conditions, that increase predisposition to heat-related illness.

The most medically vulnerable older adults are in long-term care facilities. There is currently a national standard for operating temperatures in long-term care facilities, requiring them to operate at or below 81℉; there is no corresponding standard for wildfire smoke. Preliminary studies have found that long-term care facilities are unprepared for smoke events; in some facilities, the indoor air quality is no better than the outdoor air quality.

Challenges and Opportunities for the Healthcare Sector

The impacts of extreme heat and smoke have profound implications for public health and therefore for healthcare systems and costs. Extreme heat alone is expected to lead to $1 billion in U.S. healthcare costs every summer, while wildfire smoke is estimated to cost the healthcare system $16 billion every year from respiratory hospital visits and PM2.5 related deaths. 

Despite these high stakes, healthcare providers and systems are not adequately prepared to address wildfire smoke, extreme heat, and their combined effects. Healthcare preparedness and response are limited by a lack of real-time information about the morbidity and mortality expected from individual extreme heat and smoke events. For example, wildfire smoke events are often reported on a one-month delay, making it difficult to anticipate smoke impacts in real time. Further, despite the risks posed by heat and smoke independently and in combination, healthcare providers largely receive no education about environmental health and climate change. As a result, physicians do not routinely screen their patients for heat and smoke risks or for protective measures in the home, such as air conditioning and air filtration.

Potential solutions to improve preparedness in the healthcare sector include developing more reliable real-time information about the potential impacts of smoke, heat, and both combined; training physicians in screening patients for risk of heat and smoke exposure; and training physicians in how to help patients manage extreme weather risks. 

Challenges and Opportunities for Federal, State, and Local Governments 

State and local governments have a role to play in building facilities that are resilient to extreme heat and wildfire smoke and in educating people about how to protect themselves. However, funding for extreme heat and wildfire smoke is scarce and difficult for local jurisdictions in need to obtain. While some federal funding is available specifically to support smoke preparedness (e.g., EPA’s Wildfire Smoke Preparedness in Community Buildings Grant Program) and heat preparedness (e.g., NOAA NIHHIS’ Centers of Excellence), experts note that the funding landscape for both hazards is “limited and fragmented.” To date, communities have not been able to secure federal disaster funding for smoke or heat events through a Public Health Emergency Declaration or the Stafford Act. FEMA currently excludes impacts on human health from its economic valuations of disaster losses, so many impacted communities never see investments from post-disaster hazard mitigation that could build resilience to future events. Even if a declaration were made, it would likely cover a single “event” (e.g., wildfire smoke or extreme heat), with recovery dollars targeted toward mitigating the impacts of that event alone. Without careful consideration, rebuilding and resilience investments might be maladaptive for addressing the combined impacts.

Next Steps

The Wildland Fire Mitigation and Management Commission report offers a number of recommendations to improve how the federal government can better support communities in preparing for the impacts of wildfire smoke and acknowledges the need for more research on how heat and wildfire smoke compound. FAS has also developed a whole-government strategy towards extreme heat response, resilience, and preparedness that includes nearly 200 recommendations and notes the need for more data inputs on compounding hazards like wildfire smoke. Policymakers at the federal level should support research at the intersection of these topics and explore opportunities for providing technical assistance and funding that builds resilience to both hazards.

Understanding and planning for the compound impacts of extreme heat and wildfire smoke will improve public health preparedness, mitigate public exposure to extreme heat and wildfire smoke, and minimize economic losses. As the overarching research at this intersection is still emerging, there is a need for more data to inform policy actions that effectively allocate resources and reduce harm to the most vulnerable populations. The federal government must prioritize protection from both extreme heat and wildfire smoke, along with their combined effects, to fulfill its obligation to keep the public safe.

2025 Heat Policy Agenda

It’s official: 2024 was the hottest year on record. But Americans don’t need official statements to tell them what they already know: our country is heating up, and we’re deeply unprepared.

Extreme heat has become a national economic crisis: lowering productivity, shrinking business revenue, destroying crops, and pushing power grids to the brink. The impacts of extreme heat cost our Nation an estimated $162 billion in 2024 – equivalent to nearly 1% of the U.S. GDP.

Extreme heat is also taking a human toll. Heat kills more Americans every year than hurricanes, floods, and tornadoes combined. The number of heat-related illnesses is even higher. And even when heat doesn’t kill, it severely compromises quality of life. This past summer saw days when more than 100 million Americans were under a heat advisory. That means that there were days when it was too hot for a third of our country to safely work or play.

We have to do better. And we can.

Attached is a comprehensive 2025 Heat Policy Agenda for the Trump Administration and 119th Congress to better prepare for, manage, and respond to extreme heat. The Agenda represents insights from hundreds of experts and community leaders. If implemented, it will build readiness for the 2025 heat season – while laying the foundation for a more heat-resilient nation.

Core recommendations in the Agenda include the following:

  1. Establish a clear, sustained federal governance structure for extreme heat. This will involve elevating, empowering, and dedicating funds to the National Interagency Heat Health Information System (NIHHIS), establishing a National Heat Executive Council, and designating a National Heat Coordinator in the White House.
  2. Amend the Stafford Act to explicitly define extreme heat as a “major disaster”, and expand the definition of damages to include non-infrastructure impacts.
  3. Direct the Secretary of Health and Human Services (HHS) to consider declaring a Public Health Emergency in the event of exceptional, life-threatening heat waves, and fully fund critical HHS emergency-response programs and resilient healthcare infrastructure.
  4. Direct the Federal Emergency Management Agency (FEMA) to include extreme heat as a core component of national preparedness capabilities and provide guidance on how extreme heat events or compounding hazards could qualify as major disasters.
  5. Finalize a strong rule to prevent heat injury and illness in the workplace, and establish Centers of Excellence to protect troops, transportation workers, farmworkers, and other essential personnel from extreme heat.
  6. Retain and expand home energy rebates, tax credits, LIHEAP, and the Weatherization Assistance Program, to enable deep retrofits that cut the costs of cooling for all Americans and prepare homes and other infrastructure against threats like power outages.
  7. Transform the built and landscaped environment through strategic investments in urban forestry and green infrastructure to cool communities, transportation systems to secure the safe movement of people and goods, and power infrastructure to prepare for greater load demand.

The way to prevent deaths and losses from extreme heat is to act before heat hits. Our 60+ organizations, representing labor, industry, health, housing, environmental, academic, and community interests, urge President Trump and Congressional leaders to work quickly and decisively throughout the new Administration and 119th Congress to combat the growing heat threat. America is counting on you.


Executive Branch

Federal agencies can do a great deal to combat extreme heat under existing budgets and authorities. By quickly integrating the actions below into an Executive Order or similar directive, the President could meaningfully improve preparedness for the 2025 heat season while laying the foundation for a more heat-resilient nation in the long term. 

Streamline and improve extreme heat management.

More than thirty federal agencies and offices share responsibility for acting on extreme heat. A better structure is needed for the federal government to seamlessly manage extreme heat and build resilience. To streamline and improve the federal extreme heat response, the President must:

Boost heat preparedness, response, and resilience in every corner of our nation.

Extreme heat has become a national concern, threatening every community in the United States. To boost heat preparedness, response, and resilience nationwide, the President must:

Usher in a new era of heat forecasting, research, and data.

Extreme heat’s impacts are not well-quantified, limiting a systematic national response. To usher in a new era of heat forecasting, research, and data, the President must:

Protect workers and businesses from heat.

Americans become ill and even die due to heat exposure in the workplace, a moral failure that also threatens business productivity. To protect workers and businesses, the President must:

Prepare healthcare systems for heat impacts.

Extreme heat is both a public health emergency and a chronic stress to healthcare systems. Addressing the chronic disease epidemic will be impossible without confronting extreme heat, which worsens many chronic conditions. To prepare healthcare systems for heat impacts, the President must:

Ensure affordably cooled and heat-resilient housing, schools, and other facilities.

Cool homes, schools, and other facilities are crucial to preventing heat illness and death. To prepare the built environment for rising temperatures, the President must:

Promote Housing and Cooling Access

Prepare Schools and Other Facilities


Legislative Branch

Congress can support the President in combating extreme heat by increasing funds for heat-critical federal programs and by providing new and explicit authorities for federal agencies.

Treat extreme heat like the emergency it is.

Extreme heat has devastating human and societal impacts that are on par with other federally recognized disasters. To treat extreme heat like the emergency it is, Congress must:

Build community heat resilience by readying critical infrastructure.

Investments in resilience pay dividends, with every federal dollar spent on resilience returning $6 in societal benefits. Our nation will benefit from building thriving communities that are prepared for extreme heat threats, adapted to rising temperatures, and capable of withstanding extreme heat disruptions. To build community heat resilience, Congress must:

Leverage the Farm Bill to build national heat resilience.

Farm, food, forestry, and rural policy are all impacted by extreme heat. To ensure the next Farm Bill is ready for rising temperatures, Congress should:

Fund critical programs and agencies to build a heat-ready nation.

To protect Americans and mitigate the $160+ billion annual impacts of extreme heat, Congress will need to invest in national heat preparedness, response, and resilience. The tables on the following pages highlight heat-critical programs that should be extended, as well as agencies that need more funding to carry out heat-critical work, such as key actions identified in the Executive section of this Heat Policy Agenda.

An Agenda for Ensuring Child Safety in the AI Era

The next administration should continue to make responsible policy on artificial intelligence (AI) and children, especially in K-12 education, a top priority, and should create an AI and Kids Initiative led by the administration. AI is transforming how children learn and live, and policymakers, industry, and educators owe it to the next generation to put in place responsible policy that embraces this new technology while ensuring every child’s well-being, privacy, and safety are respected. The federal government should develop clear prohibitions, enforce them, and serve as a national clearinghouse for AI K-12 educational policy. It should also support comprehensive digital literacy related to AI.

Specifically, we think these policy elements need to be front of mind for decision-makers: build a coordinated framework for AI Safety; champion legislation to support youth privacy and online safety in AI; and ensure every child can benefit from the promise of AI. 

To build a coordinated framework for AI safety, the next administration should: ensure parity with existing child data protections; develop safety guidance for developers, including specific prohibitions to limit harmful designs and inappropriate uses; and direct the National Institute of Standards and Technology (NIST) to serve as the lead organizer of federal efforts on AI safety for children. To champion legislation supporting youth privacy and online safety in AI, the next administration should back the passage of online safety laws that address harmful design features (those that can lead to medically recognized mental health disorders and patterns of use indicating addiction-like behavior) and modernize federal children’s privacy laws, including updating the Family Educational Rights and Privacy Act (FERPA) and passing youth privacy laws that explicitly address AI data use, including prohibiting the development of commercial models from students’ educational information, with strong enforcement mechanisms. And to ensure every child can benefit from the promise of AI, the next administration should support comprehensive digital literacy efforts and prevent a deepening of the digital divide.

Importantly, policy and frameworks need to have teeth and must take the burden off individual states, school districts, and other actors to assess AI tools for children. Enforcement should be tailored to specific laws but should include, as appropriate, private rights of action, well-funded federal enforcers, and state and local enforcement. Companies should feel incentivized to act. The framework cannot be voluntary, enabling companies to pick and choose whether to follow recommendations. We have seen what happens when we do not put guardrails in place for tech, such as increased risk of child addiction, depression, and self-harm, and it should not happen again. We cannot say that this is merely a nascent technology and that we can delay the development of protections. We already know AI will critically impact our lives. We have watched tech critically impact lives, and AI-enabled tech is both faster-moving and potentially more extreme.

Challenge and Opportunity

AI is already embedded in children’s lives and education. According to Common Sense Media research, seven in ten teens have used generative AI, and the most common use is for help with homework. The research also found most parents are in the dark about their child’s generative AI use: only a third of parents whose children reported using generative AI were aware of such use. Beyond generative AI, machine learning systems are embedded in nearly every application kids use at school and at home. Further, most teens and parents say their schools either have no AI policy or have not communicated one.

Educational uses of AI are recognized as higher risk under the EU Artificial Intelligence Act and other international frameworks. The EU recognized that risk management requires special consideration when an AI system is likely to be accessed by children. The U.S. has developed a risk management framework, but it has not yet articulated risk levels or developed a specific educational or youth profile using NIST’s Risk Management Framework. There is still a deep need to ensure that AI systems likely to be accessed by children, including in schools, are assessed in terms of risk management and impact on youth.

It is well established that children and teenagers are vulnerable to manipulation by technology. Youth report struggling to set boundaries with technology, and according to a U.S. Surgeon General report, almost a third of teens say they are on social media almost constantly. Almost half of youth say social media has reduced their attention span and takes time away from other activities they care about. They are unequipped to assess sophisticated and targeted advertising: most children cannot distinguish ads from content until they are at least eight years old, and most do not realize ads can be customized. Additionally, social media design features lead, beyond addiction, to other mental and physical harms for teens: from unattainable beauty filters to friend comparison to recommendation systems that promote harmful content, such as the algorithmic promotion of viral “challenges” that can lead to death. AI technology is particularly concerning given its novelty, the speed and autonomy at which it can operate, and the frequent opacity, even to developers of AI systems, about how inputs and outputs may be used or exposed.

Particularly problematic uses of AI in products used in education and/or by children include emotion detection, biometric data, facial recognition (built by scraping online images that include children), companion AI, automated educational decisions, and social scoring. This list will continue to grow as AI is further adopted.

There are numerous useful frameworks and toolkits from expert organizations like EdSafe and TeachAI, and from government bodies like NIST, the National Telecommunications and Information Administration (NTIA), and the Department of Education (ED). However, the next administration should (1) encourage Congress to pass clear rules regarding AI products used with children; (2) have NIST develop risk management frameworks specifically addressing the use of AI in education and by children more broadly, and serve a clearinghouse function so individual actors and states do not bear that responsibility; and (3) ensure frameworks are required and prohibitions are enforced. The need is also reflected in the lack of updated federal privacy and safety laws that protect children and teens.

Plan of Action

The federal government should take note of the innovative policy ideas bubbling up at the state level. For example, there are enacted and proposed laws in Colorado, California, and Texas, and detailed guidance in over 20 states, including Ohio, Alabama, and Oregon.

Policymakers should take a multi-pronged approach to address AI for children and learning, recognizing that these uses are higher risk and therefore warrant additional layers of protection:

Recommendation 1. Build a coordinated framework: an AI Safety and Kids Initiative at NIST

As the federal government further details the risks associated with uses of AI, common uses of AI by kids should be designated and managed as high risk. This is a foundational step toward creating guardrails and ensuring protections for children as they use AI systems. The administration should clearly categorize education and use by children within a risk-level framework. The EU AI Act, for example, assigns AI systems to tiered risk levels. If the U.S. risk framework includes education and AI systems that are likely to be accessed by children, it provides a strong signal to policymakers at the state and federal level that these are uses requiring protections (audits, transparency, or enforcement) to prevent or address potential harm.
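
As a purely hypothetical illustration of what such tiering could look like in practice, the sketch below assigns illustrative risk tiers based on whether a system is used in education, is likely to be accessed by children, or employs uses this memo flags as particularly problematic. The attribute names and tier labels are assumptions for illustration, not NIST or EU AI Act definitions.

```python
# Hypothetical sketch of risk tiering; attribute names and tier labels are
# illustrative assumptions, not NIST or EU AI Act definitions.
def classify_risk(system: dict) -> str:
    # Uses this memo flags as particularly problematic for children.
    if system.get("social_scoring") or system.get("emotion_detection"):
        return "prohibited"
    # Educational use or likely child access triggers the high-risk tier,
    # which would carry audit, transparency, and enforcement obligations.
    if system.get("used_in_education") or system.get("likely_child_access"):
        return "high"
    return "standard"

tutor_bot = {"used_in_education": True, "likely_child_access": True}
print(classify_risk(tutor_bot))  # -> "high"
```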

NIST, in partnership with others, should develop risk management profiles for platform developers building AI products for use in education and for products likely to be accessed by children. Emphasis should be on safety and efficacy before technology products come to market, with audits throughout development. NIST should:

Work in partnership with NTIA, the FTC, the CPSC, and HHS to refine risk levels and risk management profiles for AI systems likely to be accessed by children.

The administration should task NIST’s AI Safety Institute with providing clarity on how safety should be considered for the use of AI in education and for AI systems likely to be accessed by children. This is accomplished through:

Recommendation 2. Ensure every child benefits from the promise of AI innovations 

The administration should support comprehensive digital literacy and prevent a deepening of the digital divide. 

Recommendation 3. Encourage Congress to pass clear, enforceable rules regarding privacy and safety for AI products used by children

Champion Congressional updates to privacy laws like COPPA and FERPA to address the use (especially for training) and sharing of personal information (PI) by AI tools. These laws can work in tandem; see, for example, recent proposed COPPA updates that would address children’s use of technology in educational settings.

Push for Congress to pass AI-specific legislation addressing the development and deployment of AI systems for use by children.

Support Congressional passage of online safety laws that address harmful design features in technology, specifically design features that can lead to medically recognized mental health disorders like anxiety, depression, eating disorders, substance use, and suicide, and to patterns of use indicating addiction-like behavior, as in Title I of the Senate-passed Kids Online Safety and Privacy Act.

Moving Forward

Ultimately, and critically, standards and requirements need teeth. Frameworks should require that companies comply with legal requirements or face effective enforcement (such as by a well-funded expert regulator or private lawsuits), with tools such as fines and injunctions. We have seen with past technological developments that voluntary frameworks and suggestions do not adequately protect children. Social media, for example, has failed to voluntarily protect children and poses risks to their mental health and well-being. From exacerbating body image issues to amplifying peer pressure and social comparison, from encouraging compulsive device use to reducing attention spans, from connecting youth to extremism, illegal products, and deadly challenges, the financial incentives do not appear to exist for technology companies to appropriately safeguard children on their own. The next Administration can support enforcement by funding the government positions responsible for enforcing such laws.