
What’s Progress and What’s Not in the Trump Administration’s AI Action Plan
Artificial intelligence is already shaping how Americans work, learn, and receive vital services—and its influence is only accelerating. To steer this technology toward the public good, the United States needs a coherent, government-wide AI agenda that encourages innovation and trustworthiness, grounded in the best scientific evidence.
In February 2025, the Trump Administration sought public comment on its development of an AI Action Plan. The Federation of American Scientists saw this as an opportunity to contribute expert, nonpartisan guidance, combining insights from our policy team with ideas from the broader science and technology community, developed as part of our Day One Project. In our comments to the White House Office of Science and Technology Policy, we recommended incorporating responsible policies to unleash AI innovation, accelerate AI adoption, ensure secure and trustworthy AI, and strengthen our existing world-class government institutions.
Last week, the Trump Administration released its AI Action Plan. The document contains many promising elements related to AI research and development, interpretability and control, managing national security risks, and new models for accelerating scientific research. However, it also contains concerning provisions, such as those inhibiting state regulation and removing mentions of diversity, equity, and inclusion and climate change from the NIST AI Risk Management Framework. These changes weaken the United States’ ability to lead on some of the most pressing societal challenges associated with AI technologies.
Despite the AI Action Plan’s ambitious proposals, it will remain aspirational without funding, proper staffing, and clear timelines. The deep cuts to budgets and personnel across the government present an incongruous picture of the Administration’s priorities and policy agenda for emerging technologies, and place pressure on Congress to ensure the plan is properly supported.
Promising Advances & Opportunities
AI Interpretability
As an organization, we’ve developed and shared concrete ideas for advancing AI interpretability—the science of understanding how AI works under the hood. The Administration’s elevation of AI interpretability in the plan is a promising step. Improving interpretability is not only critical for technical progress but also essential to fostering public trust and confidence in AI systems.
We have provided a roadmap for the government to deliver on the promise of interpretable AI in both our AI Action Plan comments and a more detailed memo. In these documents we’ve advocated for advancing AI explainability through open-access resources, standardized benchmarks, common tasks, user-centered research, and a robust repository of techniques to ensure consistent, meaningful, and widely applicable progress across the field. We’ve also argued for the federal government to prioritize interpretable AI in procurement—especially for high-stakes applications—and to establish research and development agreements with AI companies and interpretability research organizations to red team critical systems and conduct targeted interpretability research.
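To make the underlying idea concrete, below is a minimal sketch of one widely used interpretability technique, input-gradient saliency, which scores how sensitive a model’s prediction is to each input feature. The toy model and input are invented purely for illustration and are not drawn from the Action Plan or from our memo.

```python
# A minimal sketch of input-gradient saliency, one common
# interpretability technique. The model and input are toy
# stand-ins, invented purely for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small classifier standing in for a deployed model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one input example
logits = model(x)
pred = logits.argmax(dim=1).item()

# Gradient of the predicted-class score with respect to the input:
# larger magnitudes suggest features the prediction leaned on.
logits[0, pred].backward()
saliency = x.grad.abs().squeeze()

for i, score in enumerate(saliency.tolist()):
    print(f"feature {i}: saliency {score:.4f}")
```

Techniques like this are exactly what standardized benchmarks and a shared repository of methods would help evaluate consistently across the field.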
AI Research and Development
Beyond interpretability, the AI Action Plan lays out an ambitious and far-reaching agenda for AI research and development, including robustness and control, advancing the science of AI, and building an AI evaluation ecosystem. We recognize that the Administration has incorporated forward-looking proposals that echo those from our Day One Project—such as building world-class scientific datasets and using AI to accelerate materials discovery. These policy proposals showcase our perspective that the federal government has a critical role to play in supporting groundbreaking scientific and technical research.
A Toolbox for AI Procurement
The Administration’s focus on strengthening the federal workforce’s capacity to use and manage AI is an essential step toward responsible deployment, cross-agency coordination, and reliability in government AI use. The proposed GSA-led AI procurement toolbox closely mirrors our recommendation for a resource to guide agencies through the AI acquisition process. Proper implementation of this policy could further support government efficiency and agility in responding to the needs of constituents.
Managing National Security Risks
The Administration also clearly recognizes the emerging national security risks posed by AI. While the exact nature of many of these risks remains uncertain, the plan contains prudent recommendations on key areas like biosecurity and cybersecurity, and highlights the important role that the Center for AI Standards and Innovation (CAISI) can play in responding to these risks. FAS has previously published policy ideas on how to prepare for emerging AI threats and create a system for reporting AI incidents, and has outlined how CAISI can play a greater role in advancing AI reliability and security. These proposals can help the government implement the recommendations advanced in the Action Plan.
Focused Research Organizations
The Administration’s support for Focused Research Organizations (FROs) is a promising step. FROs address well-defined challenges that require scale and coordination but are not immediately profitable, and they are an exciting model for accelerating scientific progress. FAS first published on FROs in 2020 and has since released a range of proposals from experts that are well suited to the FRO model. Since 2020, FROs have collectively raised over $100 million in philanthropic funding, but we believe this is the first time the U.S. government has explicitly embraced the FRO model.
Where the AI Action Plan Falls Short
Restricting State-Level Guardrails
The Administration’s AI Action Plan proposes to restrict federal AI funding to states whose AI rules “hinder the effectiveness” of that funding. While avoiding unnecessary red tape is sensible, this unclear standard could give the administration wide latitude to block state rules at its discretion. FAS has recently opposed congressional preemption of state-level AI regulation in the absence of federal action. Without national standards for AI, state rules provide an opportunity to develop best practices for responsible AI adoption.
Failing to Address Bias in AI Systems
We are also concerned by the recommended revision to the NIST AI Risk Management Framework (RMF) that would eliminate references to diversity, equity, and inclusion. AI bias is a proven, measurable phenomenon, documented by a broad scientific consensus among leading researchers and practitioners across sectors. Failing to address such biases leaves the public vulnerable to discriminatory or unfair systems in areas like healthcare, housing, hiring, and access to public services, including deeply consequential biases such as those affecting rural communities. Inaction on AI bias will only inhibit beneficial adoption and further erode trust in the accuracy of algorithmic systems.
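As an illustration of how measurable bias is, the sketch below computes one standard fairness statistic, the demographic parity gap: the difference in positive-outcome rates between two groups. The decisions and group labels here are invented for illustration only.

```python
# A minimal sketch of one standard bias measurement: the
# demographic parity gap, i.e. the difference in approval
# rates between two groups. All data here is invented.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # 1 = approved, 0 = denied
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

gap = positive_rate("a") - positive_rate("b")
print(f"approval-rate gap between groups: {gap:+.2f}")  # prints +0.20
```

Metrics like this are routinely applied to deployed systems in hiring, lending, and benefits adjudication, which is why dropping bias from the RMF removes a working measurement tool rather than an abstraction.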
The AI Action Plan also directs the federal government to procure AI models only from developers who “ensure that their systems are objective and free from top-down ideological bias,” a requirement implemented via an associated executive order. Building modern AI systems involves a huge range of choices, including which data to use for training, how to “fine-tune” the model for particular use cases, and the “system prompt” that guides model behavior. Each of these stages can affect model outputs in ways that are not well understood and can be difficult to control. There is no standard definition of a model that is “free from top-down ideological bias,” and this vague standard could easily be misused or improperly implemented at the agency level, with unintended consequences for the public. We encourage the administration to instead focus on increasing the transparency and explainability of systems as a mechanism for preventing unintended bias in outputs.
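To illustrate just one of those configuration points, the sketch below shows how a developer-chosen system prompt can steer a chat model’s answers. Here `generate` is a hypothetical stand-in for a real model call, not an actual API, and the prompts are invented.

```python
# A minimal sketch of how a developer-chosen "system prompt" can
# steer a chat model's answers. `generate` is a hypothetical
# stand-in for a real model call; the prompts are invented.
def generate(messages: list[dict]) -> str:
    # Hypothetical: a real implementation would run a trained model
    # whose behavior also depends on training data and fine-tuning.
    system = next(m["content"] for m in messages if m["role"] == "system")
    return f"[answer shaped by hidden instruction: {system!r}]"

question = {"role": "user", "content": "Summarize the new policy."}

for system_prompt in (
    "Answer neutrally and note uncertainty.",
    "Emphasize only the policy's benefits.",
):
    reply = generate(
        [{"role": "system", "content": system_prompt}, question]
    )
    print(reply)
```

The same user question yields different answers depending on instructions the user never sees, which is why transparency about these configuration points is more verifiable than a neutrality standard no one can test.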
Ignoring the Environmental Costs and Opportunities
The Administration’s direction to remove mention of climate change from the RMF overlooks the very real climate and environmental impacts of the growing resource demands of large-scale AI systems. Measuring and managing environmental impacts is an important component of AI infrastructure buildout, and removing this policy lever will also restrict AI adoption. The revision is also a missed opportunity to advance the ways AI can help tackle climate change and other environmental issues. In our recent AI and Energy Policy Sprint, we developed policy memos highlighting the benefits AI could bring to our energy system and environment, as well as ways of responding to AI’s environmental and health impacts.
The Importance of Public Trust
The current lack of public trust in AI risks inhibiting innovation and adoption of AI systems, leaving new methods undiscovered and new benefits unrealized. A failure to uphold high standards in the technology we deploy would also place our nation at a strategic disadvantage relative to our competitors. Recognizing this, both the first and second Trump administrations have emphasized public trust as a key theme in their AI policy documents. Many of the research directions outlined in the administration’s AI Action Plan promise to steer AI technology in a more trustworthy direction and deliver widespread benefits to the public. Several measures, however, simultaneously threaten to undermine important guardrails, while cuts to important government programs work against the goals the administration has set for itself.
The Federation of American Scientists will continue to collaborate with the scientific community to place rigorous evidence-based policy at the heart of delivering AI that works for all Americans.