
Policy Ideas for a Safer AI Future
AI holds immense promise, from revolutionizing science and medicine to strengthening national competitiveness and improving how governments serve the public. Yet leading experts warn that frontier AI systems also introduce real and complex risks, from national security threats to unintended harms, and that managing these risks requires governance structures that reduce risk, promote reliability, and protect our institutions.
To put it another way, the path to responsible AI innovation runs through safety. For AI to reach its full potential, public trust must be earned and sustained. Today, AI safety is a rapidly evolving field that draws attention from a diverse range of policymakers across the political spectrum, including those in Congress, federal agencies, and state and local governments. Many of these leaders are eager to act but face a common challenge: they lack the time, technical capacity, and staffing support needed to develop and implement effective policy solutions.
That’s where policy entrepreneurship comes in. At FAS, we define policy entrepreneurship as empowering experts in science, technology, and policy with the tools, guidance, and networks to shape actionable and impactful policies and to work directly with policymakers to bring those ideas to life.
To meet that need, we created the Policy Entrepreneurship Fellowship (PEF), a six-month, part-time program that helps individuals turn their policy ideas into practice. This cycle, we focused exclusively on AI safety. From January to June 2025, our fellows worked as part-time affiliates of the Federation of American Scientists, in partnership with the Aspen Institute, to develop timely, actionable proposals. Along the way, they received mentorship, strategic support, and opportunities to connect with key actors in government and civil society.
We are proud to publish the following memos, which reflect our fellows’ ambition and insight, and offer concrete, compelling policy options to advance AI safety in the public interest.
Highlights from the memos include:

- At this inflection point, the choice is not between speed and safety but between ungoverned acceleration and a calculated momentum that allows our strategic AI advantage to be both sustained and secured.
- Improved detection could strengthen deterrence, but only if accompanying hazards (automation bias, model hallucinations, exploitable software vulnerabilities, and the risk of eroding assured second-strike capability) are well managed.
- A dedicated, properly resourced national entity is essential to support the development of safe, secure, and trustworthy AI and to drive widespread adoption by providing sustained, independent technical assessments and emergency coordination.
- Congress should establish a new grant program, coordinated by the Cybersecurity and Infrastructure Security Agency, to help state and local governments address AI challenges.
- If AI systems are not reliably safe and secure, their adoption may stall, especially in high-stakes scenarios, compromising American AI leadership. To secure global adoption of U.S. AI technology and ensure America's workforce can fully leverage advanced AI, the federal government should take a strategic and coordinated approach to supporting AI assurance and security R&D.
