Emerging Technology
day one project

Policy Ideas for a Safer AI Future

06.10.25 | 2 min read | Text by Oliver Stephenson & Clara Langevin & Karinna Gerhardt

AI holds immense promise, from revolutionizing science and medicine to strengthening national competitiveness and improving how governments serve the public. Yet leading experts warn that frontier AI systems introduce real and complex risks, ranging from national security threats to unintended negative consequences. Addressing these risks requires comprehensive governance structures that reduce harm, promote reliability, and protect our institutions.

To put it another way, the path to responsible AI innovation runs through safety. For AI to reach its full potential, public trust must be earned and sustained. Today, AI safety is a rapidly evolving field that draws attention from a diverse range of policymakers across the political spectrum, including those in Congress, federal agencies, and state and local governments. Many of these leaders are eager to act but face a common challenge: they lack the time, technical capacity, and staffing support needed to develop and implement effective policy solutions.

That’s where policy entrepreneurship comes in. At FAS, we define policy entrepreneurship as empowering experts in science, technology, and policy with the tools, guidance, and networks to shape actionable and impactful policies and to work directly with policymakers to bring those ideas to life. 

To meet that need, we created the Policy Entrepreneurship Fellowship (PEF), a six-month, part-time program that supports individuals in advancing their policy ideas into practice. This cycle, we focused exclusively on AI safety. From January to June 2025, our fellows worked as part-time affiliates of the Federation of American Scientists, in partnership with the Aspen Institute, to develop timely, actionable proposals. Along the way, they received mentorship, strategic support, and opportunities to connect with key actors in government and civil society.

We are proud to publish the following memos, which reflect our fellows’ ambition and insight, and offer concrete, compelling policy options to advance AI safety in the public interest.

Read the memos
Policy Memo
Moving Beyond Pilot Programs to Codify and Expand Continuous AI Benchmarking in Testing and Evaluation

At this inflection point, the choice is not between speed and safety but between ungoverned acceleration and a calculated momentum that allows our strategic AI advantage to be both sustained and secured.

06.11.25 | 12 min read
Policy Memo
Develop a Risk Assessment Framework for AI Integration into Nuclear Weapons Command, Control, and Communications Systems

Improved detection could strengthen deterrence, but only if accompanying hazards—automation bias, model hallucinations, exploitable software vulnerabilities, and the risk of eroding assured second‑strike capability—are well managed.

06.11.25 | 8 min read
Policy Memo
A National Center for Advanced AI Reliability and Security

A dedicated and properly resourced national entity is essential for supporting the development of safe, secure, and trustworthy AI to drive widespread adoption by providing sustained, independent technical assessments and emergency coordination.

06.11.25 | 10 min read
Policy Memo
A Grant Program to Enhance State and Local Government AI Capacity and Address Emerging Threats

Congress should establish a new grant program, coordinated by the Cybersecurity and Infrastructure Security Agency, to assist state and local governments in addressing AI challenges.

06.11.25 | 8 min read
Policy Memo
Accelerating AI Interpretability To Promote U.S. Technological Leadership

If AI systems are not always reliable and secure, this could inhibit their adoption, especially in high-stakes scenarios, potentially compromising American AI leadership.

06.10.25 | 12 min read
Policy Memo
Accelerating R&D for Critical AI Assurance and Security Technologies

To secure global adoption of U.S. AI technology and ensure America’s workforce can fully leverage advanced AI, the federal government should take a strategic and coordinated approach to support AI assurance and security R&D.

06.10.25 | 7 min read
Meet the Fellows
AI Safety Policy Entrepreneurship Fellow
Maria Dooling
Emerging Technology,
WMDs and Nonproliferation,
Biosecurity,
Nuclear Security,
National Security
AI Safety Policy Entrepreneurship Fellow
Kateryna Halstead
Open-Source AI,
National Security,
OSINT,
U.S.-China Competition
AI Safety Policy Entrepreneurship Fellow
Jam Kraprayoon
AI Safety Policy Entrepreneurship Fellow
Christopher Maximos
State & Local Government,
Emerging Technologies,
Workforce Development
AI Safety Policy Entrepreneurship Fellow
Joe O’Brien
Dual-use AI monitoring and reporting,
Incident preparedness and response,
Independent oversight and evaluation policy
AI Safety Policy Entrepreneurship Fellow
Matteo Pistillo
AI governance,
AI policy,
AI evaluations