Clara Langevin
AI Policy Specialist
she/her
AI Governance, Technology Policy, Public Sector AI, Data Governance

Clara Langevin is an AI Policy Specialist on the Emerging Technologies team. She focuses on promoting responsible AI adoption across sectors and on developing policy guidance on AI and data ethics, transparency, and explainability. Before joining FAS, Clara was Head of the AI and Machine Learning Platform at the Centre for the Fourth Industrial Revolution Brazil, a World Economic Forum affiliate, where she led projects on public sector AI procurement, AI in healthcare, and precision agriculture.

Clara is a graduate of Columbia University’s School of International and Public Affairs (MPA in Development Practice) and the University of Washington’s Jackson School of International Studies.

Publications
Emerging Technology
Issue Brief
Securing American AI Leadership: A Strategic Action Plan for Innovation, Adoption, and Trust

To sustain America’s leadership in AI innovation, accelerate adoption across the economy, and guarantee that AI systems remain secure and trustworthy, we offer a set of policy recommendations.

03.24.25 | 23 min read
Emerging Technology
Blog
The Federation of American Scientists Calls on OMB to Maintain the Agency AI Use Case Inventories at Their Current Level of Detail

To fully harness the benefits of AI, the public must have confidence that these systems are deployed responsibly and enhance their lives and livelihoods.

03.06.25 | 2 min read
Emerging Technology
Day One Project
Policy Memo
A National Guidance Platform for AI Acquisition

By leveraging its substantial purchasing power responsibly, the government can encourage high-quality, inclusive AI solutions that address diverse citizen needs while setting a strong precedent for innovation and accountability.

01.08.25 | 10 min read
Emerging Technology
Issue Brief
Public Comment on Executive Branch Agency Handling of CAI Containing PII

The federal government is responsible for ensuring the safe and private processing of personally identifiable information within commercially available information used to develop and deploy artificial intelligence systems.

12.18.24 | 14 min read
Emerging Technology
Issue Brief
Public Comment on the U.S. Artificial Intelligence Safety Institute’s Draft Document: NIST AI 800-1, Managing Misuse Risk for Dual-Use Foundation Models

NIST’s guidance on “Managing Misuse Risk for Dual-Use Foundation Models” represents a significant step forward in establishing robust practices for mitigating catastrophic risks associated with advanced AI systems.

08.28.24 | 13 min read