Jack Titus is an AI Policy Fellow at the Federation of American Scientists and a member of the second cohort of the Horizon Institute for Public Service Fellowship, specializing in AI Policy. His research interests include AI assessment and monitoring, export controls, and open science research policy. Jack’s background is in education: he served as a high school teacher at Austin’s Liberal Arts & Science Academy, where he taught a 10th-grade survey course in philosophy that introduced students to ideas in ethics, epistemology, political philosophy, and more. Jack completed a BA in Philosophy at the University of Notre Dame in 2014 and an MA in Economics at the University of Texas in May 2023. In his spare time, he is a national parks nerd, avid climber, crossword solver, and aspiring home chef.
NIST’s guidance on “Managing Misuse Risk for Dual-Use Foundation Models” represents a significant step forward in establishing robust practices for mitigating catastrophic risks associated with advanced AI systems.
Like climate change, the societal risks from AI will likely come from the cumulative impact of many different systems. Unilateral commitments are poor tools to address such risks.
As Congress moves forward with the appropriations process, both the House and Senate have proposed various provisions related to artificial intelligence (AI) and machine learning (ML) across different spending bills.
Responsible governance is crucial to harnessing the immense benefits promised by AI. Here are recommendations for advancing ethical, high-impact AI with thoughtful oversight.
FAS is launching this live blog post to track all proposals related to artificial intelligence (AI) that have been included in the NDAA.
With U.S. companies creating powerful frontier AI models, the federal government must steer this technology’s development toward public benefit while mitigating its risks. Here are six ways to do that.