Liam Alexander was previously a Policy Associate on the Technology & Innovation Team, with a special focus on AI policy. Prior to joining FAS, he conducted research on AI governance and strategy, and before that he worked on a couple of political campaigns in his home state of North Carolina. He graduated from the University of Pennsylvania with a degree in Philosophy, Politics, and Economics and a concentration in Emerging Technology Policy.
As Congress moves forward with the appropriations process, both the House and Senate have proposed various provisions related to artificial intelligence (AI) and machine learning (ML) across different spending bills.
Despite their importance, programs focused on AI trustworthiness account for only a small fraction of the National Science Foundation's total funding for AI R&D.
FAS is launching this live blog post to track all proposals around artificial intelligence (AI) that have been included in the NDAA.
With U.S. companies creating powerful frontier AI models, the federal government must guide this technology's development toward public benefit while mitigating its risks. Here are six ways to do that.
Before releasing a new, powerful system like GPT-4 to millions of users, we must ask: “How can we know that this system is safe, trustworthy, and reliable enough to be released?”
The 21st century will be shaped by the U.S.-China strategic competition. The United States and China are locked in a battle for global power, influence, and resources, and are fighting for control of the world’s most important geopolitical regions, including the Indo-Pacific and Africa. They are also vying for leadership in cutting-edge technologies such as […]
To help seed the ground for bipartisan progress, we’ve put together a menu of the best policy ideas on a range of critical topics.