Like climate change, the societal risks from AI will likely come from the cumulative impact of many different systems. Unilateral commitments are poor tools to address such risks.
Despite significant advances in scientific tools and methods, the traditional, labor-intensive model of scientific research in materials discovery has seen little innovation.
Five federal policy recommendations to maximize opportunity and minimize risk at the intersection of biology and artificial intelligence
As Congress moves forward with the appropriations process, both the House and Senate have proposed various provisions related to artificial intelligence (AI) and machine learning (ML) across different spending bills.
The looming competition for global talent makes it necessary to evaluate and update U.S. policies governing international visa holders.
Researchers at the nonpartisan science think tank support Biden’s executive order on the use of artificial intelligence in government.
Responsible governance is crucial to harnessing the immense benefits promised by AI. Here are recommendations for advancing ethical, high-impact AI with thoughtful oversight.
Despite their importance, programs focused on AI trustworthiness make up only a small fraction of the total funding the National Science Foundation allocates for AI R&D.
When it comes to AI, the Department of Defense is still moving too slowly to deliver meaningful and sustainable innovation.
FAS is launching this live blog post to track all proposals related to artificial intelligence (AI) that have been included in the NDAA.
With U.S. companies creating powerful frontier AI models, the federal government must guide this technology’s growth toward public benefit and risk mitigation. Here are six ways to do that.
Before releasing a new, powerful system like GPT-4 to millions of users, we must ask: “How can we know that this system is safe, trustworthy, and reliable enough to be released?”