Federation of American Scientists Joins in Support of Authorizing the US Artificial Intelligence Safety Institute (AISI) within the National Institute of Standards and Technology (NIST)
FAS joins Americans for Responsible Innovation (ARI), the Information Technology Industry Council (ITI), and other signatories on a letter from 50 groups to the US Senate Commerce Committee
Washington, D.C. – July 29, 2024 – The Federation of American Scientists, along with other organizations focused on a wide range of AI policy goals, submitted a letter of support to the U.S. Senate Commerce Committee to authorize the U.S. Artificial Intelligence Safety Institute (AISI) within the National Institute of Standards and Technology (NIST).
As the letter states:
Establishing the AISI on a statutory basis will ensure that companies of all sizes – as well as other interested parties – continue to have a voice in the development of relevant standards and guidelines. This will accelerate the widespread adoption of AI and further ensure the US continues to lead the world in the development of AI standards.
Currently, businesses of all types recognize the potential of AI, but many have refrained from adoption, in part due to concerns regarding implementation risks. The AISI provides a venue to convene the leading experts across industry and government to contribute to the development of voluntary standards that ultimately assist in de-risking adoption of AI technologies.
FAS agrees that the AISI's mission of advancing responsible innovation in AI systems is an urgent priority that deserves our full support.
Safety, Trust, Adoption, and Innovation
The AISI is particularly important for enterprises that are not primarily technology companies and lack the resources to develop bespoke benchmarks and protocols for assessing AI systems. NIST, which has no regulatory authority and a long history of successful engagement with the private sector, carries out this work in the AI space primarily through the AI Safety Institute Consortium (AISIC).
Launched in February 2024, the AISIC consists of over 200 leading organizations – including FAS – from industry, trade groups, government, civil society and more, working together to share knowledge and “develop science-based and empirically backed guidelines and standards for AI measurement and policy.”
The letter was delivered to Chair Cantwell, Ranking Member Cruz, Chairman Lucas, and Ranking Member Lofgren.
###
ABOUT FAS
The Federation of American Scientists (FAS) works to advance progress on a broad suite of contemporary issues where science, technology, and innovation policy can deliver dramatic progress, and seeks to ensure that scientific and technical expertise has a seat at the policymaking table. Established in 1945 by scientists in response to the atomic bomb, FAS continues to work on behalf of a safer, more equitable, and more peaceful world. More information at fas.org.