Emerging Technology

Federation of American Scientists and 16 Tech Organizations Call on OMB and OSTP to Maintain Agency AI Use Case Inventories

03.06.25 | 3 min read

The first Trump Administration’s E.O. 13960 commitment laid the foundation for increasing government accountability in AI use; this should continue

Washington, D.C. – March 6, 2025 – The Federation of American Scientists (FAS), a non-partisan, nonprofit science think tank dedicated to developing evidence-based policies to address national challenges, today released a letter to the White House Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP), signed by 16 additional scientific and technical organizations, urging the current Trump administration to maintain the federal agency AI use cases inventories at the current level of detail.

“The federal government has immense power to shape industry standards, academic research, and public perception of artificial intelligence,” says Daniel Correa, CEO of the Federation of American Scientists. “By continuing the work set forth by the first Trump administration in Executive Order 13960 and continued by the bipartisan 2023 Advancing American AI Act, OMB’s detailed use case inventories help us understand the depth and scope of AI systems used to deliver government services.”

“FAS and our fellow organizations urge the administration to maintain these use case standards because these inventories provide a critical check on government AI use,” says Dr. Jedidah Isler, Chief Science Officer at FAS.

AI Guidance Update Mid-March

“Transparency is essential for public trust, which in turn is critical to maximizing the benefits of government AI use. That’s why FAS is leading a letter urging the administration to uphold the current level of agency AI use case detail—ensuring transparency remains a top priority,” says Oliver Stephenson, Associate Director of AI and Emerging Tech Policy at FAS.

“Americans want reassurances that the development and use of artificial intelligence within the federal government is safe, and that we have the ability to mitigate any adverse impacts. By maintaining guidance requiring federal agencies to collect and publish information on risks, development status, oversight, data use, and many other elements, OMB will continue strengthening Americans’ trust in the development and use of artificial intelligence,” says Clara Langevin, AI Policy Specialist at FAS.

Surging Use of AI in Government 

This letter follows the dramatic rise in the use of artificial intelligence across government, with further rapid growth anticipated. For example, at the end of 2024 the Department of Homeland Security (DHS) alone reported 158 active AI use cases. Of these, 29 were identified as high-risk, with detailed documentation on how 24 of those use cases are mitigating potential risks. OMB and OSTP have the ability and authority to set guidelines that can keep pace with government innovation.

FAS and our signers believe that sustained transparency is crucial to ensuring responsible AI governance, fostering public trust, and enabling responsible industry innovation.

Signatories Urging AI Use Case Inventories at Current Level of Detail

Federation of American Scientists
Beeck Center for Social Impact + Innovation at Georgetown University
Bonner Enterprises, LLC
Center for AI and Digital Policy
Center for Democracy & Technology
Center for Inclusive Change
CUNY Public Interest Tech Lab
Electronic Frontier Foundation
Environmental Policy Innovation Center
Mozilla
National Fair Housing Alliance
NETWORK Lobby for Catholic Social Justice
New America’s Open Technology Institute
POPVOX Foundation
Public Citizen
SeedAI
The Governance Lab

###

ABOUT FAS

The Federation of American Scientists (FAS) works to advance progress on a broad suite of contemporary issues where science, technology, and innovation policy can deliver dramatic progress, and seeks to ensure that scientific and technical expertise have a seat at the policymaking table. Established in 1945 by scientists in response to the atomic bomb, FAS continues to work on behalf of a safer, more equitable, and more peaceful world. More information about FAS’s work is available at fas.org.


ABOUT THIS COALITION

Organizations signed on to this letter represent a range of technology stakeholders across industry, academia, and the nonprofit sector. We share a commitment to AI transparency. We urge the current administration, OMB, and OSTP to retain the policies set forth in Trump’s Executive Order 13960 and continued in the bipartisan 2023 Advancing American AI Act.

