Emerging Technology

The Federation of American Scientists Calls on OMB to Maintain the Agency AI Use Case Inventories at Their Current Level of Detail

03.06.25 | 2 min read | Text by Clara Langevin

The federal government’s approach to deploying AI systems is a defining force in shaping industry standards, academic research, and public perception of these technologies. Public sentiment toward AI remains mixed, with many Americans expressing a lack of trust in AI systems. To fully harness the benefits of AI, the public must have confidence that these systems are deployed responsibly and enhance their lives and livelihoods.

The first Trump Administration’s AI policies clearly recognized the opportunity to promote AI adoption through transparency and public trust. President Trump’s Executive Order 13859 explicitly stated that agencies must design, develop, acquire, and use “AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values.” This commitment laid the foundation for increasing government accountability in AI use.

A major step in this direction was the AI Use Case Inventory, established under President Trump’s Executive Order 13960 and later codified in the 2023 Advancing American AI Act. The agency inventories have since become a crucial tool in fostering public trust and innovation in government AI use. Recent OMB guidance (M-24-10) expanded their scope, standardizing AI definitions and collecting information on potential adverse impacts. The detailed inventories enhance accountability by ensuring transparency in AI deployments, track AI successes and risks to improve government services, and support AI vendors by providing visibility into public-sector AI needs, thereby driving industry innovation.

The end of 2024 marked a major leap in government transparency regarding AI use. Agency reporting on AI systems saw dramatic improvements, with federal AI inventories capturing more than 1,700 AI use cases, a 200% increase in reported use cases from the previous year. The Department of Homeland Security (DHS) alone reported 158 active AI use cases. Of these, 29 were identified as high-risk, with detailed documentation on how risks are being mitigated for 24 of them. This level of disclosure is essential for maintaining public trust and ensuring responsible AI deployment.

OMB is set to release revisions to its AI guidance (M-24-10) in mid-March, presenting an opportunity to ensure that transparency remains a top priority.

To support continued transparency and accountability in government AI use, the Federation of American Scientists has written a letter urging OMB to maintain its detailed guidance on AI inventories. We believe that sustained transparency is crucial to ensuring responsible AI governance, fostering public trust, and enabling industry innovation.
