Blank Checks for Black Boxes: Bring AI Governance to Competitive Grants
The misuse of AI in federally funded projects can endanger public safety and waste taxpayer dollars.
The Trump administration has a pivotal opportunity to spot wasteful spending, promote public trust in AI, and safeguard Americans from unchecked AI decisions. To tackle AI risks in grant spending, grant-making agencies should adopt trustworthy AI practices in their grant competitions and start enforcing them against reckless grantees.
Federal AI spending could soon skyrocket. One ambitious legislative plan from a Senate AI Working Group calls for doubling non-defense AI spending to $32 billion a year by 2026. That funding would expand AI investment across R&D, cybersecurity, testing infrastructure, and small business support.
Yet as federal AI investment accelerates, safeguards against snake oil lag behind. Grants can be wasted on AI that doesn't work. Grants can pay for untested AI with unknown risks. Grants can blur the lines of who is accountable for fixing AI's mistakes. And grants offer little recourse to those affected by an AI system's flawed decisions. Such failures risk exacerbating public distrust of AI and discouraging potentially beneficial uses.
Oversight for federal grant spending is lacking, with:
- No AI-specific policies vetting discretionary grants
- Varying AI expertise on grant judging panels
- Unclear AI quality standards set by grantmaking agencies
- Few or no pre- or post-award safeguards to identify and monitor high-risk AI deployments
Watchdogs, meanwhile, play a losing game, chasing errant programs one by one only after harm has been done. Luckily, momentum is building for reform. Policymakers recognize that investing in untrustworthy AI erodes public trust and stifles genuine innovation. Steps policymakers could take include setting clear AI quality standards, training grant judges, monitoring grantees' AI usage, and evaluating outcomes to ensure projects achieve their potential. By establishing oversight practices, agencies can foster high-potential projects for economic competitiveness while protecting the public from harm.
Challenge and Opportunity
Poor AI Oversight Jeopardizes Innovation and Civil Rights
The U.S. government advances public goals in areas like healthcare, research, and social programs by providing various types of federal assistance. This funding can go to state and local governments or directly to organizations, nonprofits, and individuals. When federal agencies award grants, they typically expect less routine involvement than they would with other funding mechanisms, such as cooperative agreements. Not all federal grants look the same: agencies administer mandatory grants, where the authorizing statute determines who receives funding, and competitive grants (or "discretionary grants"), where the agency selects award winners. In competitive grants, agencies have more flexibility to set program-specific conditions and award criteria, which gives policymakers room to direct dollars to innovative projects while mitigating emerging risks.
These competitive grants fall short on AI oversight. Programmatic policy is set in cross-cutting laws, agency-wide policies, and grant-specific rules; a lack of AI oversight mars all three. To date, no government-wide AI regulation extends to AI grantmaking. Even when President Biden’s 2023 AI Executive Order directed agencies to implement responsible AI practices, the order’s implementing policies exempted grant spending (see footnote 25) entirely from the new safeguards. In this vacuum, the 26 grantmaking agencies are on their own to set agency-wide policies. Few have. Agencies can also set AI rules just for specific funding opportunities. They do not. In fact, in a review of a large set of agency discretionary grant programs, only a handful of funding notices announced a standard for AI quality in a proposed program. (See: One Bad NOFO?) The net result? A policy and implementation gap for the use of AI in grant-funded programs.
Funding mistakes damage agency credibility, stifle innovation, and undermine the support that financial assistance aims to provide to people and communities. Recent controversies highlight how today's lax measures (particularly in setting clear rules for federal financial assistance, monitoring how funds are used, and responding to public feedback) have led to inefficient and rights-trampling results. In just the last few years, some of the problems we have seen include:
- The Department of Housing and Urban Development set few rules on a grant to protect public housing residents, letting officials off the hook when they bought facial recognition cameras to surveil and evict residents.
- Senators called for a pause on predictive policing grants, finding the Department of Justice failed to check that its grantees' use of AI complied with civil rights laws, the same laws on which the grant awards were conditioned.
- The National Institute of Justice issued a recidivism forecasting challenge which researchers argued “incentivized exploiting the metrics used to judge entrants, leading to the development of trivial solutions that could not realistically work in practice.”
Any grant can attract controversy, and these grants are no exception. But the cases above spotlight deficits in transparency, monitoring, and participation: the same kinds of AI oversight problems that weaken trust in government and that policymakers aim to fix in other contexts.
Smart spending depends on careful planning. Without it, programs may struggle to drive innovation or end up funding AI that infringes people's rights. OMB, agency Inspectors General, and grant managers will need guidance to evaluate what money is going toward AI and how to implement effective oversight. Government will face tradeoffs and challenges in promoting AI innovation in federal grants, particularly due to:
1) The AI screening problem. When reviewing applications, agencies might fail to screen out applicants that exaggerate their AI capabilities, or that fail to disclose bogus AI use altogether. Grantmaking requires calculated risks on ideas that might fail. But grant judges who are not experts in AI can make bad bets. Applicants will pitch AI solutions directly to these non-experts, and grant winners, regardless of their original proposal, will likely purchase and deploy AI, creating additional oversight challenges.
2) The grant-procurement divide. When planning a grant, agencies might set overly burdensome restrictions that dissuade qualified applicants from applying or that consume too much time, getting in the way of grant goals. Grants are meant to be hands-off; fostering breakthroughs while preventing negligence will be a difficult needle to thread.
3) Limited agency capacity. Agencies may be unequipped to monitor grant recipients' use of AI. After awarding funding, agencies can miss when vetted AI breaks down on launch. While agencies audit grantees, those audits typically focus on fraud and financial missteps. In some cases, agencies may not be measuring grantee performance well at all (slides 12-13). Yet regular monitoring, similar to the oversight used in procurement, will be necessary to catch emergent problems that affect AI outcomes. Enforcement, too, could be cause for concern; agencies claw back funds for procedural issues, but "almost never withhold federal funds when grantees are out of compliance with the substantive requirements of their grant statutes." Even as the funding agency steps away, an inaccurate AI system can persist, embedding risks over the long term.
Plan of Action
Recommendation 1. OMB and agencies should bake in pre-award scrutiny through uniform requirements and clearer guidelines
- Agencies should revise funding notices to require applicants to disclose plans to use AI, with a greater level of disclosure required for funding in a foreseeable high-risk context. Agencies should take care not to overburden applicants with disclosures on routine AI uses, particularly as the tools grow in popularity. States may be a laboratory to watch for policy innovation; Illinois, for example, has proposed AI disclosure policies and penalties for its state grants.
- Agencies should make AI-related grant policies clearer to prospective applicants. Such a change would be consistent with OMB policy and the latest Uniform Guidance, the set of rules OMB sets for agencies to manage their grants. For example, in grant notices, any AI-related review criteria should be plainly stated, rather than inferred from a project's description. Any AI restrictions should be spelled out too, not merely incorporated by reference. More generally, agencies should consider simplifying grant notices, publishing yearly AI grant priorities, hosting information sessions, and/or extending public comment periods for significant AI-related discretionary spending.
- Agencies could consider generally applicable metrics with which to evaluate applicants' suggested AI uses. For example, agencies may require applicants to demonstrate they have searched for less discriminatory algorithms in the development of an automated system (one illustrative sketch of such a search follows this list).
- OMB could formally codify pre-award AI risk assessments in the Uniform Guidance. OMB updates the guidance periodically (most recently in 2024) and can also issue smaller revisions.
- Agencies could also provide resources targeted at less established AI developers, who might otherwise struggle to meet auditing and compliance standards.
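To make the "less discriminatory algorithms" idea concrete, the sketch below shows one way an applicant might document such a search: train several candidate models, measure both accuracy and a simple disparity metric (here, the gap in selection rates between groups), and then prefer, among comparably accurate models, the one with the least disparity. This is a minimal illustration only; the synthetic data, the one-percentage-point accuracy tolerance, the choice of scikit-learn logistic regression, and the selection-rate metric are all hypothetical assumptions, not standards drawn from any agency guidance.

```python
# Illustrative sketch of a "less discriminatory algorithm" search.
# All data, thresholds, and metric choices are hypothetical assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant data: features X, outcome y, protected-group flag g.
n = 5000
X = rng.normal(size=(n, 5))
g = rng.integers(0, 2, size=n)                       # 0/1 group indicator
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

split = n // 2
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = y[:split], y[split:]
g_te = g[split:]

def selection_rate_gap(pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

# Search over candidate models (here, simply varying regularization strength).
candidates = []
for c in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = (pred == y_te).mean()
    gap = selection_rate_gap(pred, g_te)
    candidates.append((c, acc, gap))

# Among models within 1 percentage point of the best accuracy (an assumed
# tolerance), prefer the one with the smallest selection-rate gap.
best_acc = max(acc for _, acc, _ in candidates)
viable = [cand for cand in candidates if cand[1] >= best_acc - 0.01]
chosen = min(viable, key=lambda cand: cand[2])
print(f"Chosen C={chosen[0]}, accuracy={chosen[1]:.3f}, gap={chosen[2]:.3f}")
```

An applicant's documentation of this kind of search (candidates considered, metrics used, and the tradeoff tolerance) is the sort of evidence a grant reviewer could weigh without prescribing any particular modeling approach.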
Recommendation 2. OMB and grant marketplaces should coordinate information sharing between agencies
To support review of AI-related grants, OMB and grantmaking agency staff should pool knowledge on AI’s tricky legal, policy, and technical matters.
- OMB, through its Council on Federal Financial Assistance, should coordinate information-sharing between grantmaking agencies on AI risks.
- The White House Office of Science and Technology Policy, the National Institute of Standards and Technology, and the Administrative Conference of the United States (ACUS) should support agencies by devising what information agencies should collect on their grantees' use of AI, in a way that limits administrative burden on grantees.
- OMB can share expertise on monitoring and managing grant risks, best practices and guides, and tradeoffs; other relevant interagency councils can share grant evaluation criteria and past-performance templates across agencies.
- Online grants marketplaces, such as the Grants Quality Service Management Office Marketplace operated by the Department of Health and Human Services, should improve grantmakers' decisions by sharing AI-specific information, such as an applicant's AI quality, audit, and enforcement history, where applicable.
Recommendation 3. Agencies should embrace targeted hiring and talent exchanges for grant review boards
Agencies should have experts in a given AI topic judging grant competitions. Doing so requires overcoming talent acquisition challenges. To that end:
- Agency Chief AI Officers should assess grant office needs as part of their OMB-required assessments of AI talent. Those officers should also include grant staff in any AI trainings and, when prudent, in agency-wide risk assessment meetings.
- Agencies should staff review boards with technical experts through talent exchanges and targeted hiring.
- OMB should coordinate a pool of drop-in technical experts who can serve as consultants across agencies.
- OMB should support the training of federal grants staff on matters that touch AI, particularly as surveyed grant managers see training as an area of need.
Recommendation 4. Agencies should step up post-award monitoring and enforcement
You can't improve what you don't measure, especially when it comes to AI. Quantifying, documenting, and enforcing against careless AI uses may be a new task for many grantmaking agencies. Incident reporting will improve the chances that existing cross-cutting regulations, including civil rights laws, can rein in AI gone awry.
- Congress can delegate investigative authority to agencies with AI audit expertise. Such an effort might mirror the cross-agency approach taken by the Department of Justice’s Procurement Collusion Strike Force, which investigates antitrust crimes in procurement and grantmaking.
- Congress can require agencies to cut off funds when grantees show repeated or egregious violations of grant terms pertaining to AI use. Agencies, where authorized, should voluntarily enforce against repeat bad actors through spending clawbacks, cutoffs, or ban lists.
- Agencies should consider introducing dispute resolution procedures that give grantees subject to enforcement an avenue for redress.
Recommendation 5. Agencies should encourage and fund efforts to investigate and measure AI harms
- Agencies should invest in establishing measurement and standards within their topic areas on which to evaluate prospective applicants. For example, the National Institute of Justice recently opened funding to research evaluating the use of AI in the criminal legal system.
- Agencies should follow through on longstanding calls to encourage public whistleblowing on grantee missteps, particularly around AI.
- Agencies should solicit feedback from the public through RFIs on grants.gov on how to innovate in AI in their specific research or topic area.
Conclusion
Little limits how grant winners can spend federal dollars on AI. With the government poised to massively expand its spending on AI, that should change.
The federal failure to oversee AI use in grants erodes public trust, civil rights, effective service delivery, and the promise of government-backed innovation. Congressional efforts to remedy these problems, such as launching probes and drafting letters, are important oversight measures, but they come only after the damage is done.
Both the Trump and Biden administrations have recognized that AI is exceptional and needs exceptional scrutiny. Many of the lessons learned from scrutinizing federal agency AI procurement apply to grant competitions. Today’s confluence of public will, interest, and urgency is a rare opportunity to widen the aperture of AI governance to include grantmaking.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Agencies' enabling statutes often provide the authority for grant competitions; the statutory language typically leaves it to agencies to set further program-specific policies on the competition. Additionally, laws like the DATA Act and the Federal Grant and Cooperative Agreement Act offer definitions and guidance to agencies on the use of federal funds.
Agencies already conduct a great deal of pre-award planning to align grantmaking with Executive Orders. For example, in one survey of grantmakers, a little over half of respondents updated their pre-award processes, such as applications and organization information, to comply with an Executive Order. Grantmakers aligning grant planning with the Trump administration’s future Executive Orders will likely follow similar steps.
A wide range of states, local governments, companies, and individuals receive grant competition funds. Spending records, available on USASpending.gov, give some insight into where grant funding goes, though these records, too, can be incomplete.