Emerging Technology
day one project

A Grant Program to Enhance State and Local Government AI Capacity and Address Emerging Threats

06.11.25 | 8 min read | Text by Christopher Maximos

States and localities are eager to leverage artificial intelligence (AI) to optimize service delivery and infrastructure management, but they face significant resource gaps. Without sufficient personnel and capital, these jurisdictions cannot properly identify and mitigate the risks associated with AI adoption, including cyber threats, surging power demands, and data privacy issues. Congress should establish a new grant program, coordinated by the Cybersecurity and Infrastructure Security Agency (CISA), to assist state and local governments in addressing these challenges. Such funding will allow the federal government to instill best security and operating practices nationwide, while identifying effective strategies from the grassroots that can inform federal rulemaking. Ultimately, federal, state, and local capacity are interrelated; federal investments in state and local government will help the entire country harness AI’s potential and reduce the risk of catastrophic events such as a large, AI-powered cyberattack.

Challenge and Opportunity 

In 2025, 45 state legislatures have introduced more than 550 bills focused on the regulation of artificial intelligence, covering everything from procurement guidelines to acceptable AI uses in K-12 education to liability standards for AI misuse and error. Major cities have followed suit with sweeping guidance of their own, identifying specific AI risks such as bias and hallucination and issuing directives to reduce their impact on government functions. The influx of regulatory action reflects burgeoning enthusiasm about AI’s ability to streamline public services and increase government efficiency.

Yet two key roadblocks stand in the way: inconsistent rules and uneven capacity. AI regulations vary widely across jurisdictions — sometimes offering contradictory guidance — and public agencies often lack the staff and skills needed to implement them. In a 2024 survey, six in ten public sector professionals cited the AI skills gap as their biggest obstacle in implementing AI tools. This reflects a broader IT staffing crisis, with over 450,000 unfilled cybersecurity roles nationwide, which is particularly acute in the public sector given lower salaries and smaller budgets.

These roadblocks at the state and local level pose a major risk to the entire country. In cyberspace, ransomware attacks on state and local targets have demonstrated that hackers can exploit small vulnerabilities in legacy systems to gain broad access and cause major disruption, extending far beyond their initial targets. The same threat trajectory is conceivable with AI. States and cities, lacking the necessary workforce and adhering to a patchwork of different regulations, will find themselves unable to safely adopt AI tools or to mount a uniform response in an AI-related crisis.

In 2021, Congress established the State and Local Cybersecurity Grant Program (SLCGP) at CISA, which focused on resourcing states, localities, and tribal territories to better respond to cyber threats. States have received almost $1 billion in funding to implement CISA’s security best practices like multifactor authentication and establish cybersecurity planning committees, which effectively coordinate strategic planning and cyber governance among state, municipal, and private sector information technology leaders. 

Federal investment in state and local AI capacity-building can help standardize the existing, disparate guidance and bridge resource gaps, just as it has in the cybersecurity space. AI coordination today is less mature than cybersecurity coordination was when the SLCGP was established in 2021. The updated Federal Information Security Modernization Act, which enabled the Department of Homeland Security to set information security standards across government, had been in effect for seven years by 2021, and some of its best practices had already trickled down to states and localities.

Thus, the need for clear AI state capacity, guardrails, and information-sharing across all levels of government is even greater. A small federal investment now can unlock large returns by enabling safe, effective AI adoption and avoiding costly failures. Local governments are eager to deploy AI but lack the resources to do so securely. Modest funding can align fragmented rules, train high-impact personnel, and surface replicable models—lowering the cost of responsible AI use nationwide. Each successful pilot creates a multiplier effect, accelerating progress while reducing risk.

Plan of Action 

Recommendation 1. Congress should authorize a three-year pilot grant program focused on state and local AI capacity-building.

SLCGP’s authorization expires on August 31, 2025, which provides two distinct pathways for a pilot grant program. The Homeland Security Committees in the House and Senate could amend and renew the existing SLCGP provision to make room for an AI-focused pilot. Alternatively, Congress could pass a new authorization, which would likely set the stage for a sustained grant program upon successful completion of the pilot. A separate authorization would also allow Congress to consider other federal agencies as program facilitators or co-facilitators, should lawmakers want to cover AI integrations that do not directly touch critical infrastructure, which is CISA’s primary focus.

Alternatively, the House Energy and Commerce and Senate Commerce, Science, and Transportation Committees could authorize a program coordinated by the National Institute of Standards and Technology, which produced the AI Risk Management Framework and has strong expertise in a range of vulnerabilities embedded within AI models. Congress might also consider mandating an interagency advisory committee to oversee the program, including, for example, experts from the Department of Energy to provide technical assistance and guidance on projects related to energy infrastructure.

In either case, the authorization should be coupled with a starting appropriation of $55 million over three years, which would fund ten statewide pilot projects of up to $5 million each, plus administrative costs. The structure of the program would broadly parallel SLCGP’s goals. First, it would align state and local AI approaches with existing federal guidance, such as the NIST AI Risk Management Framework and the Trump Administration’s OMB guidance on the regulation and procurement of artificial intelligence applications. Second, the program would establish better coordination between local and state authorities on AI rules. A new authorization for AI, however, gives Congress and the agency tasked with managing the program an opportunity to improve upon SLCGP’s existing provisions. This new program should permit states to coordinate their AI activities through existing leadership structures rather than setting up a new planning committee. The legislative language should also prioritize skills training and allocate a portion of grant funding to recruiting and retaining AI professionals within state and local government who can oversee projects.

Recommendation 2. Pilot projects should be implementation-focused and rooted in one of three significant risks: cybersecurity, energy usage, or data privacy.

Similar to SLCGP, this pilot grant program should be focused on implementation. The target product for a grant is a functional local or state AI application that has undergone risk mitigation, rather than a report that identifies issues in the abstract. For example, under this program, a state would receive federal funding to integrate AI into the maintenance of its cities’ wastewater treatment plants without compromising cybersecurity. Funding would support AI skills training for the relevant municipal employees and scaling of certain cybersecurity best practices, like data encryption, that minimize the project’s risk. States will submit reports to the federal government at each phase of their project: first documenting the risks they identified, then explaining how they prioritized which risks to mitigate, then detailing their specific mitigation actions, and finally reporting retrospectively on the outcomes of those mitigations once the project is in operational use.

This approach would maximize the pilot’s return on investment. States will be able to complete high-impact AI projects without taking on the associated security costs. The frameworks generated from the project can be reused many times over for later projects, as can the staff who are hired or trained with federal support. 

Given the inconsistency of priorities surfaced in state and local AI directives, the federal government should set the agenda of risks to focus on. The clearest set of risks for the pilot are cybersecurity, energy usage, and data privacy, all of which are highlighted in NIST’s AI Risk Management Framework.

If successful, the pilot could expand to address additional risks or support broader, multi-risk, multi-state interventions.

Recommendation 3. The pilot program must include opportunities for grantees to share their ideas with other states and localities.

Arguably the most important facet of this new AI program will be forums where grantees share their learnings. Administrative costs for this program should go toward funding a twice-yearly, in-person forum where grantees can publicly share updates on their projects. An in-person forum would also provide states with the space to coordinate further projects on the margins. CISA is particularly well positioned to host a forum like this given its track record of convening critical infrastructure operators. Grantees should be required to publish guidance, tools, and templates in a public, digital repository. Ideally, states that did not secure grants can adopt successful strategies from their peers and save taxpayers the cost of duplicate planning work.

Conclusion 

Congress should establish a new grant program to assist state and local governments in addressing AI risks, including cybersecurity, energy usage, and data privacy. Such federal investments will give structure to the dynamic yet disparate national AI regulatory conversation. The grant program, which will cost $55 million to pilot over three years, will yield a high return on investment for both the ten grantee states and the peers that learn from its findings. By making these investments now, Congress can keep states moving fast toward AI without opening the door to critical, costly vulnerabilities.

This memo was written by an AI Safety Policy Entrepreneurship Fellow over the course of a six-month, part-time program that supports individuals in advancing their policy ideas into practice. You can read more policy memos and learn about Policy Entrepreneurship Fellows here.

Frequently Asked Questions
Does Congress have to authorize a new grant program to operate this pilot?

No. Congress could leverage SLCGP’s existing authorization to focus on projects at the intersection of AI and cybersecurity. Lawmakers could offer an amendment to the next Homeland Security Appropriations package that directs modest SLCGP funding (e.g., $10–20 million) to AI projects. Alternatively, Congress could insert language on AI into SLCGP’s reauthorization, which is due on August 31, 2025.


Although leveraging the existing authorization would be easier, Congress would be better served by authorizing a new program, which can focus on multiple priorities including energy usage and data privacy. To stay agile, the language in the statute could allow CISA to direct funds toward newly emerging risks as they are identified by NIST and other agencies. Finally, a specific authorization would pave the way for an expansion of this program, assuming the initial ten-state pilot goes well.

Why focus on individual state and local projects rather than an across-the-board effort to improve capacity in all states across all vectors?

This pilot is right-sized for efficiency, impact, and cost savings. A program to bring all 50 states into compliance with certain AI risk mitigation guidelines would cost hundreds of millions of dollars, which is not feasible in the current budgetary environment. States are starting from very different baselines, especially with their energy infrastructure, which makes it difficult to bring them all to a single endpoint. Moreover, because AI is evolving so rapidly, guidance is likely to age poorly. The energy needs of AI might change before states finish their plans to build data centers. Similarly, federal data privacy laws might go into effect that undercut or contradict the best practices established by this program.

What are the benefits of this deployment approach?

This pilot will allow 10 states and/or localities to quickly deploy AI implementations that produce real value: for example, quicker emergency response times and savings on infrastructure maintenance. CISA can learn from the grantees’ experiences to iterate on federal guidance. The agency might identify a stumbling block on one project and refine its guidance to prevent 49 other states from encountering the same obstacle. If grantees effectively share their learnings, they can substantially shorten other states’ planning processes and help the federal government build guidance that is more rooted in the realities of AI deployment.

Some have expressed concerns that planning-focused grants create additional layers of bureaucracy. Will this pilot just add more red tape to AI integration?

No. If done correctly, this pilot will cut red tape and allow the entire country to harness AI’s positive potential. States and localities are developing AI regulations in a vacuum. Some of the laws proposed are contradictory or duplicative precisely because many state legislatures are not coordinating effectively with state and local government technical experts. When bills do pass, guidance is often poorly implemented because there is no overarching figure, beyond a state chief information officer, to bring departments and cities into compliance. In essence, 50 states are producing 50 sets of regulations because there is scant federal guidance and few mechanisms for them to learn from other states and coordinate within their state on best practices.

How will this program streamline and optimize state and local AI planning processes?

This program aims to cut down on bureaucratic redundancy by leveraging states’ existing cyber planning bodies to take a comprehensive approach to AI. By convening the appropriate stakeholders from the public sector, private sector, and academia to work on a funded AI project, states will develop more efficient coordination processes and identify regulations that stand in the way of effective technological implementation. States and localities across the country will build their guidelines based on successful grantee projects, absorbing best practices and casting aside inefficient rules. It is impossible to mount a coordinated response to significant challenges like AI-enabled cyberattacks without some centralized government planning, but this pilot is designed to foster efficient and effective coordination across federal, state, and local governments.
