Emerging Technology

“Going Back to Cali” for AI Governance Lessons as States Take the Lead on AI Implementation

02.04.26 | 7 min read | Text by Caroline Siegel Singh

Imagine you are a state-level technology leader. Recent advancements in artificial intelligence promise to make approving small business licenses faster, improve K-12 student learning, or even standardize compliance between agencies, all of which could improve the experience of your state’s constituents. Eager to deploy this new technology responsibly, you look to peers in other states for guidance. Their answers vary wildly, and in the absence of federal guidance, it quickly becomes clear that there is no standardized playbook. You must chart the path forward on your own, with far more limited resources.

This scenario is becoming increasingly common as AI systems move rapidly into consumer-facing services. Without federal action on AI, state government leaders are increasingly shouldering the responsibility both for protecting consumers from potential algorithmic harms and for supporting responsible innovation that improves service delivery to their constituents. States have structural advantages that position them to experiment with regulatory approaches: shorter legislative cycles that allow for quicker course corrections, authority to pilot programs, and the use of sunset provisions that make it easier to revise or retire early-stage governance models. This often makes states the most agile regulators, able to swiftly set up guardrails for rapidly evolving AI technologies that impact their residents.

But this regulatory agility must be matched with the necessary government capacity to succeed. The current lack of federal action is forcing states not only to pass new AI laws, but also to take on huge implementation challenges without the AI expertise typically found in federal agencies or major private employers. Building this capacity within state governments will demand resources and technical expertise that most states are only just starting to develop. Without deliberate investment in transparency and talent, even the most well-crafted legislation might not achieve its intended goals. As state legislative cycles start back up for 2026, state policymakers should move forward with proposals that increase transparency and accountability and that bring new technical experts directly into government to meet the scale of need in the current moment.

Increased Transparency to Build Public Trust 

One of the most immediate ways that state legislatures can improve transparency is through the passage and successful implementation of use-case inventories. A use-case inventory is a public-facing publication of algorithmic tools and their specific uses. It discloses when and where state governments are using algorithmic tools in consumer-facing transactions, such as applications for social programs and public assistance benefits. Governments typically conduct these inventories as a mechanism for transparency and to facilitate third-party auditing of outcomes.
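To make the concept concrete, the sketch below shows what a single inventory entry might record. The schema is a hypothetical illustration, not any government’s actual reporting format, and the field names are the author’s assumptions.

```python
from dataclasses import dataclass

# A hypothetical, minimal schema for one entry in a public AI
# use-case inventory. Field names are illustrative only and do
# not reflect any state's actual reporting requirements.
@dataclass
class InventoryEntry:
    agency: str            # which agency deploys the tool
    use_case: str          # plain-language description of what the tool does
    vendor: str            # who built it (in-house or a third party)
    consumer_facing: bool  # does it touch transactions with residents?
    high_risk: bool        # e.g., benefits eligibility or fraud scoring

# Example populated from one of the California use cases discussed
# below; the boolean classifications here are the author's
# illustration, not official designations.
entry = InventoryEntry(
    agency="CA Employment Development Department",
    use_case="Scores claims on the likelihood of fraud",
    vendor="Thomson Reuters",
    consumer_facing=True,
    high_risk=True,
)
```

Even a handful of fields like these, published for every deployed tool, would let residents, auditors, and other agencies see at a glance where automated decision-making touches public services.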

The benefits of public-facing AI use-case inventories are far-reaching: they increase government transparency into automated decision-making outcomes, can provide valuable insights to private-sector product vendors, facilitate third-party auditing and bias testing, and can even increase interagency sharing of best practices when AI tools are used effectively. They are particularly important in high-risk decisions, such as those related to government benefits and services. Conversely, a lack of transparency around expensive acquisitions from private, third-party vendors can mean that an agency is unaware of what tools it has acquired and whether those tools are safe to deploy in consumer-facing settings without bias or other inaccuracies.

At a time when increasing numbers of Americans are growing skeptical of the practical uses of AI tools, it is doubly important to design public systems that encourage transparency when algorithmic tools are deployed in the public and private sectors alike.

Despite the lack of federal legislation regulating responsible AI usage, one area where the federal government has led is in the production of regular AI use-case inventories since 2021. First required by Executive Order 13960, signed in December 2020 during the first Trump administration, and implemented in the summer of 2021, these inventories provide a relatively transparent accounting of where AI is adopted within the federal enterprise. The policy has had bipartisan appeal, and the Biden administration continued producing regularly updated inventories for the public. With its recently updated inventory, the Trump administration now has the opportunity to use this tool to deliver increased public trust in AI, a clear administration priority.

Case Study: Implementation Challenges in California 

While the federal experience demonstrates that AI use-case inventories can work, it also reveals an important limitation: transparency mechanisms rely on technical talent and focused implementation to be successful. California offers a cautionary example. In 2023, the state legislature passed Assembly Bill 302, directing the state Department of Technology to “conduct a comprehensive inventory of all high-risk automated decision systems [ADS] being used by state agencies and submit a report to the Legislature by January 1, 2025, and annually thereafter.” Importantly, the bill covered systems that are “used to assist or replace human discretionary decisionmaking.” The bill was envisioned as a critical first step toward gaining insight into the ways state agencies deploy AI in consumer-facing interactions. It was also a reaction to public reporting of biased technology being used on applicants for public services and benefits.

However, the initial implementation deadline passed in early 2025, and the only report provided to the public was a single document stating that there are “no high-risk ADS [tools] being used by State agencies.” That claim is easily disputed by a simple Google search. For example, the state healthcare exchange uses automated document processing tools to gauge eligibility for affordable health insurance policies, the state unemployment insurance program uses an algorithmic tool developed by a private company to rate applicants on the likelihood that an application is fraudulent, and the state Department of Finance even plans to use generative AI tools as part of fiscal analysis and state budgeting work. These are significant decisions that can have real repercussions for California residents. Rather than a transparent use-case inventory that tells Californians where AI is being used in consumer-facing interactions, the public instead has a letter that, based on the examples above, incorrectly states that no algorithmic tools are in use. The table below lists additional examples of publicly disclosed automated decision-making system use cases in California state government.

| Domain | Agency | Use Case | Link |
| --- | --- | --- | --- |
| Government Benefits | Covered California | Automated document processing for health insurance eligibility | https://hbex.coveredca.com/toolkit/downloads/Intelligent_Document_Processing_System_Guide.pdf |
| Governance | CA Department of Finance | The Department of Finance will use generative AI in a new initiative to assess the fiscal impact of legislative proposals and their effects on the state budget | https://cdt.ca.gov/newsroom/2025/05/california-will-use-genai-to-accelerate-legislative-bill-analysis-and-the-impacts-on-the-state-budget/ |
| Taxation | California Department of Tax and Fee Administration | The CDTFA will use GenAI tools to assist staff in responding to taxpayers and businesses | https://cdtfa.ca.gov/news/25-04.htm |
| Government Benefits | CA Employment Development Department | A Thomson Reuters algorithm takes consumer data and scores applicants on the likelihood of a fraudulent application | https://edd.ca.gov/siteassets/files/about_edd/pdf/fraud-tools-assessment-annual-report-2025.pdf |
| Government Benefits | California Student Aid Commission | CSAC deployed a two-way chatbot engagement platform to interact with students applying for state financial aid | https://www.csac.ca.gov/cali |
| Government Benefits | CalHHS | The California Data Exchange Framework uses algorithms to match data across healthcare data systems | https://www.chhs.ca.gov/wp-content/uploads/2021/10/Data-Exchange-Framework-Pre-Read-Materials.pdf |
| Transportation | California Department of Transportation (CalTrans) | CalTrans is deploying pilot programs in traffic safety and congestion, and to assist staff with research outputs such as data analysis and report writing | https://insider.govtech.com/california/news/caltrans-offers-updates-on-ai-use-cases-future-plans |

Results like this underscore the urgent need to embed technical talent within state governments to ensure that laws are implemented as designed. When implementing its use-case inventories, the federal government provided guidance to reporting agencies and publicly released a final inventory for a majority of agencies. Even with that substantial support during the collection process, the federal effort still faced notable implementation challenges: many agencies initially failed to disclose all of their use cases, and a promised reporting template for agencies has yet to come to fruition. The State of California, by contrast, relied on an ad hoc process, polling state agency officials through two successive emails to conduct its inventory evaluation.

Scaling Government Talent to Bridge the Technical Capacity Gap

California’s experience implementing a use-case inventory is, unfortunately, not unique. Across the country, well-intentioned legislation is often passed into law only to falter during implementation. Once a bill is enacted, agency staff are tasked with operationalizing complex policies, often without the necessary technical expertise, staffing capacity, or financial resources to succeed. Without deliberate investment in these areas, the responsibility of properly regulating emerging technologies and protecting consumers from harm shifts to government employees who are poorly equipped to handle the growing scope and technical complexity of their workloads. That is why, in addition to transparency, states need to find ways to quickly bring technical talent and digital expertise into government to drive effective implementation of the coming onslaught of bills.

In the midst of massive layoffs within the federal government and the private sector, individual states now have access to historic levels of human capital and can adopt some of the innovations developed within the federal government in recent years. Methods like skills-based hiring, which rapidly brings in technical talent and scales new teams within government, have been developed and thoroughly tested through entities like the United States Digital Service (USDS) and the Consumer Financial Protection Bureau’s in-house technologists. These initiatives brought skilled workers into government at less than half the recruiting cost of private-sector hiring and saved hundreds of millions of dollars through reimbursable agreements with agencies in lieu of costly private-sector consultancy contracts.

During periods of financial uncertainty, it can be deeply challenging for state leaders to make the investments necessary to hire additional staff and build robust government teams. Another way to bridge the gap between policymakers and those who implement policy is to develop modernized policy fellowships that use endowments or other private funds to bring cutting-edge researchers and experts directly into government. California recently unveiled a revamped science and technology fellowship that will place additional AI experts within state agencies or the legislature to propel forward-thinking, informed policymaking.

Conclusion 

With no federal framework in place, state governments will be the primary drivers of the accountability and transparency needed to ensure AI serves the public rather than eroding democratic norms. This is a crucial window for state policymakers to establish both processes that further transparency and robust talent pipelines that can manage responsible deployment, in order to restore public trust and prevent harms before AI systems become further entrenched in critical public services. States that build transparent AI use-case inventories and invest in technical expertise will be best positioned to translate lofty regulatory principles into real protections for their citizens, while also fostering a fairer, more trustworthy environment for innovation to thrive.