
A Federal Center of Excellence to Expand State and Local Government Capacity for AI Procurement and Use

02.14.25 | 9 min read | Text by Anna Kawakami & Ken Holstein & Haiyi Zhu

The administration should create a federal center of excellence for state and local artificial intelligence (AI) procurement and use—a hub for expertise and resources on public sector AI procurement and use at the state, local, tribal, and territorial (SLTT) government levels. The center could be created by expanding the General Services Administration’s (GSA) existing Artificial Intelligence Center of Excellence (AI CoE). As new waves of AI technologies enter the market, shifting both practice and policy, such a center of excellence would help bridge the gap between existing federal resources on responsible AI and the specific, grounded challenges that individual agencies face. In the decades ahead, new AI technologies will touch an expanding breadth of government services—including public health, child welfare, and housing—vital to the wellbeing of the American people. Such a center would equip public sector agencies with sustainable expertise and set a consistent standard for responsible AI procurement and use. It would help ensure that AI truly enhances services, protects the public interest, and builds public trust in AI-integrated state and local government services.

Challenge and Opportunity 

State, local, tribal, and territorial (SLTT) governments provide services that are critical to the welfare of our society, including housing, child support, healthcare, credit lending, and education. SLTT governments are increasingly interested in using AI to assist in providing these services. However, they face immense challenges in responsibly procuring and using new AI technologies. While grappling with limited technical expertise and budget constraints, SLTT government agencies considering or deploying AI must navigate data privacy concerns, anticipate and mitigate biased model outputs, ensure model outputs are interpretable to workers, and comply with sector-specific regulatory requirements, among other responsibilities.

The emergence of foundation models (large AI systems adaptable to many different tasks) for public sector use exacerbates these existing challenges. Technology companies are now rapidly developing new generative AI services tailored toward public sector organizations. For example, earlier this year, Microsoft announced that Azure OpenAI Service would be newly added to Azure Government, its suite of cloud services targeting government customers. Services like these are not created for specific public sector applications and use contexts; instead, they are meant to serve as a foundation for developing specific applications.

For SLTT government agencies, these generative AI services blur the line between procurement and development: Beyond procuring specific AI services, we anticipate that agencies will increasingly be tasked with the responsible use of general AI services to develop specific AI applications. Moreover, recent AI regulations suggest that responsibility and liability for the use and impacts of procured AI technologies will be shared by the public sector agency that deploys them, rather than just resting with the vendor supplying them.

SLTT agencies must be equipped with the responsible procurement practices and accountability mechanisms needed to navigate these shifts across products, practice, and policy. Federal agencies have started to provide guidelines for responsible AI procurement (e.g., Executive Order 13960, OMB M-21-06, the NIST AI Risk Management Framework). But research shows that SLTT governments need additional support to apply these resources: whereas existing federal resources provide high-level, general guidance, SLTT government agencies must navigate a host of challenges that are context-specific (e.g., specific to regional laws, agency practices, etc.). SLTT government agency leaders have voiced a need for individualized support in accounting for these context-specific considerations when navigating procurement decisions.

Today, private companies are promising state and local government agencies that using their AI services can transform the public sector. They describe diverse potential applications, from supporting complex decision-making to automating administrative tasks. However, there is minimal evidence that these new AI technologies can improve the quality and efficiency of public services. There is evidence, on the other hand, that AI in public services can have unintended consequences: when these technologies go wrong, they often worsen the very problems they aim to solve, for example by increasing disparities in decision-making even when deployed to reduce them.

Challenges to responsible technology procurement follow a historical trend: government technology has drawn frequent criticism for failures over the past decades. Because public services such as healthcare, social work, and credit lending have such high stakes, failures in these areas can have far-reaching consequences. They also entail significant financial costs, with millions of dollars wasted on technologies that ultimately get abandoned. Even when subpar solutions remain in use, agency staff may be forced to work with them for extended periods despite their poor performance.

The new administration has a critical opportunity to redirect these trends. Training every relevant individual within SLTT government agencies, or hiring new experts within each agency, is neither cost- nor resource-effective. Without appropriate training and support from the federal government, AI adoption is likely to concentrate in well-resourced SLTT agencies, leaving less-resourced agencies (which often serve lower-income communities) behind. This could lead to disparate AI adoption and practices among SLTT agencies, further exacerbating existing inequalities. The administration urgently needs a plan that supports SLTT agencies in learning to handle responsible AI procurement and use, developing sustainable knowledge about how to navigate these processes over time, without requiring that each relevant individual in the public sector be trained. This plan also needs to ensure that, over time, the public sector workforce becomes able to navigate complicated AI procurement processes and relationships without constant retraining of each new wave of workers.

In the context of federal and SLTT governments, a federal center of excellence for state and local AI procurement would accomplish these goals through a “hub and spoke” model. This center of excellence would serve as the “hub” that houses a small number of selected experts from academia, non-profit organizations, and government. These experts would then train “spokes”—existing state and local public sector agency workers—in navigating responsible procurement practices. To support public sector agencies in learning from each other’s practices and challenges, this federal center of excellence could additionally create communication channels for information- and resource-sharing across state and local agencies.

Procured AI technologies in government will serve as the backbone of local public services for decades to come. Upskilling government agencies to make smart decisions about which AI technologies to procure (and which are best avoided) would not only protect the public from harmful AI systems but would also save the government money by decreasing the likelihood of adopting expensive AI technologies that end up getting dropped. 

Plan of Action 

A federal center of excellence for state and local AI procurement would ensure that procured AI technologies are responsibly selected and used to serve as a strong and reliable backbone for public sector services. This federal center of excellence can support both intra-agency and inter-agency capacity-building and learning about AI procurement and use—that is, mechanisms to support expertise development within a given public sector agency and between multiple public sector agencies. This federal center of excellence would not play an approval role (i.e., SLTT governments would receive guidance and support but would not have to seek approval of their practices). Rather, the goal would be to upskill SLTT agencies so they are better equipped to navigate their own AI procurement and use endeavors.

To upskill SLTT agencies through intra-agency capacity-building, the federal center of excellence would house experts in relevant domain areas (e.g., responsible AI, public interest technology, and related topics). Fellows would work with cohorts of public sector agencies to provide training and consultation services. These fellows, who would come from government, academia, and civil society, would build on their existing expertise and experiences with responsible AI procurement, integrating new considerations proposed by federal standards for responsible AI (e.g., Executive Order 13960, OMB M-21-06, the NIST AI RMF). The fellows would serve as advisors to help operationalize these guidelines into practical steps and strategies, helping to set a consistent bar for responsible AI procurement and use practices along the way.

Cohorts of SLTT government agency workers, including existing agency leaders, data officers, and procurement experts, would work together with an assigned advisor to receive consultation and training support on specific tasks their agency is currently facing. For example, for agencies or programs with low AI maturity or familiarity (e.g., departments that are beginning to explore the adoption of new AI tools), the center of excellence can help them navigate the procurement decision-making process: understanding their agency-specific technology needs, drafting procurement contracts, selecting among proposals, and negotiating maintenance plans. For agencies and programs with high AI maturity or familiarity, the advisor can train staff to recognize and mitigate unexpected AI behaviors as they arise. These communication pathways would allow federal agencies to better understand the challenges state and local governments face in AI procurement and maintenance, which can help seed ideas for improving existing resources and creating new resources for AI procurement support.

To scaffold inter-agency capacity-building, the center of excellence can build the foundations for cross-agency knowledge-sharing. In particular, it would include a communication platform and an online hub of procurement resources, both shared among agencies. The communication platform would allow state and local government agency leaders navigating AI procurement to share challenges, lessons learned, and tacit knowledge with one another. The online hub would aggregate resources collected by the center of excellence and SLTT government agencies. Through the hub, agencies could upload and learn about new responsible AI resources and toolkits (such as those created by government and the research community), as well as examples of procurement contracts that agencies themselves have used.

To implement this vision, the new administration should expand the U.S. General Services Administration’s (GSA) existing Artificial Intelligence Center of Excellence (AI CoE), which provides resources and infrastructural support for AI adoption across the federal government. We propose expanding this existing AI CoE to include the components of our proposed center of excellence for state and local AI procurement and use. This would direct support towards SLTT government agencies—which are currently unaccounted for in the existing AI CoE—specifically via our proposed capacity-building model.

Over the next 12 months, the goals of expanding the AI CoE would be three-fold:

1. Develop the core components of our proposed center of excellence within the AI CoE. 

2. Launch collaborations with the first sample of SLTT government agencies, focusing on building a path for successful collaborations.

3. Build a path for our proposed center of excellence to grow and gain experience. If the first few collaborations earn strong reviews, design a strategy for scaling the center’s reach.

Conclusion

Expanding the existing AI CoE to include our proposed federal center of excellence for AI procurement and use can help ensure that SLTT governments are equipped to make informed, responsible decisions about integrating AI technologies into public services. This body would provide necessary guidance and training, helping to bridge the gap between high-level federal resources and the context-specific needs of SLTT agencies. By fostering both intra-agency and inter-agency capacity-building for responsible AI procurement and use, this approach builds sustainable expertise, promotes equitable AI adoption, and protects public interest. This ensures that AI enhances—rather than harms—the efficiency and quality of public services. As new waves of AI technologies continue to enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence will help maintain high standards for responsible public sector AI for decades to come.

This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for, whoever takes office in 2025 and beyond.

PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.

Frequently Asked Questions
What is existing guidance for responsible SLTT procurement and use of AI technologies?

Federal agencies have published numerous resources to support responsible AI procurement, including Executive Order 13960, OMB M-21-06, and the NIST AI Risk Management Framework (RMF). Some of these resources provide guidance on responsible AI development in organizations broadly, across the public, private, and non-profit sectors. For example, the NIST AI RMF provides organizations with guidelines to identify, assess, and manage risks in AI systems to promote the deployment of more trustworthy and fair AI systems. Others focus on public sector AI applications. For instance, OMB M-21-06, published by the Office of Management and Budget, describes strategies to help federal agencies follow responsible AI procurement and use practices.

Why a federal center? Can’t SLTT governments do this on their own?

Research shows that these resources often demand additional skills and knowledge, making them difficult for agencies to use effectively on their own. A federal center of excellence for state and local AI procurement could help agencies learn to use these resources. Adapting these guidelines to specific SLTT agency contexts requires careful interpretation, which may in turn require specialized expertise or resources. Creating this federal center of excellence to guide responsible SLTT procurement on the ground can help bridge this critical gap. Fellows in the center of excellence and SLTT procurement agencies can draw on this existing pool of guidance to form a strong foundation for their practices.

How has this “hub and spoke” model been used before?

The hub and spoke model has been used across a range of applications to support efficient management of resources and services. For instance, in healthcare, providers have used the hub and spoke model to organize their networks of services: specialized, intensive services are located in “hub” facilities, whereas secondary services are provided in “spoke” facilities, allowing for more efficient and accessible healthcare. Similar organizational networks appear in transportation, retail, and cybersecurity. Microsoft follows a hub and spoke model to govern responsible AI practices and disseminate relevant resources. Microsoft has a single centralized “hub” within the company that houses responsible AI experts—those with expertise on the implementation of the company’s responsible AI goals. These responsible AI experts then train “spokes”—workers residing in product and sales teams across the company, who learn about best practices and support their teams in implementing them.

Who would be the experts selected as fellows by the center of excellence? What kind of training would they receive?

Through their training, fellows would build a stronger foundation in (1) the on-the-ground challenges and practices that public sector agencies grapple with when developing, procuring, and using AI technologies and (2) existing AI procurement and use guidelines provided by federal agencies. The content of the training would be drawn from syntheses of prior research on public sector AI procurement and use challenges, as well as existing federal resources available to guide responsible AI development. For example, prior research has explored public sector challenges to supporting algorithmic fairness and accountability and responsible AI design and adoption decisions, amongst other topics.


The experts who would serve as fellows for the federal center of excellence would be individuals with expertise and experience studying the impacts of AI technologies and designing interventions to support more responsible AI development, procurement, and use. Given the interdisciplinary nature of the expertise required for the role, individuals should have an applied, socio-technical background in responsible AI practices, ideally (but not necessarily) for the public sector. Fellows would be expected to have the skills needed to share emerging responsible AI practices, strategies, and tacit knowledge with public sector employees developing or procuring AI technologies. This covers a broad range of potential backgrounds.

What are some examples of the skills or competencies fellows might bring to the Center?

For example, a professor in academia who studies how to develop public sector AI systems that are more fair and aligned with community needs may be a good fit. A socio-technical researcher in civil society with direct experience studying or developing new tools to support more responsible AI development, who has intuition over which tools and practices may be more or less effective, may also be a good candidate. A data officer in a state government agency who has direct experience procuring and governing AI technologies in their department, with an ability to readily anticipate AI-related challenges other agencies may face, may also be a good fit. The cohort of fellows should include a balanced mix of individuals coming from government, academia, and civil society.