Creating an AI Testbed for Government
Summary
The United States should establish a testbed for government-procured artificial intelligence (AI) models used to provide services to U.S. citizens. At present, the United States lacks a uniform method or infrastructure for ensuring that AI systems are secure and robust. Designing a standardized testing and evaluation scheme that covers every type of model and every use case is extremely challenging. As a result, unanticipated ill effects of AI models deployed in real-world applications have proliferated, from radicalization on social media platforms to discrimination in the criminal justice system. Growing interest in integrating emerging technologies into U.S. government processes raises additional concerns about the robustness and security of AI systems.
Establishing a designated federal AI testbed is an important step toward alleviating these concerns. Such a testbed would help AI researchers and developers better understand how to construct testing methods and, ultimately, how to build safer, more reliable AI models. Without this capacity, U.S. agencies risk perpetuating existing structural inequities and building new government services on insecure AI, outcomes that could harm millions of Americans while undermining the missions that federal agencies are entrusted to pursue.