Creating an AI Testbed for Government
Summary
The United States should establish a testbed for government-procured artificial intelligence (AI) models used to provide services to U.S. citizens. At present, the United States lacks a uniform method or infrastructure to ensure that AI systems are secure and robust. Creating a standardized testing and evaluation scheme for every type of model and all its use cases is an extremely challenging goal. Consequently, unanticipated ill effects of AI models deployed in real-world applications have proliferated, from radicalization on social media platforms to discrimination in the criminal justice system. Increased interest in integrating emerging technologies into U.S. government processes raises additional concerns about the robustness and security of AI systems.
Establishing a designated federal AI testbed is an important step toward alleviating these concerns. Such a testbed would help AI researchers and developers construct better testing methods and ultimately build safer, more reliable AI models. Without this capacity, U.S. agencies risk perpetuating existing structural inequities and building new government systems atop insecure AI — outcomes that could harm millions of Americans while undermining the missions that federal agencies are entrusted to pursue.