The United States should establish a testbed for government-procured artificial intelligence (AI) models used to provide services to U.S. citizens. At present, the United States lacks a uniform method or infrastructure to ensure that AI systems are secure and robust. Creating a standardized testing and evaluation scheme for every type of model and all its use cases is an extremely challenging goal. Consequently, unanticipated ill effects of AI models deployed in real-world applications have proliferated, from radicalization on social media platforms to discrimination in the criminal justice system. Increased interest in integrating emerging technologies into U.S. government processes raises additional concerns about the robustness and security of AI systems.
Establishing a designated federal AI testbed is an important part of alleviating these concerns. Such a testbed will help AI researchers and developers better understand how to construct testing methods and ultimately build safer, more reliable AI models. Without this capacity, U.S. agencies risk perpetuating existing structural inequities as well as creating new government systems based on insecure AI systems — both outcomes that could harm millions of Americans while undermining the missions that federal agencies are entrusted to pursue.
Despite their importance, programs focused on AI trustworthiness account for only a small fraction of the total funding the National Science Foundation allocates to AI R&D.