Improved detection could strengthen deterrence, but only if accompanying hazards—automation bias, model hallucinations, exploitable software vulnerabilities, and the risk of eroding assured second‑strike capability—are well managed.
A dedicated, properly resourced national entity is essential to support the development of safe, secure, and trustworthy AI and drive widespread adoption by providing sustained, independent technical assessments and emergency coordination.
Congress should establish a new grant program, coordinated by the Cybersecurity and Infrastructure Security Agency, to assist state and local governments in addressing AI challenges.
If AI systems are not consistently reliable and secure, adoption may stall, especially in high-stakes settings, potentially compromising American AI leadership.
To secure global adoption of U.S. AI technology and ensure America’s workforce can fully leverage advanced AI, the federal government should take a strategic and coordinated approach to support AI assurance and security R&D.
The stakes are high: how we manage this convergence will influence not only the pace of technological innovation but also the equity and sustainability of our energy future.
To sustain America’s leadership in AI innovation, accelerate adoption across the economy, and guarantee that AI systems remain secure and trustworthy, we offer a set of policy recommendations.
Current scientific understanding shows that the so-called “anonymization” methods widely used in the past are inadequate for protecting privacy in the era of big data and artificial intelligence.
Dr. Lim will help develop, organize, and implement FAS’s growing work on catastrophic risk, including the core areas of nuclear weapons, AI and national security, space, and other emerging technologies.
To fully harness the benefits of AI, the public must have confidence that these systems are deployed responsibly and enhance their lives and livelihoods.
As new waves of AI technologies continue to enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence will help maintain high standards for responsible public sector AI for decades to come.
By creating a reliable, user-friendly framework for surfacing provenance, NIST would empower readers to better discern the trustworthiness of the text they encounter, thereby helping to counteract the risks posed by deceptive AI-generated content.