
A National Center for Advanced AI Reliability and Security
While AI’s transformative advances have enormous positive potential, leading scientists and industry executives are also sounding the alarm about catastrophic risks on a global scale. If left unmanaged, these risks could undermine our ability to reap the benefits of AI progress. Although the U.S. government has made some progress, including by establishing the Center for AI Standards and Innovation (CAISI), formerly the U.S. AI Safety Institute, current government capacity is insufficient to respond to these extreme frontier AI threats. To address this problem, this memo proposes scaling up a significantly enhanced “CAISI+” within the Department of Commerce. CAISI+ would require dedicated high-security compute facilities, specialized talent, and an estimated annual operating budget of $67-155 million, with a setup cost of $155-275 million. CAISI+ would have expanded capacity to conduct advanced model evaluations for catastrophic risks, provide direct emergency assessments to the President and National Security Council (NSC), and drive critical AI reliability and security research, ensuring America is prepared to lead on AI and safeguard its national interests.
Challenge and Opportunity
Frontier AI is advancing rapidly toward powerful general-purpose capabilities. While this progress has produced widely useful products, it is also generating significant security risks. Recent evaluations of Anthropic’s Claude Opus 4 model were unable to rule out the risk that the model could help novice actors produce bioweapons, triggering additional safeguards. Meanwhile, the FBI warns that AI “increases cyber-attack speed, scale, and automation,” with a 442% increase in AI-enhanced voice phishing attacks in 2024 and recent evaluations showing AI models rapidly gaining offensive cyber capabilities.
AI company CEOs and leading researchers have predicted that this progress will continue, with potentially transformative AI capabilities arriving in the next few years, and that fast progress in AI capabilities will continue to generate novel threats greater than those posed by existing models. As AI systems are predicted to become increasingly capable of performing complex tasks and taking extended autonomous actions, researchers warn of additional risks, such as loss of human control, AI-enabled WMD proliferation, and strategic surprise with severe national security implications. While timelines to AI systems surpassing dangerous capability thresholds are uncertain, this proposal lays out a U.S. government response that is robust to a range of possible timelines while taking the above trends seriously.
Current U.S. government capabilities, including the existing Center for AI Standards and Innovation (CAISI), are not adequately resourced or empowered to independently evaluate, monitor, or respond to the most advanced AI threats. For example, current CAISI funding is precarious, the offices of its home institution, the National Institute of Standards and Technology (NIST), are reportedly “crumbling,” and its budget is roughly one-tenth that of its counterpart in the UK. Despite this underinvestment, CAISI has consistently produced rigorous model evaluations and, in doing so, has earned strong credibility with industry and government stakeholders. That credibility extends to legislators: bipartisan legislation has been introduced in both chambers of Congress to authorize CAISI in statute, and just last month the House China Committee released a letter noting that CAISI has a role to play in “understanding, predicting, and preparing for” national security risks from AI development in the PRC.
A dedicated and properly resourced national entity is essential for supporting the development of safe, secure, and trustworthy AI to drive widespread adoption, by providing sustained, independent technical assessments and emergency coordination—roles that ad-hoc industry consultations or self-reporting cannot fulfill for paramount matters of national security and public safety.
Establishing CAISI+ now is a critical opportunity to proactively manage these profound risks, ensure American leadership in AI, and prevent strategic disadvantage as global AI capabilities advance. While full operational capacity may not be needed immediately, certain infrastructure, such as highly secure computing, has significant lead times, demanding foresight and preparatory action. This blueprint offers a scalable framework to build these essential national capabilities, safeguarding our future against AI-related catastrophic events and enabling the U.S. to shape the trajectory of this transformative technology.
Plan of Action
To effectively address extreme AI risks, develop more trustworthy AI systems, and secure U.S. interests, the Administration and Congress should collaborate to establish and resource a world-class national entity to inform the federal response to the trends described above.
Recommendation 1. Establish CAISI+ to Lead National AI Safety and Coordinate Crisis Response.
CAISI+, evolving from the current CAISI within NIST under the Department of Commerce, must have a clear mandate focused on large-scale AI risks. Core functions include:
- Advanced Model Evaluation: Developing and operating state-of-the-art platforms to test frontier AI models for dangerous capabilities, adversarial behavior or goals (such as deception or power-seeking), and potential weaponization. While the level of risk presented by current models is very uncertain, even those who are skeptical of particular risk models are often supportive of developing better evaluations.
- Emergency Assessment & Response: Providing rapid, expert risk assessments and warnings directly to the President and the National Security Council (NSC) in the event of severe AI-driven national security threats. The CAISI+ Director should be statutorily designated as the Principal Advisor on AI Risks to the President and NSC, with authority to:
  - Submit AI threat assessments to the President’s Daily Brief (PDB) when intelligence indicates imminent or critical risks
  - Convene emergency sessions of the NSC Deputies Committee or Principals Committee for time-sensitive AI security threats
  - Maintain direct communication channels to the National Security Advisor for immediate threat notification
  - Issue “Critical AI Threat Warnings” through established NSC emergency communication protocols, similar to those used for terrorism or WMD threats
- Foundational AI Reliability and Security Research: Driving and funding research into core AI alignment, control, and security challenges to maintain U.S. technological leadership while developing trustworthy AI systems. This research would benefit both the public and industry by enabling broader adoption of reliable AI tools and by preventing catastrophic incidents that could devastate the AI sector, much as the Three Mile Island accident set back nuclear energy development. Following the model of NIST’s successful encryption standards, establishing rigorous AI safety benchmarks and protocols will create industry-wide confidence while ensuring American competitiveness.
Governance will feature clear interagency coordination (e.g., with the Department of Defense, Department of Energy, Department of Homeland Security, and other relevant bodies in the intelligence community) and an internal structure with distinct directorates for evaluations, emergency response, and research, coordinated by CAISI+ leadership.
Recommendation 2. Equip CAISI+ with Elite American Talent and Sustained Funding.
CAISI+’s efficacy hinges on world-class personnel and reliable funding to execute its mission. This necessitates:
- Exceptional American Talent: Special hiring authorities (e.g., direct hire, excepted service) and competitive compensation are paramount to attract and retain leading U.S. AI researchers, evaluators, and security experts, ensuring our AI standards reflect American values.
- Significant, Sustained Funding: Initial mainline estimates (see “Funding estimates for CAISI+” below) suggest $155-275 million for setup and an annual operating budget of $67-155 million at the recommended implementation level, sourced via new appropriations, to ensure America develops strong domestic capacity for defending against AI-powered threats. If funding is not appropriated, or if appropriations fall short, additional support could be sourced via a NIST Foundation.
Funding estimates for CAISI+
Implementation Considerations
- Phased approach: The facility could be developed in stages, prioritizing core evaluation capabilities before expanding to full emergency response capacity.
- Leverage existing assets: Initial operations could utilize existing DOE relationships rather than immediately building dedicated infrastructure.
- Partnership model: Some costs could be offset through public-private partnerships with technology companies and research institutions.
- Talent acquisition strategy: Use of special hiring authorities (direct hire, excepted service) and competitive compensation (SL/ST pay scales, retention bonuses) may help compete with private sector AI companies.
- Sustainable funding: For stability, a multi-year Congressional appropriation with dedicated line-item funding would be crucial.
Staffing Breakdown by Function
- Technical Research (40-60% of staff): AI evaluations, safety research, alignment, interpretability research
- Security Operations (25-35% of staff): Red-teaming, misuse assessment, weaponization evaluation, security management
- Policy & Strategy (10-15% of staff): Leadership, risk assessment, interagency coordination, international liaisons
- Support Functions (15-20% of staff): Legal, procurement, compute infrastructure management, administration
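Because the shares above are overlapping ranges, the following sketch is a rough, hypothetical illustration of the headcounts they imply, assuming the 80-150 staff range cited in the phased implementation plan later in this memo:

```python
# Hypothetical illustration: headcounts implied by the staffing shares
# above for total staff of 80 or 150 (the range cited in the phased
# implementation plan). Shares overlap, so figures are indicative only.
SHARES = {
    "Technical Research": (0.40, 0.60),
    "Security Operations": (0.25, 0.35),
    "Policy & Strategy": (0.10, 0.15),
    "Support Functions": (0.15, 0.20),
}

for total in (80, 150):
    print(f"Total staff: {total}")
    for function, (lo, hi) in SHARES.items():
        print(f"  {function}: {round(total * lo)}-{round(total * hi)}")
```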
For context, current funding levels include:
- Current CAISI funding (mid-2025): $10 million annually
- UK AISI (CAISI counterpart) initial funding: £100 million (~$125 million)
- Oak Ridge Leadership Computing Facility operations: ~$200-300 million annually
- Standard DOE supercomputing facility construction: $400-600 million
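As a back-of-the-envelope check, using only the memo’s own estimates, the proposed operating budget can be compared against these reference figures:

```python
# Quick comparison of the proposed CAISI+ annual operating budget
# against the reference figures listed above, in $M. All values are
# the memo's estimates, not audited figures.
CURRENT_CAISI_ANNUAL = 10        # mid-2025
UK_AISI_INITIAL = 125            # ~£100 million
PROPOSED_LO, PROPOSED_HI = 67, 155

print(f"vs. current CAISI: {PROPOSED_LO / CURRENT_CAISI_ANNUAL:.1f}x-"
      f"{PROPOSED_HI / CURRENT_CAISI_ANNUAL:.1f}x")
print(f"vs. UK AISI initial funding: {PROPOSED_LO / UK_AISI_INITIAL:.2f}x-"
      f"{PROPOSED_HI / UK_AISI_INITIAL:.2f}x")
```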
Even the minimal implementation would require substantially greater resources than the current CAISI receives, but it remains well within the scale of other national-priority technology initiatives. The recommended implementation level would position CAISI+ to effectively fulfill its expanded mission of frontier AI evaluation, monitoring, and emergency response.
Funding Longevity
- Initial authorization: 5-year authorization with specific milestones and metrics
- Review mechanism: Independent assessment by the Government Accountability Office at the 3-year mark to evaluate effectiveness and adjust scope and resources, supplemented by a National Academies study specifically tasked with evaluating the scientific and technical rigor of CAISI+.
- Long-term vision: Transition to permanent authorization for core functions with periodic reauthorization of specific initiatives
- Accountability: Annual reporting to Congress on key performance metrics and risk assessments
Recommendation 3. Equip CAISI+ with Essential Secure Compute Infrastructure.
CAISI+ must be able to access secure compute in order to run certain evaluations involving proprietary models and national security data. This cluster can remain relatively modest in scale. Other researchers have hypothesized that a “Trusted AI Verification and Evaluation Cluster” for verifying and evaluating frontier AI development would need only 128 to 512 state-of-the-art graphics processing units (GPUs), orders of magnitude smaller than the compute used for frontier training runs, such as the 16,000 H100 GPUs used to train the recent Llama 3.1 405B model or xAI’s 200,000-GPU Colossus cluster.
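As a rough sanity check on that claim, the sketch below compares a hypothesized 128-512 GPU evaluation cluster against the two training clusters cited above; the figures are the memo’s cited examples, not measurements:

```python
import math

# Back-of-the-envelope scale comparison: hypothesized evaluation
# cluster (128-512 GPUs) vs. the training clusters named above.
EVAL_CLUSTER_GPUS = (128, 512)
TRAINING_CLUSTERS = {
    "Llama 3.1 405B training run": 16_000,
    "xAI Colossus": 200_000,
}

for name, gpus in TRAINING_CLUSTERS.items():
    ratios = sorted(gpus / n for n in EVAL_CLUSTER_GPUS)
    print(f"{name}: {ratios[0]:,.0f}x to {ratios[1]:,.0f}x larger "
          f"(up to ~{math.log10(ratios[1]):.1f} orders of magnitude)")
```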
However, the cluster will need to be highly secure; in other words, able to defend against attacks from nation-state adversaries. Certain evaluations will require full access to the internal “weights” of AI models, which requires hosting the models. Model hosting introduces the risk of model theft and the proliferation of dangerous capabilities. Some evaluations will also involve very sensitive data, such as nuclear weapons design evaluations, creating an additional incentive for cyberattacks. Researchers at Gladstone AI, a national security-focused AI policy consulting firm, write that within several years, powerful AI systems may confer significant strategic advantages to nation-states and will therefore be top-priority targets for theft or sabotage by adversary nation-states. They also note that neither existing datacenters nor AI labs are secure enough to prevent this theft, necessitating novel research and buildout to reach the required security level, outlined as “Security Level-5” (SL-5) in RAND’s Playbook for Securing AI Model Weights.
Therefore, we suggest a hybrid strategy for specialized secure compute: a highly secure, air-gapped SL-5 core facility for sensitive model analysis (a long-lead item requiring immediate planning), supplemented by a secondary pool of compute, accessed through a formal partnership with DOE national labs, for less sensitive evaluations. CAISI+ may also want to coordinate with the NITRD National Strategic Computing Reserve Pilot Program to explore needs for AI-crisis-related surge computing capability.
If a sufficiently secure compute cluster is infeasible or not developed in time, CAISI+ will ultimately be unable to host model internals without introducing unacceptable risks of model theft, severely limiting its ability to evaluate frontier AI systems.
Recommendation 4. Explore Granting Critical Authorities.
While current legal authorities may suffice for CAISI+’s core missions, evolving AI threats could require additional tools. The White House (specifically the Office of Science and Technology Policy [OSTP], in collaboration with the Office of Management and Budget [OMB]) should analyze existing federal powers, such as the Defense Production Act and the International Emergency Economic Powers Act, to identify gaps in AI threat response capabilities, including potential needs for an incident reporting system with related subpoena authorities (similar to the function of the National Transportation Safety Board), for model access for safety evaluations, or for compute oversight authorities. Based on this analysis, the executive branch should report to Congress where new statutory authorities may be necessary, with defined risk criteria and appropriate safeguards.
Recommendation 5. Implement CAISI+ Enhancements Through an Urgent, Phased Approach.
Building on CAISI’s existing foundation within NIST/DoC, the Administration should enhance its capabilities to address AI risks that extend beyond current voluntary evaluation frameworks. Given expert warnings that transformative AI could emerge within the current Administration’s term, immediate action is essential to augment CAISI’s capacity to handle extreme scenarios. To achieve full operational capacity by early 2027, initial-phase activities must begin now due to long infrastructure lead times:
Immediate Enhancements (0-6 months):
- Leverage NIST’s existing relationships with DOE labs to secure interim access to classified computing facilities for sensitive evaluations
- Initiate the security research and procurement process for the SL-5 compute facility outlined in Recommendation 3
- Work with OMB and Department of Commerce leadership to secure initial funding through reprogramming or supplemental appropriations
- Build on CAISI’s current voluntary agreements to develop protocols for emergency model access and crisis response
- Begin the OSTP-led analysis of existing federal authorities (per Recommendation 4) to identify potential gaps in AI threat response capabilities
Subsequent phases will extend CAISI’s current work through:
- Foundation-building activities (6-12 months): Implementing the special hiring authorities described in Recommendation 2, formalizing enhanced interagency MOUs to support coordination described in Recommendation 1, and establishing the direct NSC reporting channels for the CAISI+ Director as Principal Advisor on AI Risks.
- Capability expansion (12-18 months): Beginning construction of the SL-5 facility, operationalizing the three core functions (Advanced Model Evaluation, Emergency Assessment & Response, and Foundational AI Reliability Research), and recruiting the 80-150 technical staff outlined in the funding breakdown.
- Full enhanced capacity (18+ months): Achieving the operational capabilities described in Recommendation 1, including mature evaluation platforms, direct Presidential/NSC threat warning protocols, and comprehensive research programs.
Conclusion
Enhancing and empowering CAISI+ is a strategic investment in U.S. national security whose cost is far outweighed by the potential costs of inaction. With an estimated annual operating budget of $67-155 million, CAISI+ would provide essential technical capabilities to evaluate and respond to the most serious AI risks, ensuring the U.S. leads in developing and governing AI safely and securely, irrespective of where advanced capabilities emerge. While timelines to AI systems surpassing dangerous capability thresholds are uncertain, by acting now to establish the necessary infrastructure, expertise, and authorities, the Administration can safeguard American interests and our technological future across a broad range of possible scenarios.
This memo was written by an AI Safety Policy Entrepreneurship Fellow over the course of a six-month, part-time program that supports individuals in advancing their policy ideas into practice. You can read more policy memos and learn about Policy Entrepreneurship Fellows here.