Three Artificial Intelligence Bills Endorsed by Federation of American Scientists Advance from the House Committee

Proposed bills advance research ecosystems, economic development, and education access, and now move to the U.S. House of Representatives for a vote

Washington, D.C. – September 12, 2024 – Three proposed artificial intelligence bills endorsed by the Federation of American Scientists (FAS), a nonpartisan science think tank, advanced out of a House Science, Space, and Technology Committee markup held on September 11, 2024. These bills received bipartisan support and will now be reported to the full chamber. The three bills are: H.R. 9403, the Expanding AI Voices Act, co-sponsored by Rep. Vince Fong (CA-20) and Rep. Andrea Salinas (OR-06); H.R. 9197, the Small Business AI Act, co-sponsored by Rep. Mike Collins (GA-10) and Rep. Haley Stevens (MI-11); and the NSF AI Education Act, co-sponsored by Rep. Valerie Foushee (NC-04) and Rep. Frank Lucas (OK-03).

“FAS endorsed these bills based on the evaluation of their strengths. Among these are the development of infrastructure to develop AI safely and responsibly; the deployment of resources to ensure development benefits more equitably across our economy; and investment in the talent pool necessary for this consequential, emerging technology,” says Dan Correa, CEO of FAS.

“These three bills pave a vision for the equitable and safe use of AI in the U.S. Both the Expanding AI Voices Act and the NSF AI Education Act will create opportunities for underrepresented voices to have a say in how AI is developed and deployed. Additionally, the Small Business AI Act will ensure that an important sector of our society feels empowered to use AI safely and securely,” says Clara Langevin, FAS AI Policy Specialist. 

Expanding AI Voices Act

The Expanding AI Voices Act will support a broad and diverse interdisciplinary research community for the advancement of artificial intelligence and AI-powered innovation through partnerships and capacity building at certain institutions of higher education to expand AI capacity in populations historically underrepresented in STEM.

Specifically, the Expanding AI Voices Act of 2024 will:

Small Business AI Act

Emerging science is central to new and established small businesses, across industries and around the country. This bill will require the Director of the National Institute of Standards and Technology (NIST) to develop resources to help small businesses utilize artificial intelligence, and for other purposes.

National Science Foundation Artificial Intelligence Education Act of 2024 (NSF AI Education Act)


The NSF AI Education Act will bolster educational skills in AI through new learning initiatives and workforce training programs, building on the National Artificial Intelligence Initiative Act of 2020 (15 U.S.C. 9451). Specifically, the bill will:

###

ABOUT FAS

The Federation of American Scientists (FAS) works to advance progress on a broad suite of contemporary issues where science, technology, and innovation policy can deliver dramatic progress, and seeks to ensure that scientific and technical expertise has a seat at the policymaking table. Established in 1945 by scientists in response to the atomic bomb, FAS continues to work on behalf of a safer, more equitable, and more peaceful world. More information at fas.org.

Public Comment on the U.S. Artificial Intelligence Safety Institute’s Draft Document: NIST AI 800-1, Managing Misuse Risk for Dual-Use Foundation Models

Public comments serve the executive branch by informing more effective, efficient program design and regulation. As part of our commitment to evidence-based, science-backed policy, FAS staff leverage public comment opportunities to embed science, technology, and innovation into policy decision-making.

The Federation of American Scientists (FAS) is a non-partisan organization dedicated to using science and technology to benefit humanity through equitable and impactful policy. With a strong track record in AI governance, FAS has actively contributed to the development of AI standards and frameworks, including providing feedback on NIST AI 600-1, the Generative AI Profile. Our work spans advocating for federal AI testbeds, recommending policy measures for frontier AI developers, and evaluating industry adoption of the NIST AI Risk Management Framework. We are members of the U.S. AI Safety Institute Research Consortium, and we responded to NIST’s request for information earlier this year concerning its responsibilities under sections 4.1, 4.5, and 11 of the AI Executive Order.

We commend NIST’s U.S. Artificial Intelligence Safety Institute for developing the draft guidance on “Managing Misuse Risk for Dual-Use Foundation Models.” This document represents a significant step toward establishing robust practices for mitigating catastrophic risks associated with advanced AI systems. The guidance’s emphasis on comprehensive risk assessment, transparent decision-making, and proactive safeguards aligns with FAS’s vision for responsible AI development.

In our response, we highlight several strengths of the guidance, including its focus on anticipatory risk assessment and the importance of clear documentation. We also identify areas for improvement, such as the need for harmonized language and more detailed guidance on model development safeguards. Our key suggestions include recommending a more holistic socio-technical approach to risk evaluation, strengthening language around halting development for unmanageable risks, and expanding the range of considered safeguards. We believe these adjustments will further strengthen NIST’s crucial role in shaping responsible AI development practices.

Background and Context

The rapid advancement of AI foundation models has spurred novel industry-led risk mitigation strategies. Leading AI companies have voluntarily adopted frameworks like Responsible Scaling Policies and Preparedness Frameworks, outlining risk thresholds and mitigation strategies for increasingly capable AI systems. (Our response to NIST’s February RFI was largely an exploration of these policies, their benefits and drawbacks, and how they could be strengthened.)

Managing misuse risks in foundation models is of paramount importance given their broad applicability and potential for dual use. As these models become more powerful, they may inadvertently enable malicious actors to cause significant harm, including facilitating the development of weapons, enabling sophisticated cyber attacks, or generating harmful content. The challenge lies not only in identifying current risks but also in anticipating future threats that may emerge as AI capabilities expand.

NIST’s new guidance on “Managing Misuse Risk for Dual-Use Foundation Models” builds upon these industry initiatives, providing a more standardized and comprehensive approach to risk management. By focusing on objectives such as anticipating potential misuse, establishing clear risk thresholds, and implementing robust evaluation procedures, the guidance creates a framework that can be applied across the AI development ecosystem. This approach is crucial for ensuring that as AI technology advances, appropriate safeguards are in place to protect against potential misuse while still fostering innovation.

Strengths of the guidance

1. Comprehensive Documentation and Transparency

The guidance’s emphasis on thorough documentation and transparency represents a significant advancement in AI risk management. For every practice under every objective, the guidance indicates appropriate documentation; this approach is more thorough in advancing transparency than any comparable guidance to date. The creation of a paper trail for decision-making and risk evaluation is crucial for both internal governance and potential external audits.

The push for transparency extends to collaboration with external stakeholders. For instance, practice 6.4 recommends providing “safe harbors for third-party safety research,” including publishing “a clear vulnerability disclosure policy for model safety issues.” This openness to external scrutiny and feedback is essential for building trust and fostering collaborative problem-solving in AI safety. (FAS has published a legislative proposal calling for enshrining “safe harbor” protections for AI researchers into law.)

2. Lifecycle Approach to Risk Management

The guidance excels in its holistic approach to risk management, covering the entire lifecycle of foundation models from pre-development assessment through to post-deployment monitoring. This comprehensive approach is evident in the structure of the document itself, which follows a logical progression from anticipating risks (Objective 1) through to responding to misuse after deployment (Objective 6).

The guidance demonstrates a proactive stance by recommending risk assessment before model development. Practice 1.3 suggests that developers “Estimate the model’s capabilities of concern before it is developed…”, which helps anticipate and mitigate potential harms before they materialize. The framework for red team evaluations (Practice 4.2) is particularly robust, recommending independent external experts and suggesting ways to compensate for gaps between red teams and real threat actors. The guidance also emphasizes the importance of ongoing risk assessment. Practice 3.2 recommends that organizations “Periodically revisit estimates of misuse risk stemming from model theft…” This acknowledgment of the dynamic nature of AI risks encourages continuous vigilance.

3. Strong Stance on Model Security and Risk Tolerance

The guidance takes a firm stance on model security and risk tolerance, particularly in Objective 3. It unequivocally states that models relying on confidentiality for misuse risk management should only be developed when theft risk is sufficiently mitigated. This emphasizes the critical importance of security in AI development, including considerations for insider threats (Practice 3.1).

The guidance also demonstrates a realistic approach to the challenges posed by different deployment strategies. In Practice 5.1, it notes, “For example, allowing fine-tuning via API can significantly limit options to prevent jailbreaking and sharing the model’s weights can significantly limit options to monitor for misuse (Practice 6.1) and respond to instances of misuse (Practice 6.2).” This candid discussion of the limitations of safety interventions for open weight foundation models is crucial for fostering realistic risk assessments.

Additionally, the guidance promotes a conservative approach to risk management. Practice 5.3 recommends that developers “Consider leaving a margin of safety between the estimated level of risk at the point of deployment and the organization’s risk tolerance.” It further suggests considering “a larger margin of safety to manage risks that are more severe or less certain.” This approach provides an extra layer of protection against unforeseen risks or rapid capability advancements, which is crucial given the uncertainties inherent in AI development.
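To illustrate how this margin-of-safety recommendation might be operationalized, consider the following minimal sketch (our own illustration, not part of the guidance; the margin values and inputs are hypothetical placeholders):

```python
def within_risk_tolerance(estimated_risk: float, risk_tolerance: float,
                          severity: str = "low", uncertainty: str = "low") -> bool:
    """Check whether estimated risk plus a margin of safety stays within tolerance.

    Margin values are illustrative placeholders; an organization would calibrate
    them to its own risk tolerance and to the uncertainty of its estimates.
    """
    margin = 0.05
    if severity == "high":
        margin += 0.05   # larger margin for more severe risks
    if uncertainty == "high":
        margin += 0.05   # larger margin for less certain estimates
    return estimated_risk + margin <= risk_tolerance

# Example: estimated risk 0.10, tolerance 0.25, severe but well-characterized risk
print(within_risk_tolerance(0.10, 0.25, severity="high", uncertainty="low"))  # True
```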

These elements collectively demonstrate NIST’s commitment to promoting realistic and robust risk management practices that prioritize safety and security in AI development and deployment. However, while the NIST guidance demonstrates several important strengths, there are areas where it could be further improved to enhance its effectiveness in managing misuse risks for dual-use foundation models.

Areas for improvement

1. Need for a More Comprehensive Socio-technical Approach to Measuring Misuse Risk

Objective 4 of the guidance demonstrates a commendable effort to incorporate elements of a socio-technical approach in measuring misuse risk. The guidance recognizes the importance of considering both technical and social factors, emphasizes the use of red teams to assess potential misuse scenarios, and acknowledges the need to consider different levels of access and various threat actors. Furthermore, it highlights the importance of avoiding harm during the measurement process, which is crucial in a socio-technical framework.

However, the guidance falls short in fully embracing a comprehensive socio-technical perspective. While it touches on the importance of external experts, it does not sufficiently emphasize the value of diverse perspectives, particularly from individuals with lived experiences relevant to specific risk scenarios. The guidance also lacks a structured approach to exploring the full range of potential misuse scenarios across different contexts and risk areas. Finally, the guidance does not mention measuring absolute versus marginal risks (i.e., how much total misuse risk a model poses in a specific context versus how much marginal risk it poses compared to existing tools). These gaps limit the effectiveness of the proposed risk measurement approach in capturing the full complexity of AI system interactions with human users and broader societal contexts.

Specific recommendations for improving socio-technical approach

The NIST guidance in Practice 1.3 suggests estimating model capabilities by comparison to existing models, but provides little direction on how to conduct these comparisons effectively. To improve this, NIST could incorporate the concept of “available affordances.” This concept emphasizes that an AI system’s risk profile depends not just on its absolute capabilities, but also on the environmental resources and opportunities for affecting the world that are available to it.

Additionally, Kapoor et al. (2024) emphasize the importance of assessing the marginal risk of open foundation models compared to existing technologies or closed models. This approach aligns with a comprehensive socio-technical perspective by considering not just the absolute capabilities of AI systems, but also how they interact with existing technological and social contexts. For instance, when evaluating cybersecurity risks, they suggest considering both the potential for open models to automate vulnerability detection and the existing landscape of cybersecurity tools and practices. This marginal risk framework helps to contextualize the impact of open foundation models within broader socio-technical systems, providing a more nuanced understanding of their potential benefits and risks. 
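To make the distinction concrete, the arithmetic below contrasts absolute and marginal misuse risk (the probabilities are invented for illustration and are not drawn from Kapoor et al.):

```python
# Hypothetical illustration of absolute vs. marginal misuse risk.
# Real estimates would come from structured evaluations and threat modeling.

p_harm_with_model = 0.012  # estimated probability of a harmful outcome when the model is available
p_harm_baseline = 0.010    # estimated probability using only existing tools (web search, textbooks, etc.)

absolute_risk = p_harm_with_model                    # total misuse risk the model poses in this context
marginal_risk = p_harm_with_model - p_harm_baseline  # additional risk attributable to the model

print(f"Absolute risk: {absolute_risk:.3f}")  # 0.012
print(f"Marginal risk: {marginal_risk:.3f}")  # 0.002: modest uplift over the existing baseline
```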

NIST could recommend that organizations assess both the absolute capabilities of their AI systems and the affordances available to them in potential deployment contexts. This approach would provide a more comprehensive view of potential risks than simply comparing models in isolation. For instance, the guidance could suggest evaluating how a system’s capabilities might change when given access to different interfaces, actuators, or information sources.

Similarly, Weidinger et al. (2023) argue that while quantitative benchmarks are important, they are insufficient for comprehensive safety evaluation. They suggest complementing quantitative measures with qualitative assessments, particularly at the human interaction and systemic impact layers. NIST could enhance its guidance by providing more specific recommendations for integrating qualitative evaluation methods alongside quantitative benchmarks.

NIST should acknowledge potential implementation challenges with a comprehensive socio-technical approach. Organizations may struggle to create benchmarks that accurately reflect real-world misuse scenarios, particularly given the rapid evolution of AI capabilities and threat landscapes. Maintaining up-to-date benchmarks in a fast-paced field presents another ongoing challenge. Additionally, organizations may face difficulties in translating quantitative assessments into actionable risk management strategies, especially when dealing with novel or complex risks. NIST could enhance the guidance by providing strategies for navigating these challenges, such as suggesting collaborative industry efforts for benchmark development or offering frameworks for scalable testing approaches.

OpenAI’s approach of using human participants to evaluate AI capabilities provides both a useful model for more comprehensive evaluation and an example of quantification challenges. While their evaluation attempted to quantify biological risk increase from AI access, they found that, as they put it, “Translating quantitative results into a meaningfully calibrated threshold for risk turns out to be difficult.” This underscores the need for more research on how to set meaningful thresholds and interpret quantitative results in the context of AI safety.

2. Inconsistencies in Risk Management Language

There are instances where the guidance uses varying levels of strength in its recommendations, particularly regarding when to halt or adjust development. For example, Practice 2.2 recommends that developers “Plan to adjust deployment or development strategies if misuse risks rise to unacceptable levels,” while Practice 3.2 uses stronger language, advising them to “Adjust or halt further development until the risk of model theft is adequately managed.” This variation in language could lead to confusion and potentially weaker implementation of risk management strategies.

Furthermore, while the guidance emphasizes the importance of managing risks before deployment, it does not provide clear criteria for what constitutes “adequately managed” risk, particularly in the context of development rather than deployment. More consistent and specific language around these critical decision points would strengthen the guidance’s effectiveness in promoting responsible AI development.

Specific recommendations for strengthening language on halting development for unmanageable risks

To address the inconsistencies noted above, we suggest the following changes:

1. Standardize the language across the document to consistently use strong phrasing such as “Adjust or halt further development” when discussing responses to unacceptable levels of risk. 

The current guidance uses varying levels of strength in its recommendations regarding development adjustments. For instance, Recommendation 4 of Practice 2.2 uses the phrase “Plan to adjust deployment or development strategies,” while Recommendation 3 of Practice 3.2 more strongly advises developers to “Adjust or halt further development.” Consistent language would emphasize the critical nature of these decisions and reduce potential confusion or weak implementation of risk management strategies. This could be accomplished by changing the language of Practice 2.2, Recommendation 4 to “Plan to adjust or halt further development or deployment if misuse risks rise to unacceptable levels before adequate security and safeguards are available to manage risk.”

The need for stronger language regarding halting development is reflected both in NIST’s other work and in commitments that many frontier AI developers have publicly agreed to. For instance, the NIST AI Risk Management Framework, section 1.2.3 (Risk Prioritization), suggests: “In some cases where an AI system presents the highest risk – where negative impacts are imminent, severe harms are actually occurring, or catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently mitigated.” Further, the AI Seoul Summit frontier AI safety commitments explicitly state that organizations should “set out explicit processes they intend to follow if their model or system poses risks that meet or exceed the pre-defined thresholds.” Importantly, these commitments go on to specify that “In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds.” 

2. Add to the list of transparency documentation for Practice 2.2 the following: “A decision-making framework for determining when risks have become truly unmanageable, considering factors like the severity of potential harm, the likelihood of the risk materializing, and the feasibility of mitigation strategies.”

While the current guidance emphasizes the importance of managing risks before deployment (e.g., in Practice 5.3), it does not provide clear criteria for what constitutes “adequately managed” risk, particularly in the context of development rather than deployment. A decision-making framework would provide clearer guidance on when to take the serious step of halting development. This addition would help prevent situations where development continues despite unacceptable risks due to a lack of clear stopping criteria. This recommendation aligns with the approach suggested by Alaga and Schuett (2023) in their paper on coordinated pausing, where they emphasize the need for clear thresholds and decision criteria to determine when AI development should be halted due to unacceptable risks. 
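A decision-making framework of this kind could be quite simple. The sketch below is our own hypothetical illustration (the thresholds and factor names are placeholders, not drawn from the NIST guidance or from Alaga and Schuett) of how severity, likelihood, and mitigation feasibility might be combined into a halt-or-continue decision:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    severity: int              # 1 (minor) to 5 (catastrophic)
    likelihood: float          # estimated probability of the risk materializing
    mitigation_feasible: bool  # can known safeguards bring the risk within tolerance?

def development_decision(a: RiskAssessment) -> str:
    """Hypothetical decision rule; an organization would set its own thresholds."""
    if a.severity >= 4 and a.likelihood >= 0.01 and not a.mitigation_feasible:
        return "halt development until the risk is adequately managed"
    if a.severity >= 3 and not a.mitigation_feasible:
        return "adjust development strategy and re-evaluate"
    return "continue, with documented monitoring"

print(development_decision(RiskAssessment(severity=5, likelihood=0.02, mitigation_feasible=False)))
# -> halt development until the risk is adequately managed
```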

3. Gaps in Model Development Safeguards

The guidance’s treatment of safeguards, particularly those related to model development, lacks sufficient detail to be practically useful. This is most evident in Appendix B, which lists example safeguards. While this appendix is a valuable addition, the safeguards related to model training (“Improve the model’s training”) are notably lacking in detail compared to the safeguards around model security and detecting misuse.

While the guidance covers many aspects of risk management comprehensively, especially model security, it does not provide enough specific recommendations for technical approaches to building safer models during the development phase. This gap could limit the practical utility of the guidance for AI developers seeking to implement safety measures from the earliest stages of model creation.

Specific recommendations for additional safeguards for model development

For some safeguards, we recommend that the misuse risk guidance explicitly reference relevant sections of NIST 600-1, the Generative Artificial Intelligence Profile. Specifically, the GAI profile offers more comprehensive guidance on data-related and monitoring safeguards. For instance, the profile emphasizes documenting training data curation policies (MP-4.1-004) and establishing policies for data collection, retention, and quality (MP-4.1-005), which are crucial for managing misuse risk from the earliest stages of development. Additionally, the profile suggests implementing real-time monitoring processes for analyzing generated content performance and trustworthiness characteristics (MG-3.2-006), which could significantly enhance ongoing risk management during development. These references to the GAI Profile on model development safeguards could take the form of an additional item in Appendix B, or be incorporated into the relevant sections earlier in the guidance.

Beyond pointing to the model development safeguards included in the GAI Profile, we also recommend expanding Appendix B to include further safeguards for the model development phase. Both the GAI Profile and the current misuse risk guidance lack specific recommendations for two key model development safeguards: iterative safety testing throughout development and staged development/release processes. Below are two proposed additions to Appendix B:

Safeguard: Implement iterative safety testing throughout development.
Possible implementation methods:
* Develop and continuously update a comprehensive suite of safety tests covering identified risk areas.
* Establish quantitative safety benchmarks and ensure the model meets predefined thresholds before progressing to next development stages.
* Conduct regular adversarial testing, updating the test suite based on discovered vulnerabilities or emerging threats.

Safeguard: Consider a staged development and release process.
Possible implementation methods:
* Define clear safety criteria that must be met before advancing to each subsequent stage of model development or deployment.
* Implement a phased release strategy, incrementally increasing model capabilities or access only after thorough safety evaluations at each stage.
* If possible, maintain the capability to rapidly revert to previous versions or restrict access if safety issues are identified post-release.

The proposed safeguard “Implement iterative safety testing throughout development” addresses the current guidance’s limited detail on model training and development safeguards. This approach aligns with the emphasis on proactive and ongoing risk assessment in Barrett et al.’s AI Risk-Management Standards Profile for General-Purpose AI Systems and Foundation Models (the “GPAIS Profile”). Specifically, the Profile recommends identifying “GPAIS impacts…and risks (including potential uses, misuses, and abuses), starting from an early AI lifecycle stage and repeatedly through new lifecycle phases or as new information becomes available” (Barrett et al., 2023, p. 19). The GPAIS Profile further suggests that for larger models, developers should “analyze, customize, reanalyze, customize differently, etc., then deploy and monitor” (Barrett et al., 2023, p. 19), where “analyze” encompasses probing, stress testing, and red teaming. This iterative safety testing would integrate safety considerations throughout development, aligning with the guidance’s emphasis on proactive risk management and anticipating potential misuse risk.

Similarly, the proposed safeguard “Consider a staged development and release process” addresses a significant gap in the current guidance. While Practice 5.1 discusses pre-deployment risk assessment, it lacks a structured approach to incrementally increasing model capabilities or access. Solaiman et al. (2023) propose a “gradient of release” framework for generative AI, a phased approach to model deployment that allows for iterative risk assessment and mitigation. This aligns with the guidance’s emphasis on ongoing risk management and could enhance the ‘margin of safety’ concept in Practice 5.3. Implementing such a staged process would introduce multiple risk assessment checkpoints throughout development and deployment, potentially improving safety outcomes.
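As a concrete illustration of how such a staged process could gate progression on safety criteria, the sketch below checks evaluation scores against per-stage thresholds (the benchmark names and threshold values are hypothetical):

```python
# Hypothetical gate check for a staged development and release process.
# Benchmark names and thresholds are invented for illustration.

STAGE_GATES = {
    "continue_pretraining": {"cbrn_uplift_eval": 0.10, "cyber_eval": 0.15},
    "internal_deployment":  {"cbrn_uplift_eval": 0.05, "cyber_eval": 0.10},
    "external_release":     {"cbrn_uplift_eval": 0.02, "cyber_eval": 0.05},
}

def gate_passed(stage: str, eval_scores: dict) -> bool:
    """Return True only if every tracked risk score is at or below the stage's threshold."""
    thresholds = STAGE_GATES[stage]
    return all(eval_scores.get(name, 1.0) <= limit for name, limit in thresholds.items())

scores = {"cbrn_uplift_eval": 0.04, "cyber_eval": 0.06}
print(gate_passed("internal_deployment", scores))  # True
print(gate_passed("external_release", scores))     # False: both scores exceed the stricter release thresholds
```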

Conclusion

NIST’s guidance on “Managing Misuse Risk for Dual-Use Foundation Models” represents a significant step forward in establishing robust practices for mitigating catastrophic risks associated with advanced AI systems. The document’s emphasis on comprehensive risk assessment, transparent decision-making, and proactive safeguards demonstrates a commendable commitment to responsible AI development. However, to more robustly contribute to risk mitigation, the guidance must evolve to address key challenges, including a stronger approach to measuring misuse risk, consistent language on halting development, and more detailed model development safeguards.

As the science of AI risk assessment advances, this guidance should be iteratively updated to address emerging risks and incorporate new best practices. While voluntary guidance is crucial, it is important to recognize that it cannot replace the need for robust policy and regulation. A combination of industry best practices, government oversight, and international cooperation will be necessary to ensure the responsible development of high-risk AI systems.

We appreciate the opportunity to provide input on this important document. FAS stands ready to continue assisting NIST in refining and implementing this guidance, as well as in developing further resources for responsible AI development. We believe that close collaboration between government agencies, industry leaders, and civil society organizations is key to realizing the benefits of AI while effectively mitigating its most serious risks.

Scaling AI Safely: Can Preparedness Frameworks Pull Their Weight?

A new class of risk mitigation policies has recently come into vogue for frontier AI developers. Known alternately as Responsible Scaling Policies or Preparedness Frameworks, these policies outline commitments to risk mitigations that developers of the most advanced AI models will implement as their models display increasingly risky capabilities. While the idea for these policies is less than a year old, already two of the most advanced AI developers, Anthropic and OpenAI, have published initial versions of these policies. The U.K. AI Safety Institute asked frontier AI developers about their “Responsible Capability Scaling” policies ahead of the November 2023 UK AI Safety Summit. It seems that these policies are here to stay.

The National Institute of Standards & Technology (NIST) recently sought public input on its assignments regarding generative AI risk management, AI evaluation, and red-teaming. The Federation of American Scientists was happy to provide input; this is the full text of our response. NIST’s request for information (RFI) highlighted several potential risks and impacts of potentially dual-use foundation models, including: “Negative effects of system interaction and tool use…chemical, biological, radiological, and nuclear (CBRN) risks…[e]nhancing or otherwise affecting malign cyber actors’ capabilities…[and i]mpacts to individuals and society.” This RFI presented a good opportunity for us to discuss the benefits and drawbacks of these new risk mitigation policies.

This report will provide some background on this class of risk mitigation policies (we use the term Preparedness Framework, for reasons to be described below). We outline suggested criteria for robust Preparedness Frameworks (PFs) and evaluate two key documents, Anthropic’s Responsible Scaling Policy and OpenAI’s Preparedness Framework, against these criteria. We claim that these policies are net-positive and should be encouraged. At the same time, we identify shortcomings of current PFs, chiefly that they are underspecified, insufficiently conservative, and address structural risks poorly. Improvement in the state of the art of risk evaluation for frontier AI models is a prerequisite for a meaningfully binding PF. Most importantly, PFs, as unilateral commitments by private actors, cannot replace public policy.

Motivation for Preparedness Frameworks

As AI labs develop potentially dual-use foundation models (as defined by Executive Order No. 14110, the “AI EO”) with capability, compute, and efficiency improvements, novel risks may emerge, some of them potentially catastrophic. Today’s foundation models can already cause harm and pose some risks, especially as they are more broadly used. Advanced large language models at times display unpredictable behaviors.

To this point, these harms have not risen to the level of posing catastrophic risks, defined here broadly as “devastating consequences for vast numbers of people.” The capabilities of models at the current state of the art simply do not imply levels of catastrophic risk above current non-AI related margins.1 However, as these models continue to scale in training compute, some speculate they may develop novel capabilities that could potentially be misused. The specific capabilities that will emerge from further scaling remain difficult to predict with confidence or certainty. Some analysis indicates that as training compute for AI models has doubled approximately every six months since 2015, performance on capability benchmarks has also steadily improved. While it is possible that bigger models will continue to deliver better performance, it would not be surprising if smaller models emerged with comparable or better capabilities: despite years of research by machine learning theorists, our understanding of how the number of model parameters relates to model capabilities remains uncertain.

Nonetheless, as capabilities increase, risks may also increase, and new risks may appear. The AI EO detailed some novel risks of potentially dual-use foundation models, including chemical, biological, radiological, or nuclear (CBRN) risks and advanced cybersecurity risks. Other risks are more speculative, such as risks of model autonomy, loss of control of AI systems, or negative impacts on users including risks of persuasion.2 Without robust risk mitigations, it is plausible that increasingly powerful AI systems will eventually pose greater societal risks.

Other technologies that pose catastrophic risks, such as nuclear technologies, are heavily regulated in order to prevent those risks from resulting in serious harms. There is a growing movement to regulate development of potentially dual-use biotechnologies, particularly gain-of-function research on the most pathogenic microbes. Given the rapid pace of progress at the AI frontier, comprehensive government regulation has yet to catch up; private companies that develop these models are starting to take it upon themselves to prevent or mitigate the risks of advanced AI development.

Prevention of such novel and consequential risks requires developers to implement policies that address potential risks iteratively. That is where preparedness frameworks come in. A preparedness framework is used to assess risk levels across key categories and outline associated risk mitigations. As the introduction to OpenAI’s PF states, “The processes laid out in each version of the Preparedness Framework will help us rapidly improve our understanding of the science and empirical texture of catastrophic risk, and establish the processes needed to protect against unsafe development.” Without such processes and commitments, the tendency to prioritize speed over safety concerns might prevail. While the exact consequences of failing to mitigate these risks are uncertain, they could potentially be significant.

Preparedness frameworks are limited in scope to catastrophic risks. These policies aim to prevent the worst conceivable outcomes of the development of future advanced AI systems; they are not intended to cover risks from existing systems. We acknowledge that this is an important limitation of preparedness frameworks. Developers can and should address both today’s risks and future risks at the same time; preparedness frameworks attempt to address the latter, while other “trustworthy AI” policies attempt to address a broader swathe of risks. For instance, OpenAI’s “Preparedness” team sits alongside its “Safety Systems” team, which “focuses on mitigating misuse of current models and products like ChatGPT.”

A note about terminology: The term “Responsible Scaling Policy” (RSP) is the term that took hold first, but it presupposes scaling of compute and capabilities by default. “Preparedness Framework” (PF) is a term coined by OpenAI, and it communicates the idea that the company needs to be prepared as its models approach the level of artificial general intelligence. Of the two options, “Preparedness Framework” communicates the essential idea more clearly: developers of potentially dual-use foundation models must be prepared for and mitigate potential catastrophic risks from development of these models.

The Industry Landscape

In September of 2023, ARC Evals (now METR, “Model Evaluation & Threat Research”) published a blog post titled “Responsible Scaling Policies (RSPs).” This post outlined the motivation and basic structure of an RSP, and revealed that ARC Evals had helped Anthropic write its RSP (version 1.0) which had been released publicly a few days prior. (ARC Evals had also run pre-deployment evaluations on Anthropic’s Claude model and OpenAI’s GPT-4.) And in December 2023, OpenAI published its Preparedness Framework in beta; while using new terminology, this document is structurally similar to ARC Evals’ outline of the structure of an RSP. Both OpenAI and Anthropic have indicated that they plan to update their PFs with new information as the frontier of AI development advances.

Not every AI company should develop or maintain a preparedness framework. Since these policies relate to catastrophic risk from models with advanced capabilities, only those developers whose models could plausibly attain those capabilities should use PFs. Because these advanced capabilities are associated with high levels of training compute, a good interim threshold for who should develop a PF could be the same as the AI EO threshold for potentially dual-use foundation models; that is, developers of models trained on over 10^26 floating-point operations (or October 2023-equivalent level of compute adjusted for compute efficiency gains).3 Currently, only a handful of developers have models that even approach this threshold. This threshold should be subject to change, like that of the AI EO, as developers continue to push the frontier (e.g. by developing more efficient algorithms or realizing other compute efficiency gains).
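For a rough sense of how a developer might judge whether a planned training run approaches this threshold, the sketch below uses the common approximation that training compute is about six times the parameter count times the number of training tokens; the model configurations are hypothetical:

```python
# Rough training-compute estimate using the widely cited ~6 * N * D approximation,
# where N is parameter count and D is training tokens. Model figures are hypothetical.

EO_THRESHOLD_FLOP = 1e26  # reporting threshold for potentially dual-use foundation models

def training_compute_flop(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

examples = {
    "hypothetical 70B-parameter model, 15T tokens": training_compute_flop(7e10, 1.5e13),  # ~6.3e24
    "hypothetical 1T-parameter model, 30T tokens": training_compute_flop(1e12, 3e13),     # ~1.8e26
}

for name, flop in examples.items():
    print(f"{name}: {flop:.1e} FLOP; exceeds threshold: {flop > EO_THRESHOLD_FLOP}")
```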

While several other companies published “Responsible Capability Scaling” documents ahead of the UK AI Safety Summit, including DeepMind, Meta, Microsoft, Amazon, and Inflection AI, the rest of this report focuses primarily on OpenAI’s PF and Anthropic’s RSP. 

Weaknesses of Preparedness Frameworks

Preparedness frameworks are not panaceas for AI-associated risks. Even with improvements in specificity, transparency, and strengthened risk mitigations, there are important weaknesses to the use of PFs. Here we outline two weaknesses of PFs and possible responses to them.

1. Spirit vs. text: PFs are voluntary commitments whose success depends on developers’ faithfulness to their principles.

Current risk thresholds and mitigations are defined loosely. In Anthropic’s RSP, for instance, the jump from the current risk level posed by Claude 2 (its state of the art model) to the next risk level is defined in part by the following: “Access to the model would substantially increase the risk of catastrophic misuse, either by proliferating capabilities, lowering costs, or enabling new methods of attack….” A “substantial increase” is not well-defined. This ambiguity leaves room for interpretation; since implementing risk mitigations can be costly, developers could have an incentive to take advantage of such ambiguity if they do not follow the spirit of the policy.

This concern about the gap between following the spirit of the PF and following the text might be somewhat eased with more specificity about risk thresholds and associated mitigations, and especially with more transparency and public accountability to these commitments.

To their credit, OpenAI’s PF and Anthropic’s RSP show a serious approach to the risks of developing increasingly advanced AI systems. OpenAI’s PF includes a commitment to fine-tune its models to better elicit capabilities along particular risk categories, then evaluate “against these enhanced models to ensure we are testing against the ‘worst case’ scenario we know of.” They also commit to triggering risk mitigations “when any of the tracked risk categories increase in severity, rather than only when they all increase together.” And Anthropic “commit[s] to pause the scaling and/or delay the deployment of new models whenever our scaling ability outstrips our ability to comply with the safety procedures for the corresponding ASL [AI Safety Level].” These commitments are costly signals that these developers are serious about their PFs.

2. Private commitment vs. public policy: PFs are unilateral commitments that individual developers take on; we might prefer more universal policy (or regulatory) approaches.

Private companies developing AI systems may not fully account for broader societal risks. Consider an analogy to climate change—no single company’s emissions are solely responsible for risks like sea level rise or extreme weather. The risk comes from the aggregate emissions of all companies. Similarly, AI developers may not consider how their systems interact with others across society, potentially creating structural risks. Like climate change, the societal risks from AI will likely come from the cumulative impact of many different systems. Unilateral commitments are poor tools to address such risks.

Furthermore, PFs might reduce the urgency for government intervention. By appearing safety-conscious, developers could diminish the perceived need for regulatory measures. Policymakers might over-rely on self-regulation by AI developers, potentially compromising public interest for private gains.

Policy can and should step into the gap left by PFs. Policy is more aligned to the public good, and as such is less subject to competing incentives. And policy can be enforced, unlike voluntary commitments. In general, preparedness frameworks and similar policies help hold private actors accountable to their public commitments; this effect is stronger with more specificity in defining risk thresholds, better evaluation methods, and more transparency in reporting. However, these policies cannot and should not replace government action to reduce catastrophic risks (especially structural risks) of frontier AI systems.

Suggested Criteria for Robust Preparedness Frameworks

These criteria are adapted from the ARC Evals post, Anthropic’s RSP, and OpenAI’s PF. Broadly, they are aspirational; no existing preparedness framework meets all or most of these criteria.

For each criterion, we explain the key considerations for developers adopting PFs. We analyze OpenAI’s PF and Anthropic’s RSP to illustrate the strengths and shortcomings of their approaches. Again, these policies are net-positive and should be encouraged. They demonstrate costly unilateral commitments to measuring and addressing catastrophic risk from their models; they meaningfully improve on the status quo. However, these initial PFs are underspecified and insufficiently conservative. Improvement in the state of the art of risk evaluation and mitigation, and subsequent updates, would make them more robust.

Table 1: Summary of suggested criteria for robust preparedness frameworks.
Breadth: Preparedness frameworks should cover the breadth of potential catastrophic risks of developing frontier AI models. (“What risks are covered?”)
Risk appetite: Preparedness frameworks should define the developer’s acceptable risk level (“risk appetite”) in terms of likelihood and severity of risk. (“What is an acceptable level of risk?”)
Clarity: Preparedness frameworks should clearly define capability levels and risk thresholds. (“How will developers know they have hit capability levels associated with particular risks?”)
Evaluation: Preparedness frameworks should include detailed evaluation procedures for AI models, ensuring comprehensive risk assessment. (“What tests will developers run on their models?”)
Mitigation: For different risk thresholds, preparedness frameworks should identify and commit to pre-specified risk mitigations. (“What will developers do when their models reach particular levels of risk?”)
Robustness: Preparedness frameworks’ pre-specified risk mitigations must effectively address potentially catastrophic risks. (“How do developers know their risk mitigations will work?”)
Accountability: Preparedness frameworks should combine credible risk mitigation commitments with governance structures that ensure these commitments are fulfilled. (“How can developers hold themselves accountable to their commitment to safety?”)
Amendments: Preparedness frameworks should include a mechanism for regular updates to the framework itself, in light of ongoing research and advances in AI. (“How will developers change their PFs over time?”)
Transparency: For models with risk above the lowest level, both pre- and post-mitigation evaluation results and methods should be public, including any performed mitigations. (“How will developers communicate about their models’ capabilities and risks?”)

1. Preparedness frameworks should cover the breadth of potential catastrophic risks of developing frontier AI models. 

These risks may include:

Preparedness frameworks should apply to catastrophic risks in particular because they govern the scaling of capabilities of the most advanced AI models, and because catastrophic risks are of the highest consequence to such development. PFs are one tool among many that developers of the most advanced AI models should use to prevent harm. Developers of advanced AI models tend to also have other “trustworthy AI” policies, which seek to prevent and address already-existing risks such as harmful outputs, disinformation, and synthetic sexual content. Despite PFs’ focus on potentially catastrophic risks, faithfully applying PFs may help developers catch many other kinds of risks as well, since they involve extensive evaluation for misuse potential and adverse human impacts.

2. Preparedness frameworks should define the developer’s acceptable risk level (“risk appetite”) in terms of likelihood and severity of risk, in accordance with the NIST AI Risk Management Framework, section Map 1.5.

Neither OpenAI nor Anthropic has publicly declared their risk appetite. This is a nascent field of research, as these risks are novel and perhaps less predictable than, e.g., nuclear accident risk.5 NIST and other standard-setting bodies will be crucial in developing AI risk metrology. For now, PFs should state developers’ risk appetites as clearly as possible, and update them regularly with research advances.6

AI developers’ risk appetites might be different than a regulatory risk appetite. Developers should elucidate their risk appetite in quantitative terms so their PFs can be evaluated accordingly. As in the case of nuclear technology, regulators may eventually impose risk thresholds on frontier AI developers. At this point, however, there is no standard, scientifically-grounded approach to measuring the potential for catastrophic AI risk; this has to start with the developers of the most capable AI models.
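As an illustration of what a quantitative statement of risk appetite could look like, a developer might express its tolerance as a maximum acceptable annual probability of an incident at each severity tier. The figures below are entirely hypothetical:

```python
# Hypothetical quantitative risk appetite: maximum acceptable annual probability
# of an incident at each severity tier. All figures are illustrative only.

RISK_APPETITE = {
    "catastrophic (mass casualties or systemic damage)": 1e-6,
    "severe (major economic or physical harm)": 1e-4,
    "moderate (contained, recoverable harm)": 1e-2,
}

def within_appetite(severity: str, estimated_annual_probability: float) -> bool:
    return estimated_annual_probability <= RISK_APPETITE[severity]

print(within_appetite("severe (major economic or physical harm)", 5e-5))           # True
print(within_appetite("catastrophic (mass casualties or systemic damage)", 1e-5))  # False
```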

3. Preparedness frameworks should clearly define capability levels and risk thresholds. Risk thresholds should be quantified robustly enough to hold developers accountable to their commitments.

OpenAI and Anthropic both outline qualitative risk thresholds corresponding with different categories of risk. For instance, in OpenAI’s PF, the High risk threshold in the CBRN category reads: “Model enables an expert to develop a novel threat vector OR model provides meaningfully improved assistance that enables anyone with basic training in a relevant field (e.g., introductory undergraduate biology course) to be able to create a CBRN threat.” And Anthropic’s RSP defines the ASL-3 [AI Safety Level] threshold as: “Low-level autonomous capabilities, or access to the model would substantially increase the risk of catastrophic misuse, either by proliferating capabilities, lowering costs, or enabling new methods of attack, as compared to a non-LLM baseline of risk.”

These qualitative thresholds are under-specified; reasonable people are likely to differ on what “meaningfully improved assistance” looks like, or a “substantial increase [in] the risk of catastrophic misuse.” In PFs, these thresholds should be quantified to the extent possible.

To be sure, the AI development research community currently lacks a good empirical understanding of the likelihood or quantification of frontier AI-related risks. Again, this is a novel science that needs to be developed with input from both the private and public sectors. Since this science is still developing, it is natural to want to avoid too much quantification. A conceivable failure mode is that developers “check the boxes,” which may become obsolete quickly, in lieu of using their judgment to determine when capabilities are dangerous enough to warrant higher risk mitigations. Again, as research improves, we should expect to see improvements in PFs’ specification of risk thresholds.

4. Preparedness frameworks should include detailed evaluation procedures for AI models, ensuring comprehensive risk assessment within a developer’s tolerance. 

Anthropic and OpenAI both have room for improvement on detailing their evaluation procedures. Anthropic’s RSP includes evaluation procedures for model autonomy and misuse risks. Its evaluation procedures for model autonomy are impressively detailed, including clearly defined tasks on which it will evaluate its models. Its evaluation procedures for misuse risk are much less well-defined, though it does include the following note: “We stress that this will be hard and require iteration. There are fundamental uncertainties and disagreements about every layer…It will take time, consultation with experts, and continual updating.” And OpenAI’s PF includes a “Model Scorecard,” a mock evaluation of an advanced AI model. This model scorecard includes the hypothetical results of various evaluations in all four of their tracked risk categories; it does not appear to be a comprehensive list of evaluation procedures.

Again, the science of AI model evaluation is young. The AI EO directs NIST to develop red-teaming guidance for developers of potentially dual-use foundation models. NIST, along with private actors such as METR and other AI evaluators, will play a crucial role in creating and testing red-teaming practices and model evaluations that elicit all relevant capabilities.

5. For different risk thresholds, preparedness frameworks should identify and commit to pre-specified risk mitigations.

Classes of risk mitigations may include:

Both OpenAI’s PF and Anthropic’s RSP commit to a number of pre-specified risk mitigations for different thresholds. For example, for what Anthropic calls “ASL-2” models (including its most advanced model, Claude 2), they commit to measures including publishing model cards, providing a vulnerability reporting mechanism, enforcing an acceptable use policy, and more. Models at higher risk thresholds (what Anthropic calls “ASL-3” and above) have different, more stringent risk mitigations, including “limit[ing] access to training techniques and model hyperparameters…” and “implement[ing] measures designed to harden our security…”

Risk mitigations can and should differ in approaches to development versus deployment. There are different levels of risk associated with possessing models internally and allowing external actors to interact with them. Both OpenAI’s PF and Anthropic’s RSP include different risk mitigation approaches for development and deployment. For example, OpenAI’s PF restricts deployment of models such that “Only models with a post-mitigation score of “medium” or below can be deployed,” whereas it restricts development of models such that “Only models with a post-mitigation score of “high” or below can be developed further.”
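Read together, the quoted deployment and development restrictions amount to a simple gating rule on the highest post-mitigation risk score across tracked categories. The sketch below restates that logic in code; it is our paraphrase of the quoted thresholds, not OpenAI's implementation:

```python
# Restatement of the quoted OpenAI PF gating rule as a simple check.
# Risk levels are ordered; scores refer to post-mitigation risk in each tracked category.

RISK_LEVELS = ["low", "medium", "high", "critical"]

def highest_level(scores: list) -> str:
    return max(scores, key=RISK_LEVELS.index)

def can_deploy(post_mitigation_scores: list) -> bool:
    # "Only models with a post-mitigation score of 'medium' or below can be deployed"
    return RISK_LEVELS.index(highest_level(post_mitigation_scores)) <= RISK_LEVELS.index("medium")

def can_continue_development(post_mitigation_scores: list) -> bool:
    # "Only models with a post-mitigation score of 'high' or below can be developed further"
    return RISK_LEVELS.index(highest_level(post_mitigation_scores)) <= RISK_LEVELS.index("high")

scores = ["low", "high", "medium", "medium"]  # one tracked category sits at "high"
print(can_deploy(scores))                # False
print(can_continue_development(scores))  # True
```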

Mitigations should be defined as specifically as possible, with the understanding that as the state of the art changes, this too is an area that will require periodic updates. Developers should include some room for judgment here.

6. Preparedness frameworks’ pre-specified risk mitigations must effectively address potentially catastrophic risks.

Having confidence that the risk mitigations do in fact address potential catastrophic risks is perhaps the most important and difficult aspect of a PF to evaluate. Catastrophic risk from AI is a novel and speculative field; evaluating AI capabilities is a science in its infancy; and there are no empirical studies of the effectiveness of risk mitigations preventing such risks. Given this uncertainty, frontier AI developers should err on the side of caution.

Both OpenAI and Anthropic should be more conservative in their risk mitigations. Consider OpenAI’s commitment to restricting development: “[I]f we reach (or are forecasted to reach) ‘critical’ pre-mitigation risk along any risk category, we commit to ensuring there are sufficient mitigations in place…for the overall post-mitigation risk to be back at most to ‘high’ level.” To understand this commitment, we have to look at their threshold definitions. Under the Model Autonomy category, the “critical” threshold in part includes: “model can self-exfiltrate under current prevailing security.” Setting aside that this threshold is still quite vague and difficult to evaluate (and setting aside the novelty of this capability), a model that approaches or exceeds this threshold by definition can self-exfiltrate, rendering all other risk mitigations ineffective. A more robust approach to restricting development would not permit training or possessing a model that comes close to exceeding this threshold.

As for Anthropic, consider their threshold for “ASL-3,” which reads in part: “Access to the model would substantially increase the risk of catastrophic misuse…” The risk mitigations for ASL-3 models include the following: “Harden security such that non-state attackers are unlikely to be able to steal model weights and advanced threat actors (e.g. states) cannot steal them without significant expense.” This is an admirable approach to securing potentially dual-use foundation models; but assuming state actors will seek out tools whose misuse involves catastrophic risk, a more conservative mitigation would entail hardening security such that it is unlikely that any actor, state or non-state, could steal the model weights of such a model.9

7. Preparedness frameworks should combine credible risk mitigation commitments with governance structures that ensure these commitments are fulfilled.

Preparedness Frameworks should detail governance structures that incentivize actually undertaking pre-committed risk mitigations when thresholds are met. Other incentives, including profit and shareholder value, sometimes conflict with risk management.

Anthropic’s RSP includes a number of procedural commitments meant to enhance the credibility of its risk mitigation commitments. For example, Anthropic commits to proactively planning to pause scaling of its models,10 publicly sharing evaluation results, and appointing a “Responsible Scaling Officer.” However, Anthropic’s RSP also includes the following clause: “[I]n a situation of extreme emergency, such as when a clearly bad actor (such as a rogue state) is scaling in so reckless a manner that it is likely to lead to imminent global catastrophe if not stopped…we could envisage a substantial loosening of these restrictions as an emergency response…” This clause potentially undermines the credibility of Anthropic’s other commitments in the RSP, if at any time it can point to another actor who in its view is scaling recklessly.

OpenAI’s PF also outlines commendable governance measures, including procedural commitments, meant to enhance its risk mitigation credibility. It summarizes its operational structure: “(1) [T]here is a dedicated team “on the ground” focused on preparedness research and monitoring (Preparedness team), (2) there is an advisory group (Safety Advisory Group) that has a sufficient diversity of perspectives and technical expertise to provide nuanced input and recommendations, and (3) there is a final decision-maker (OpenAI Leadership, with the option for the OpenAI Board of Directors to overrule).”

8. Preparedness frameworks should include a mechanism for regular updates to the framework itself, in light of ongoing research and advances in AI.

Both OpenAI’s PF and Anthropic’s RSP acknowledge the importance of regular updates. This is reflected in both of these documents’ names: Anthropic labels its RSP as “Version 1.0,” while OpenAI’s PF is labeled as “(Beta).”

Anthropic’s RSP includes an “Update Process” that reads in part: “We expect most updates to this process to be incremental…as we learn more about model safety features or unexpected capabilities…” This language directly commits Anthropic to changing its RSP as the state of the art changes. OpenAI references updates throughout its PF, notably committing to updating its evaluation methods and rubrics (“The Scorecard will be regularly updated by the Preparedness team to help ensure it reflects the latest research and findings”).

9. For models with risk above the lowest level, most evaluation results and methods should be public, including any performed mitigations

Publishing model evaluations and mitigations is an important tool for holding developers accountable to their PF commitments. Sensitivity about the level of transparency is key. For example, full information about evaluation methodology and risk mitigations could be exploited by malicious actors. Anthropic’s RSP takes a balanced approach in committing to “[p]ublicly share evaluation results after model deployment where possible, in some cases in the initial model card, in other cases with a delay if it serves a broad safety interest.” OpenAI’s PF does not commit to publishing its Model Scorecards, but OpenAI has since published related research on whether its models aid the creation of biological threats.

Conclusion

Preparedness frameworks represent a promising approach for AI developers to voluntarily commit to robust risk management practices. However, current versions have weaknesses—particularly their lack of specificity in risk thresholds, insufficiently conservative risk mitigation approaches, and inadequacy in addressing structural risks. Frontier AI developers without PFs should consider adopting them, and OpenAI and Anthropic should update their policies to strengthen risk mitigations and include more specificity.

Strengthening preparedness frameworks will require advancing AI safety science to enable precise risk quantification and develop new mitigations. NIST, academia, and industry will need to collaborate to measure and model frontier AI risks. Policymakers have a crucial opportunity to adapt regulatory approaches from other high-risk technologies like nuclear power to balance AI innovation and catastrophic risk prevention. Furthermore, standards bodies could develop more robust best practices for AI evaluations, including guidance for third-party auditors.

Safety must become an intrinsic priority for the AI community as a whole, not just for the private actors creating preparedness frameworks. All stakeholders, including private companies, academics, policymakers, and civil society organizations, have roles to play in steering AI development toward societally beneficial outcomes. Preparedness frameworks are one tool, but they are not sufficient absent more comprehensive, multi-stakeholder efforts to scale AI safely and for the public good.

Many thanks to Madeleine Chang, Di Cooke, Thomas Woodside, and Felipe Calero Forero for providing helpful feedback.

A National AI for Good Initiative

Summary

Artificial intelligence (AI) and machine learning (ML) models can solve well-specified problems, like automatically diagnosing disease or grading student essays, at scale. But applications of AI and ML for major social and scientific problems are often constrained by a lack of high-quality, publicly available data—the foundation on which AI and ML algorithms are built.

The Biden-Harris Administration should launch a multi-agency initiative to coordinate the academic, industry, and government research community to support the identification and development of datasets for applications of AI and ML in domain-specific, societally valuable contexts. The initiative would include activities like generating ideas for high-impact datasets, linking siloed data into larger and more useful datasets, making existing datasets easier to access, funding the creation of real-world testbeds for societally valuable AI and ML applications, and supporting public-private partnerships related to all of the above.