Six Policy Ideas for the National AI Strategy

The White House Office of Science and Technology Policy (OSTP) has sought public input for the Biden administration’s National AI Strategy, acknowledging both the potential benefits and the risks of advanced AI. The Federation of American Scientists (FAS) was happy to recommend specific actions federal agencies can take to safeguard Americans’ rights and safety. With U.S. companies creating powerful frontier AI models, the federal government must guide this technology’s development toward public benefit while mitigating its risks.

Recommendation 1: OSTP should work with a suitable agency to develop and implement a pre-deployment risk assessment protocol that applies to any frontier AI model.

Before deploying a frontier AI system, developers should be required to demonstrate its safety, trustworthiness, and reliability through a pre-deployment risk assessment. Such a protocol would systematically analyze a model’s potential risks and vulnerabilities before it is released.

We advocate for increased funding for the National Institute of Standards and Technology (NIST) to enhance its risk measurement capacity and to develop robust benchmarks for AI model risk assessment. Building on NIST’s AI Risk Management Framework (RMF) would standardize evaluation metrics while accommodating cases that differ from large commercial systems like OpenAI’s GPT-4, such as open-source models, academic research, and fine-tuned models.

We propose that the Federal Trade Commission (FTC), under Section 5 of the FTC Act, implement and enforce this pre-deployment risk assessment protocol. The FTC’s mandate to prevent unfair or deceptive practices in commerce aligns well with mitigating the potential risks of AI systems.

Recommendation 2: Adherence to the appropriate risk management framework should be compulsory for any AI-related project that receives federal funding.

The U.S. government, as a significant funder of AI through contracts and grants, has both a responsibility and an opportunity: a responsibility to ensure that its AI applications meet a high bar for risk management, and an opportunity to foster a culture of safety in AI development more broadly. Adherence to a risk management framework should therefore be a prerequisite for AI projects seeking federal funds.

Guidelines such as NIST’s AI RMF are currently voluntary; we propose making them compulsory for federally funded work. Agencies should require contractors to document and verify the risk management practices in place for each contract, defaulting to the NIST AI RMF where an agency has no guidelines of its own. The National Science Foundation (NSF) should likewise require grant applicants for AI projects to document how they will comply with the NIST AI RMF. This approach would ensure that all federally funded AI initiatives maintain a high bar for risk management.

Recommendation 3: NSF should increase its funding for “trustworthy AI” R&D.

“Trustworthy AI” refers to AI systems that are reliable, safe, transparent, privacy-enhanced, and unbiased. While NSF is a key non-military funder of AI R&D in the U.S., our rough estimates indicate that its investment in fields promoting trustworthiness has remained relatively static, accounting for only 10-15% of all AI grants. Given its $800 million annual AI-related budget, we recommend that NSF direct a larger share of grants toward research in trustworthy AI.

To enable this shift, NSF could stimulate trustworthy AI research through dedicated solicitations; launch targeted programs in this area; and add a “trustworthy AI” section to funding applications, prompting researchers to describe the trustworthiness of their proposed work. This would help evaluate the impacts of AI projects and elevate proposals with significant potential in trustworthy AI. Lastly, researchers could be encouraged or required to apply the NIST AI RMF during their studies.

Recommendation 4: FedRAMP should be broadened to cover AI applications contracted for by the federal government.

The Federal Risk and Authorization Management Program (FedRAMP) is a government-wide initiative that standardizes security requirements for cloud services. Given the rising use of AI services in federal operations, a similar set of security standards should apply to them, since these services often handle highly sensitive data related to national security and individual privacy.

Expanding FedRAMP’s mandate to include AI services is a logical next step in ensuring the secure integration of advanced technologies into federal operations. Applying a framework like FedRAMP to AI services would involve establishing robust security standards specific to AI, such as secure data handling, model transparency, and robustness against adversarial attacks. The expanded FedRAMP program would streamline AI integration into federal operations and avoid repetitive security assessments.

Recommendation 5: The Department of Homeland Security should establish an AI incidents database.

The Department of Homeland Security (DHS) should create a centralized AI incidents database detailing AI-related breaches, failures, and misuse across industries. DHS’s existing authorities under the Homeland Security Act of 2002 equip it for this role. Such a database would improve understanding of AI risks, help mitigate them, and build trust in the safety and security of AI systems.

Voluntary reporting from AI stakeholders should be encouraged while preserving data confidentiality. For the database to be effective, anonymized or aggregated data should be shared with AI developers, researchers, and policymakers so they can better understand AI risks. DHS could draw on existing databases, such as those maintained by the Partnership on AI and the Center for Security and Emerging Technology, and adapt reporting practices from established initiatives such as the Financial Services Information Sharing and Analysis Center.

Recommendation 6: OSTP should work with agencies to streamline the process of granting Interested Government Agency waivers to AI researchers on J-1 visas.

The ongoing global competition in AI makes it necessary to attract and retain a diverse, highly skilled talent pool. The U.S. J-1 Exchange Visitor Program, often used by visiting researchers, requires some participants to return to their home country for two years before they can apply for permanent residence.

Federal agencies can waive this requirement for certain individuals via an “Interested Government Agency” (IGA) request, and they should establish a transparent, predictable process for AI researchers to apply for such waivers. OSTP should collaborate with agencies to streamline this process. Taking cues from the Department of Defense’s structured application process, which includes a dedicated webpage, an application checklist, and a sample sponsor letter, would go a long way toward easing the transition of AI talent to permanent residency in the U.S.
Review the details of these proposals in our public comment.

What Are Acceptable Nuclear Risks?

When I read Eric Schlosser’s acclaimed 2013 book Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety, I found a tantalizing revelation on pages 170-171, where it asked, “What was the ‘acceptable’ probability of an accidental nuclear explosion?” and then described a 1957 Sandia report, “Acceptable Premature Probabilities for Nuclear Weapons,” which dealt with that question.

Unable to find the report online, I contacted Schlosser, who was kind enough to share it with me. (We owe him a debt of gratitude for obtaining it through a laborious Freedom of Information Act request.) The full report, Schlosser’s FOIA request, and my analysis of the report are now freely accessible on my Stanford website. (The 1955 Army report, “Acceptable Military Risks from Accidental Detonation of Atomic Weapons,” on which the 1957 Sandia report builds, appears not to be available. If anyone knows of an existing copy, please post a comment.)

Using the same criterion as this report*, which is, of course, open to question, my analysis shows that nuclear terrorism would have to pose a risk of at most 0.5% per year to be considered “acceptable.” In contrast, existing estimates are roughly 20 times higher.**

My analysis also shows that, using the report’s criterion*, the risk of a full-scale nuclear war would have to be on the order of 0.0005% per year, corresponding to a “time horizon” of 200,000 years. In contrast, my preliminary risk analysis of nuclear deterrence indicates that the actual risk is at least 100, and possibly 1,000, times higher. Similarly, when I ask people how long they think we can go before nuclear deterrence fails and we destroy ourselves (assuming nothing changes, which hopefully it will), almost everyone sees 10 years as too short and 1,000 years as too long, leaving 100 years as the only remaining order-of-magnitude estimate. A 100-year time horizon corresponds to a risk of roughly 1% per year, which is 2,000 times higher than the report’s criterion would allow.
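To make the arithmetic behind these comparisons explicit, here is a small illustrative calculation: a minimal Python sketch that assumes a constant annual probability of catastrophe, so the expected time horizon is simply the reciprocal of the annual risk. The probabilities used are the ones quoted above, not new estimates.

```python
# Illustrative arithmetic only: convert constant annual risks into expected
# "time horizons" and compare them with the report's acceptability criterion.

def time_horizon(annual_risk: float) -> float:
    """Expected years until the event occurs, assuming a constant annual probability."""
    return 1.0 / annual_risk

# Figures quoted in the text above (expressed as fractions per year).
acceptable_war_risk = 0.000005       # 0.0005% per year for full-scale nuclear war
acceptable_terrorism_risk = 0.005    # 0.5% per year for nuclear terrorism
risk_for_100_year_horizon = 1 / 100  # a 100-year horizon implies roughly 1% per year

print(f"Acceptable war risk -> {time_horizon(acceptable_war_risk):,.0f}-year horizon")
# -> 200,000-year horizon

print(f"A 100-year horizon is {risk_for_100_year_horizon / acceptable_war_risk:,.0f}x "
      f"the acceptable war risk")
# -> 2,000x

print(f"A 10% per year estimate is {0.10 / acceptable_terrorism_risk:.0f}x "
      f"the acceptable terrorism risk")
# -> 20x
```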

In short, the risks of catastrophes involving nuclear weapons currently appear to be far above any acceptable level. Isn’t it time we started paying more attention to those risks, and taking steps to reduce them?

* The report required that the expected number of deaths due to an accidental nuclear detonation should be no greater than the number of American deaths each year due to natural disasters, such as hurricanes, floods, and earthquakes.
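To see how such a criterion translates into an acceptable annual probability, note that it can be rearranged: if an accidental detonation would be expected to kill D people, and natural disasters kill N Americans per year, the acceptable annual probability is at most N/D. The sketch below, in the same spirit as the one above, makes that rearrangement explicit; the numeric inputs are arbitrary placeholders for illustration, not figures from the 1957 report or from my analysis.

```python
# The report's criterion, rearranged:
#   (annual probability of accidental detonation) x (expected deaths per detonation)
#       <= (annual U.S. deaths from natural disasters)
# so the acceptable annual probability is at most the ratio of the two death tolls.

def acceptable_annual_probability(annual_disaster_deaths: float,
                                  expected_deaths_per_detonation: float) -> float:
    return annual_disaster_deaths / expected_deaths_per_detonation

# Arbitrary placeholder inputs, for illustration only; the actual figures
# come from the 1957 report and the analysis linked above.
p = acceptable_annual_probability(annual_disaster_deaths=1_000,
                                  expected_deaths_per_detonation=1_000_000)
print(f"Acceptable probability under these placeholder inputs: {p:.2%} per year")
# -> 0.10% per year
```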

** In the documentary Nuclear Tipping Point, Henry Kissinger says, “if nothing fundamental changes, then I would expect the use of nuclear weapons in some 10 year period is very possible,” which is equivalent to a risk of approximately 10% per year. Similarly, noted national security expert Dr. Richard Garwin testified to Congress that he estimated the risk to be in the range of 10-20 percent per year. A survey of national security experts conducted by Senator Richard Lugar also produced estimates in the 10% per year range.