FAS Receives $1.5 Million Grant to Study the Artificial Intelligence / Global Risk Nexus

Grant Funds Research on AI’s Impact on Nuclear Weapons, Biosecurity, Military Autonomy, Cyber, and Other Global Issues

Washington, D.C. – September 11, 2024 – The Federation of American Scientists (FAS) has received a $1.5 million grant from the Future of Life Institute (FLI) to investigate the implications of artificial intelligence for global risk. The 18-month project supports FAS’s efforts to bring together the world’s leading security and technology experts to better understand and inform policy on the nexus between AI and several global issues, including nuclear deterrence and security, bioengineering, autonomy and lethality, and cybersecurity.

FAS’s CEO Daniel Correa noted that “understanding and responding to how new technology will change the world is why the Federation of American Scientists was founded. Against this backdrop, FAS has embarked on a critical journey to explore AI’s potential. Our goal is not just to understand these risks, but to ensure that as AI technology advances, humanity’s ability to understand and manage the potential of this technology advances as well.

“When the inventors of the atomic bomb looked at the world they helped create, they understood that without scientific expertise and broader perspectives, humanity would never realize the potential benefits they had helped bring about. They founded FAS to ensure the voice of objective science was at the policy table, and we remain committed to that effort after almost 80 years.”

“We’re excited to partner with FLI on this essential work,” said Jon Wolfsthal, who directs FAS’s Global Risk Program. “AI is changing the world. Understanding this technology and how humans interact with it will affect the pressing global issues that will determine the fate of all humanity. Our work will help policymakers better understand these complex relationships. No one fully understands what AI will do for us or to us, but having all perspectives in the room and working to protect against negative outcomes and maximize positive ones is how good policy starts.”

“As the power of AI systems continues to grow unchecked, so too does the risk of devastating misuse and accidents,” writes FLI President Max Tegmark. “Understanding the evolution of different global threats in the context of AI’s dizzying development is instrumental to our continued security, and we are honored to support FAS in this vital work.”

The project will include a series of activities: high-level, focused workshops with world-leading experts and officials on different aspects of artificial intelligence and global risk; policy sprints and fellowships; and directed research. It will conclude with a global summit on AI and global risk in Washington in 2026.


###



ABOUT FAS

The Federation of American Scientists (FAS) works to advance progress on a broad suite of contemporary issues where science, technology, and innovation policy can deliver dramatic gains, and seeks to ensure that scientific and technical expertise has a seat at the policymaking table. Established in 1945 by scientists in response to the atomic bomb, FAS continues to work on behalf of a safer, more equitable, and more peaceful world. More information at fas.org.

ABOUT FLI

Founded in 2014, the Future of Life Institute (FLI) is a leading nonprofit working to steer transformative technology towards benefiting humanity. FLI is best known for its 2023 open letter calling for a six-month pause on advanced AI development, endorsed by experts such as Yoshua Bengio and Stuart Russell, as well as its work on the Asilomar AI Principles and the recent EU AI Act.

We Need Biological Verification and Attribution Tools to Combat Disinformation Aimed at International Institutions

The Biological Weapons Convention’s Ninth Review Conference (RevCon) took place amid a unique geopolitical storm, as the COVID-19 pandemic raged and the Russian invasion of Ukraine took center stage. Russia’s continued claims of United States-sponsored bioweapons research laboratories in Ukraine only added to the tension. Russia asserted that it had uncovered evidence of offensive biological weapons research underway in Ukrainian labs, supported by the United States, and that the invasion of Ukraine was a response to the threat it faced so close to its borders.

While this story has been repeated countless times to countless audiences – including an Article V consultative meeting and the United Nations Security Council (UNSC) as part of an Article VI complaint against the U.S. lodged by Russia – the fact remains that Russia’s assertion is untrue.

The biological laboratories present in Ukraine operate as part of the Department of Defense’s Biological Threat Reduction Program and are run by Ukrainian scientists charged with detecting and responding to emerging pathogens in the area. Articles V and VI of the Biological Weapons Convention (BWC) are intended to be invoked when a State Party hopes to resolve a problem of cooperation or to report a breach of the Convention, respectively. The Article V formal consultative meeting resulted in no consensus being reached, and the Article VI UNSC meeting rejected Russia’s claims. The lack of consensus during the consultative meeting created a foothold for Russia to continue its campaign of disinformation, and the UNSC meeting only further reinforced it. The Article V and VI clauses are meant to provide some means of mediation and recourse to States Parties in the case of a BWC violation. However, in this case they were not invoked in good faith; rather, they were used as a springboard for a sweeping Russian disinformation campaign.

Abusing Behavioral Norms for Political Gain

While Russia’s initial steps of calling for an Article V consultative meeting and an Article VI Security Council investigation do not seem outwardly untoward, Russia’s behavior during and after these proceedings, which dismissed its claims, indicated a deeper purpose.

Example: Misdirection in Documentation Review

During the RevCon, the Russian delegation often brought up how the results of the UNSC investigation would be described in the final document during the article-by-article review, calling for versions of the document that included more slanted accounts of the events. They also continually mentioned the U.S.’ refusal to answer their questions, despite the answers being publicly available on the consultative meeting’s UNODA page. The Russians characterized their repeated mentions of the UNSC investigation findings as acts of defiant heroism, implying that the U.S. was trying to quash their valid concerns but that Russia would continue to raise them until the world had gotten the answers it deserved. This narrative directly contradicts the facts of the Article V and VI proceedings: the UNSC saw no need to continue investigating Russia’s claims.

Example: Side Programming with Questionable Intent 

The Russian delegation also held a side event during the RevCon dedicated to the outcomes of the consultative meeting. The side event included a short ‘documentary’ presenting Russian evidence that the U.S.-supported Ukrainian laboratories were conducting biological weapons research. This evidence included footage of pesticide-dispersal drones in a parking lot that were supposedly modified to hold bioweapons canisters, cardboard boxes with USAID stickers on them, and a list of pathogen samples supposedly present but destroyed prior to filming. When asked about next steps, the Russian delegation made thinly veiled threats to hold larger BWC negotiations hostage, stating that if the U.S. and its allies maintained their position and demonstrated no further interest in continuing dialogue, it would be difficult for the 9th RevCon to reach consensus.

Example: Misuse of ‘Point of Order’

Russia’s behavior at the 9th RevCon emphasizes the unwitting role international institutions can play as springboards for state-sponsored propaganda and disinformation. 

During opening statements, the Russian delegation repeatedly called points of order upon any mention of the Russian invasion of Ukraine. A point of order allows a delegation to respond to the speaker immediately, effectively interrupting the statement. During the Ukrainian delegation’s opening statement, the Russian delegation called four points of order, citing Ukraine’s “political statements” as disconnected from the BWC discussion. Russia’s use of the rules of procedure to bully other delegations continued: after concluding a point of order during the NATO delegate’s statement, they called another one almost immediately after the NATO delegate resumed her statement with the single word, “Russia.” This behavior continued throughout all three weeks of the RevCon.

Example: Single Vote Disruption Made in Bad Faith

All BWC decisions are adopted by consensus, meaning that all states parties have to agree for a decision to be made. While this helps ensure the greatest inclusivity and equality between states parties, as well as promote implementation, it also means that one country can act as a ‘spoiler’ and disrupt even widely supported changes.

For example, in 2001, the United States pulled out of verification mechanism negotiations at the last minute, upending the entire effort. Russia’s behavior in 2022 was similarly disruptive, but made with the goal of subversion. It changed how other delegations reacted, as representatives seemed more reluctant to mention the Article V and VI proceedings. Because the United Nations is structured to be impartial and the BWC is consensus-based, neither can, by its very nature, combat its own misuse. Any progress to be had by the BWC relies on states operating in good faith, which is impossible when a country is pursuing a disinformation agenda.

Thus, the very nature of the UN and associated bodies attenuates their ability to respond to states’ misuse. Russia’s behavior at the 9th RevCon is part of a pattern that shows no signs of slowing down. 

We Need More Sophisticated Biological Verification and Attribution Tools

The actions described above demonstrate that the door has been kicked fully open for regimes to use the UN and associated bodies as mouthpieces for state-sponsored propaganda.

It is therefore imperative that 1) more sophisticated biological verification and attribution tools be developed, and 2) the BWC implement a legally binding verification mechanism.

The development of better methods to verify whether biological research is for civil or military purposes will help remove ambiguity around laboratory activities around the world. It will also make it harder for benign activities to be misidentified as offensive biological weapons activities.

Further, improved attribution methods will help determine where biological weapons originate and will further remove ambiguity during a genuine biological attack.

The development of both these capabilities will strengthen an eventual legally binding verification mechanism. These two changes will also allow Article V consultative meetings and Article VI UNSC meetings to determine the presence of offensive bioweapons research more definitively, thus contributing substantively to the strengthening of the Convention. As ambiguity around the results of these investigations decreases, so does the space for disinformation to take hold.

A Step Forward in Mitigating Existential Threats

It’s no secret that the world is becoming increasingly complex and interconnected. And as our societies become more technologically advanced, the risks of a global catastrophe become greater. Natural disasters or severe climate change in one part of the world can quickly become a humanitarian crisis in another, an airborne virus can spread around the globe in days, and a terrorist attack can have ripple effects across borders. In recent years, we’ve seen a number of such major disasters—both natural and man-made—that have had devastating impacts on communities around the world. From hurricanes and earthquakes to cyberattacks and pandemics, these events have shown us just how vulnerable we are to the forces of nature and the dangers posed by our own technologies. Yet, despite the clear and present danger, governments appear woefully unprepared to manage any of these risks.

Fortunately, top lawmakers on the Senate Homeland Security and Governmental Affairs Committee (HSGAC), Senator Rob Portman (R-OH) and Senator Gary Peters (D-MI), have introduced legislation—the Global Catastrophic Risk Preparedness Act—that would establish an interagency task force to study how the U.S. government should prepare to mitigate and manage such risks. This bipartisan legislation would ensure that our government has the tools and resources necessary to identify, assess, and respond to these risks in a coordinated and effective manner, and it would be a first critical step towards a national preparedness plan.

In recent years, the U.S. government has been caught flat-footed by a number of global catastrophic risks. From pandemics to climate change, the U.S. has been slow to respond to these existential threats. While the probability of some of these events happening may be low, the potential consequences are far too severe to ignore.

Given the potentially devastating consequences of these events, it is essential that the U.S. government be prepared to manage them should they occur. Moreover, the cost of preparing for them is dwarfed by the cost of doing nothing and being caught unprepared when one of them does occur. For instance, the COVID-19 pandemic has cost the United States over $16 trillion, while the White House estimates it needs merely $65 billion to help prevent the next pandemic. Similarly, an analysis by Deloitte found that if the U.S. does not decarbonize over the next 50 years, it would cost the economy nearly $14.5 trillion, whereas the economy would gain $3 trillion if it rapidly decarbonizes over that time. But preventing such catastrophic events requires an all-of-government approach to mitigation and preparedness—a gap this legislation aims to fill.

Aside from the natural catastrophes waiting to happen in the absence of a coordinated global response, there are also man-made catastrophic risks that the U.S. government must be prepared to mitigate and manage. In 1939, Einstein wrote to President Roosevelt, warning him of the possibility of engineering a nuclear chain reaction that could lead to the creation of powerful bombs. Just a few years later, these bombs were created. In little more than a decade, enough had been produced that, for the first time in history, a handful of decision-makers could destroy civilization. Humanity had entered a new age, in which we faced not only existential risks from our natural environment, but also the possibility that we might be able to extinguish ourselves. A technology considered “emerging” in 1939 nearly led to the destruction of humanity 23 years later.

It is difficult to forecast what emerging technologies may develop in the future. Emerging technologies are quite literally emerging: when they are realized, they develop rapidly, and the full extent of their capabilities is often not known for years or even decades. Just last year, the Department of Justice indicted several FSB officers for their involvement in a multi-stage campaign in which they gained remote access to critical infrastructure, including a U.S. nuclear power plant where they planted malware. In 2005, Paul Krugman would likely have laughed at the possibility of the internet being used as a weapon to cause a nuclear meltdown. Machine learning technologies yet to come, in the hands of power-hungry dictators, could be used in a similar manner to expand their power and harm large populations in other countries. An algorithm that can identify a cure for superbugs could also be used by bioterrorists to find strains of viruses likely to evade any such cures. Thus, preparing for what could happen, even if it is considered a low-probability event today, only makes sense.

While lawmakers fuss over the finer details of the Global Catastrophic Risk Preparedness Act, it is also essential to look at the next steps. If an interagency task force develops an assessment of the current state of preparedness and implementation plans to prepare the U.S. government for these risks, it should also be responsible for ensuring that these plans are regularly updated and tested, so that we can be as prepared as possible when—not if, as we see with climate change—one of these events happens. Some may argue that this is unnecessary bureaucracy—but given the stakes involved, we cannot afford to take chances. The time to act is now, before it’s too late.

FAS Joins Over 30 Biosecurity Leaders Supporting Proposed Recommendations to the U.S. Government and NSABB on Strengthening ePPP and DURC Policies

WASHINGTON, D.C. — The Federation of American Scientists joined over 30 leaders in the scientific, medical, public health, research, and science policy fields in providing a set of recommendations regarding oversight of enhanced potential pandemic pathogen (ePPP) research and dual-use research to the National Science Advisory Board for Biosecurity (NSABB). Research involving potential pandemic pathogens can provide significant benefits to society but, if done incorrectly, can also contribute to pandemic risk.

The recommendations aim to diminish the risk that U.S. science could inadvertently initiate epidemics or pandemics, clarify the scope and decision-making process associated with governance of ePPP research and dual-use science, increase transparency around U.S. policy and decision making on these issues, and minimize or eliminate disruption of science work that does not pose these risks.

“Without proper governance, dual use research can be as dangerous as it is illuminating. The U.S. government must revise its decision-making process to protect scientists and the public,” said FAS CEO Daniel Correa. “Bio innovation and pandemic prevention are not disparate aims, and finding the balance between them can enhance pathogen research responsibly and foster innovation.”

The letter highlights five primary recommendations to improve the guidance and implementation of policies governing dual-use and ePPP research.

Read the full letter to the NSABB here.

2022 Bioautomation Challenge: Investing in Automating Protein Engineering

Thomas Kalil, Chief Innovation Officer of Schmidt Futures, interviews biomedical engineer Erika DeBenedictis

Schmidt Futures is supporting an initiative – the 2022 Bioautomation Challenge – to accelerate the adoption of automation by leading researchers in protein engineering. The Federation of American Scientists will act as the fiscal sponsor for this challenge.

This initiative was designed by Erika DeBenedictis, who will also serve as the program director. Erika holds a PhD in biological engineering from MIT and has also worked in biochemist David Baker’s lab on machine learning for protein design at the University of Washington in Seattle.

Recently, I caught up with Erika to understand why she’s excited about the opportunity to automate protein engineering.

Why is it important to encourage widespread use of automation in life science research?

Automation improves the reproducibility and scalability of life science. Today, it is difficult to transfer experiments between labs. This slows progress in the entire field, both among academics and from academia to industry. Automation allows new techniques to be shared frictionlessly, accelerating the broader availability of new techniques. It also allows us to make better use of our scientific workforce. Widespread automation in life science would shift time away from repetitive experiments and toward more creative, conceptual work, including designing experiments and carefully selecting the most important problems.

How did you get interested in the role that automation can play in the life sciences?

I started graduate school in biological engineering directly after working as a software engineer at Dropbox. I was shocked to learn that people use a drag-and-drop GUI to control laboratory automation rather than an actual programming language. It was clear to me that automation has the potential to massively accelerate life science research, and there’s a lot of low-hanging fruit.
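
To make the GUI-versus-code contrast concrete, here is a minimal, purely hypothetical sketch of a protocol expressed as a program; the `Protocol` and `Step` classes and the `serial_dilution` helper are illustrative stand-ins of my own devising, not any real vendor’s API:

```python
# Purely hypothetical sketch: a wet-lab protocol expressed as code rather than
# a drag-and-drop GUI. `Protocol`, `Step`, and `serial_dilution` are made-up
# stand-ins, not a real automation vendor's API.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str       # e.g. "transfer"
    source: str       # source well or reservoir
    dest: str         # destination well
    volume_ul: float  # volume in microliters

@dataclass
class Protocol:
    name: str
    steps: list[Step] = field(default_factory=list)

    def transfer(self, source: str, dest: str, volume_ul: float) -> None:
        self.steps.append(Step("transfer", source, dest, volume_ul))

def serial_dilution(p: Protocol, stock: str, wells: list[str], volume_ul: float) -> Protocol:
    """Two-fold serial dilution down a row of wells."""
    source = stock
    for well in wells:
        p.transfer(source, well, volume_ul / 2)  # carry half the volume forward
        source = well
    return p

# A 12-well dilution series is a loop, not 12 mouse clicks.
p = serial_dilution(Protocol("2x dilution"), "stock", [f"A{i}" for i in range(1, 13)], 100.0)
print(len(p.steps), "steps:", p.steps[0])
```

The point is not this particular API but that a protocol written this way can be version-controlled, reviewed, parameterized, and shared between labs in a way GUI click-streams cannot.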

Why is this the right time to encourage the adoption of automation?

The industrial revolution was 200 years ago, and yet people are still using hand pipettes. It’s insane! The hardware for doing life science robotically is quite mature at this point, and there are quite a few groups (Ginkgo, Strateos, Emerald Cloud Lab, Arctoris) that have automated robotic setups. Two barriers to widespread automation remain: the development of robust protocols that are well adapted to robotic execution, and overcoming cultural and institutional inertia.

What role could automation play in generating the data we need for machine learning?  What are the limitations of today’s publicly available data sets?

There are plenty of life science datasets available online, but unfortunately most of them are unusable for machine learning purposes. Datasets collected by individual labs are usually too small, and combining datasets between labs, or even among different experimentalists, is often a nightmare. Today, when two different people run the ‘same’ experiment, they will often get subtly different results. That’s a problem we need to systematically fix before we can collect big datasets. Automating and standardizing measurements is one promising strategy to address this challenge.
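
One way to see what “standardizing measurements” could mean in practice is a shared record format carrying the metadata needed to compare numbers across labs. The schema below is a sketch of my own devising, not an established community standard; every field name is an assumption:

```python
# Illustrative sketch of a standardized measurement record; the fields are
# assumptions for illustration, not an established community schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Measurement:
    sample_id: str          # which variant/sample was measured
    assay: str              # e.g. "fluorescence_endpoint"
    value: float            # raw instrument reading
    units: str              # explicit units, so values are comparable
    instrument_id: str      # which machine produced the number
    protocol_version: str   # the exact automated protocol that was run
    calibrant_value: float  # same-run calibration standard reading

    def normalized(self) -> float:
        """Report readings relative to an in-run calibrant so numbers from
        different instruments and labs land on a shared scale."""
        return self.value / self.calibrant_value

m = Measurement("variant_0042", "fluorescence_endpoint", 1834.0, "RFU",
                "reader-01", "dilution-protocol-1.3.0", 905.0)
print(f"{m.normalized():.2f}x calibrant")
```

Records like this only stay comparable if the protocol version and calibrant are captured automatically at run time, which is exactly what robotic execution makes easy.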

Why protein engineering?

The success of AlphaFold has highlighted for everyone the value of using machine learning to understand molecular biology. Methods for machine-learning-guided, closed-loop protein engineering are increasingly well developed, and automation makes it that much easier for scientists to benefit from these techniques. Protein engineering also benefits from “robotic brute force”: when you engineer any protein, it is always valuable to test more variants, so the discipline benefits uniquely from automation.
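
Here is a schematic of what one such closed loop can look like, with a random synthetic fitness function standing in for the robotic assay and a random forest standing in for the sequence model; everything in it is an illustrative assumption rather than a description of any particular lab’s pipeline:

```python
# Schematic design-build-test-learn loop for ML-guided protein engineering.
# The 20-dimensional vectors stand in for sequence encodings, and `assay`
# stands in for an automated wet-lab measurement; both are toy assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
D = 20
true_weights = rng.normal(size=D)  # hidden "fitness landscape"

def assay(x: np.ndarray) -> np.ndarray:
    """Pretend robotic assay: noisy fitness measurement for each variant."""
    return x @ true_weights + rng.normal(scale=0.1, size=len(x))

X = rng.normal(size=(32, D))  # initial variant library
y = assay(X)                  # first round of measurements

for rnd in range(5):
    # Learn from everything measured so far, then propose the next batch.
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    candidates = rng.normal(size=(1000, D))                          # proposed variants
    picks = candidates[np.argsort(model.predict(candidates))[-8:]]   # top 8 by prediction
    X = np.vstack([X, picks])                                        # "build and test" them
    y = np.concatenate([y, assay(picks)])
    print(f"round {rnd}: best observed fitness = {y.max():.2f}")
```

The batch size per round is where robotic brute force pays off: a loop like this gets strictly more informative as the number of variants tested per cycle grows.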

If it’s such a good idea, why haven’t academics done it in the past?

Cost and risk are the main barriers. What sorts of methods are valuable to automate and run remotely? Will automation be as valuable as expected? It’s a totally different research paradigm; what will it be like? Even assuming that an academic wants to go ahead and spend $300k for a year of access to a cloud laboratory, it is difficult to find a funding source. Very few labs have enough discretionary funds to cover this cost, equipment grants are unlikely to pay for cloud lab access, and it is not obvious whether the NIH or other traditional funders would look favorably on this sort of expense in the budget for an R01 or equivalent. Additionally, it is difficult to seek out funding without already having data demonstrating the utility of automation for a particular application. Altogether, there are just a lot of barriers to entry.

You’re starting this new program called the 2022 Bioautomation Challenge. How does the program eliminate those barriers?

This program is designed to allow academic labs to test out automation with little risk and at no cost. Groups are invited to submit proposals for methods they would like to automate. Selected proposals will be granted three months of cloud lab development time, plus a generous reagent budget. Groups that successfully automate their method will also be given transition funding so that they can continue to use their cloud lab method while applying for grants with their brand-new preliminary data. This way, labs don’t need to put in any money up front, and are able to decide whether they like the workflow and results of automation before finding long-term funding.

Historically, some investments in automation have been disappointing, like GM in the 1980s or Tesla in the 2010s. What can we learn from the experiences of other industries? Are there any risks?

For sure. I would say even “life science in the 2010s” is an example of disappointing automation: academic labs started buying automation robots, but it didn’t end up being the right paradigm to see the benefits. I see the 2022 Bioautomation Challenge as an experiment itself: we’re going to empower labs across the country to test out many different use cases for cloud labs to see what works and what doesn’t.

Where will funding for cloud lab access come from in the future?

Currently there’s a question as to whether traditional funding sources like the NIH would look favorably on cloud lab access in a budget. One of the goals of this program is to demonstrate the benefits of cloud science, which I hope will encourage traditional funders to support this research paradigm. In addition, the natural place to house cloud lab access in the academic ecosystem is at the university level. I expect that many universities may create cloud lab access programs, or upgrade their existing core facilities into cloud labs. In fact, it’s already happening: Carnegie Mellon recently announced they’re opening a local robotic facility that runs Emerald Cloud Lab’s software.

What role will biofabs and core facilities play?

In 10 years, I think the terms “biofab,” “core facility,” and “cloud lab” will all be synonymous. Today the only important difference is how experiments are specified: many core facilities still take orders through bespoke Google forms, whereas Emerald Cloud Lab has figured out how to expose a single programming interface for all their instruments. We’re implementing this program at Emerald because it’s important that all the labs that participate can talk to one another and share protocols, rather than each developing methods that can only run in their local biofab. Eventually, I think we’ll see standardization, and all the facilities will be capable of running any protocol for which they have the necessary instruments.

In addition to protein engineering, are there other areas in the life sciences that would benefit from cloud labs and large-scale, reliable data collection for machine learning?

I think there are many areas that would benefit. Areas that struggle with reproducibility, are manually repetitive and time-intensive, or benefit from closely integrating computational analysis with data are all good targets for automation. Microscopy and mammalian tissue culture are two other candidates. But there’s a lot of intellectual work for the community to do to articulate problems that can be solved with machine learning approaches, given the opportunity to collect the data.

Use of Microbial Forensics in the Middle East/North Africa Region

In this report, Christopher Bidwell, JD, and Randall Murch, PhD, explore the use of microbial forensics as a tool for creating a common baseline for understanding biologically triggered phenomena, as well as one that can promote mutual cooperation in addressing these phenomena. A particular focus is given to the Middle East/North Africa (MENA) region, as it has been forced to deal with multiple instances of both naturally occurring and man-made biological threats over the last 10 years. Although the institution of a microbial forensics capability in the MENA region (however robust) is still several years away, establishing the credibility of results offered by microbial forensic analysis performed by Western states, along with investments made today in workshops and training, can prepare the policy landscape for the day when the source of a biological attack, whether man-made or natural, needs to be accurately attributed.

A full PDF version of the report can be found here.

Use of Attribution and Forensic Science in Addressing Biological Weapon Threats: A Multi-Faceted Study

The threat from the manufacture, proliferation, and use of biological weapons (BW) is a high-priority concern for the U.S. Government. As reflected in U.S. Government policy statements and budget allocations, deterrence through attribution (“determining who is responsible and culpable”) is the primary policy tool for dealing with these threats. According to those policy statements, one of the foundational elements of an attribution determination is the use of forensic science techniques, namely microbial forensics. In this report, Christopher Bidwell, FAS Senior Fellow for Nonproliferation Law and Policy, and Kishan Bhatt, an FAS summer research intern and undergraduate student studying public policy and global health at Princeton University, look beyond the science aspect of forensics and examine how the legal, policy, law enforcement, medical response, business, and media communities interact in a bioweapons attribution environment. The report further examines how scientifically based conclusions require credibility in these communities in order to have relevance in the decision-making process about how to handle threats.

A full PDF version of the report can be found here.

“Zika has been sexually transmitted in Texas, CDC confirms” (CNN)

The first identified case of Zika virus acquired in the continental United States has been confirmed in Texas, contracted via sexual transmission. The CDC is expected to release guidelines on sexual transmission, though relatively little is known. While it has been established that the virus remains in the blood for roughly a week, its viability in semen has yet to be determined. Find out more about the latest research developments on Zika virus at CNN: http://www.cnn.com/2016/02/02/health/zika-virus-sexual-contact-texas/

“Florida, Illinois officials report travel-related Zika virus cases” (The Washington Post)

Hawaii, Illinois, Florida, and Texas have all recently reported travel-related cases of Zika virus, including two pregnant women who are being actively monitored. The virus has shown a strong association with fetal brain damage, but no treatment or vaccine is currently available. Last week, the CDC advised pregnant women to avoid traveling to countries where transmission of the virus has been reported. Read more at The Washington Post: https://www.washingtonpost.com/news/to-your-health/wp/2016/01/19/cdc-issues-guidelines-for-pregnant-women-returning-from-zika-affected-countries/

“Egregious safety failures at Army lab led to anthrax mistakes” (USA Today)

An investigation into the Army labs at Dugway Proving Ground in Utah, responsible for chemical and biological defensive testing, was launched last year after the facility was discovered to have been accidentally shipping live anthrax to laboratories across the country for over a decade. The report reveals gaps that go far beyond poor leadership, and a dozen personnel are being held accountable and could face disciplinary action as a result. To read more about the findings of the Army investigation report, visit USA Today: http://www.usatoday.com/story/news/nation/2016/01/15/military-bioterrorism-lab-safety/78752876/

“Biosecurity board grapples with how to rein in risky flu studies” (Science)

The National Science Advisory Board for Biosecurity met last week to discuss Gain of Function (GOF) studies. A topic of debate for the past several years, GOF studies involving H5N1 avian influenza, along with accidents at federal high-containment laboratories, led the U.S. government to declare a moratorium in 2014. To find out more about the meeting, including the concerns and recommendations of opponents and researchers, read the article published in Science: http://www.sciencemag.org/news/2016/01/biosecurity-board-grapples-how-rein-risky-flu-studies

“Suicide attack on Pakistan polio vaccination center kills 15” (Washington Post)

Fifteen people were killed and more were wounded by a small militant group in Quetta, Pakistan. The suicide bomber targeted a polio vaccination center as teams prepared for a three-day immunization campaign. A spokesman for the group claiming responsibility has warned of future attacks on polio teams. More information can be found at The Washington Post: https://www.washingtonpost.com/world/asia_pacific/police-14-killed-in-bomb-attack-on-polio-vaccination-center-in-southwestern-pakistan/2016/01/13/d27fafd0-b9b9-11e5-85cd-5ad59bc19432_story.html?wpmm=1&wpisrc=nl_daily202