Artificial Intelligence (AI) has gained momentum over the last six months and has become impossible to ignore. The ease of use of new tools such as AI-driven text and image generators has driven significant discussion about the appropriate use of AI. Congress has also started digging into AI governance, with discussion focused on a wide range of social consequences of AI, including potential biosecurity risks. To develop an overarching framework that addresses bio-related risks, it will be crucial for Congress, federal agencies, and non-governmental AI stakeholders to work together.
Bio Has Already Been Utilizing AI For Decades
Artificial intelligence has a long history in the life sciences; the principles are not new. Turing developed the idea in the 1950s, and by the turn of the century, bioinformaticians (data scientists for biological data) were already using AI in genome analysis. One focus of AI tools for biology has been proteins. Nearly every known function in the body relies on proteins, and their three-dimensional shapes determine their functions. Predicting the shape of a protein has long been a critical challenge. In 2020, Alphabet’s DeepMind published AlphaFold 2, an AI-enabled software tool capable of doing just that. While not perfect, scientists have used it and related tools to predict the shapes of proteins faster and even to design new proteins optimized for specific applications. Of course, the applications of AI in biotechnology extend beyond proteins. Medical researchers have taken advantage of AI to identify new biomarkers and to improve diagnostic tests. Industrial biotechnology researchers are exploring the use of AI to optimize biomanufacturing processes and improve yield. In other natural sciences, AI can already drive entire courses of experiments with minimal human input, and similar self-driving biological labs are in development. Unfortunately, these same tools and capabilities could also be misused by actors seeking to develop toxins, pathogens, and other potential bio risks.
Proposed Bio x AI Solutions Are Incomplete
Congress is looking for ways to reduce AI risks, beginning with social implications such as disinformation, employment decision-making, and other areas encountered by the general public. These are excellent starting points and echo some concerns raised abroad. Some Congressional action has also called for sweeping studies, new regulatory commissions, or broadly scoped risk management frameworks (see the AI Risk Management Framework developed by NIST). While some recently proposed bills address AI concerns in healthcare, there have been few solutions for reducing risks specifically related to the intersection of AI with biosciences and biotechnology.
The Biden Administration recently reached agreements with leading developers of AI models to implement risk mitigation measures, including ones related to biosecurity. However, all of the current oversight mechanisms for AI models are voluntary, which has generated discussion about how to provide incentives and whether a stronger approach is needed. As the availability of AI models increases and models specific to biosciences and biotechnology become more sophisticated, the question of how to establish enforceable rules and appropriate degrees of accountability, while minimizing collateral impact on innovation, will become more urgent.
Approaches to governance for AI’s intersections with biology must also be tailored to the needs of the scientific community. As AI-enabled biodesign tools drive understanding and innovation, they will also decrease hurdles for malicious actors seeking to do harm. At the same time, data sharing, collaboration, and transparency have long been critical to advances in biosciences. Restricting AI model development or access to data, models, or model outputs without hampering legitimate research and development will be challenging. Implementing guardrails for these tools should be done carefully and with a solid understanding of how they are used and how they might be misused. Key questions for oversight of AI in bio include:
- How can we implement oversight of current and future AI-enabled bio-related tools (e.g., AlphaFold 2) to mitigate the biosecurity risks associated with these technologies while advancing R&D innovation?
- Are there other ways to reduce the potential for misuse with these technologies?
- AI model training requires an immense amount of data, and AI models for biology will require many types of biology-specific datasets (e.g., protein structure databases, genomic sequence databases). How should we address scientists’ need to generate and access a wide range of datasets to train bio-related AI tools while balancing the potential for misuse of that data?
- How can bio-related AI or ML tools be applied to improve biosecurity more broadly?
- In the future, advances in biosciences and biotechnology are likely to become more automated (e.g., with AI-enabled self-driving labs). How can we best ensure that these capabilities are not misused?
Now, While the Policy Window is Open
Recently, the National Defense Authorization Act for Fiscal Year 2022 created the bipartisan, intergovernmental National Security Commission on Emerging Biotechnology (NSCEB). The NSCEB has been tasked with delivering an interim report by the end of 2023 and a final report by the end of 2024 with policy recommendations for Congressional action. One of the areas it is examining is the intersection of AI and the biosciences, specifically how AI can enable innovation in biosciences and biotechnology while mitigating risks.
The current attention on AI and the NSCEB’s upcoming interim report to Congress provide an important policy window and a call to action: stakeholder input is needed to create governance and policy recommendations that enable innovation while mitigating risks. If you are an AI or bio expert in academia, the biotech industry, an AI lab, or another non-governmental organization and are interested in contributing policy ideas, we have just launched our Bio x AI Policy Development Sprint here. Timely, well-considered policy recommendations that address the key questions listed above will lead to the best possible implementation of AI in biosciences and biotechnology.