Richard Moulange is an AI-Biosecurity Fellow at the Centre for Long-Term Resilience and a PhD candidate in biomedical machine learning at the University of Cambridge. He was recently a Summer Research Fellow at the Centre for the Governance of AI, where he co-authored two papers: one on risk-benefit analysis for open-source AI and the other on responsible governance of biological design tools. His academic research focuses on out-of-distribution robustness for biomedical machine learning models. He earned his Bachelor's and Master's degrees from the University of Cambridge.
The landscape of biosecurity risks related to AI is complex and rapidly changing, and understanding the full range of issues requires diverse perspectives and expertise. Here are five promising ideas that reflect the diversity of challenges AI poses in the life sciences.