Shayne Longpre is an AI researcher and PhD candidate at MIT. His research focuses on the data used to train foundation models, and on their societal impact and governance. He leads the Data Provenance Initiative, a research collective of 50+ volunteers passionate about tracing, demystifying, and improving the data used to train AI systems. He also led the open letter to protect independent AI safety research into proprietary models, encouraging companies to protect good-faith research with safe harbors. The letter was co-signed by 350+ researchers, journalists, and advocates in the field. His work has been covered by the New York Times, Washington Post, VentureBeat, and IEEE Spectrum.