Congress has directed the Department of Defense to reach an “arrangement with the JASON scientific advisory group to conduct national security studies and analyses.”
Last spring, DoD officials sought to let the existing contract with the JASONs lapse, leaving the panel without a sponsor and threatening its continued viability. The new legislation rejects that move, although it anticipates that the JASON contract will now be managed by the DoD Under Secretary for Acquisition and Sustainment instead of by Defense Research and Engineering.
“The conferees expect the [new] arrangement or contract to be structured . . . similar to previous contracts for JASON research studies,” the NDAA conference report said.
The JASON panel is widely esteemed as a source of independent scientific expertise that is relatively free of institutional bias. Its reports often provide insight into challenging technological problems.
The FY2020 defense authorization bill calls for new JASON assessments of electronic warfare programs, and of options for replacement of the W78 warhead.
In 2019 the JASONs performed studies on Pit Aging (NNSA), Bio Threats (DOE), and Fundamental Research Security (NSF), among others.
Protecting the health and safety of the American public, and ensuring that the public has the opportunity to participate in the federal decision-making process, are crucial. As currently organized, federal advisory committees (FACs) are not equipped to provide the best evidence-based advice.
As new waves of AI technologies continue to enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence will help maintain high standards for responsible public sector AI for decades to come.
The Federation of American Scientists supports the Critical Materials Future Act and the Unearth Innovation Act.
By creating a reliable, user-friendly framework for surfacing provenance, NIST would empower readers to better discern the trustworthiness of the text they encounter, thereby helping to counteract the risks posed by deceptive AI-generated content.