The Pentagon should “monitor enemy activities in sleep research” says a newly disclosed report (pdf) from the elite defense science advisory panel known as JASON.
The JASONs were investigating the potential for U.S. adversaries “to exploit advances in Human Performance Modification, and thus create a threat to national security.”
Their report examined “the present state of the art in pharmaceutical intervention in cognition and in brain-computer interfaces, and considered how possible future developments might proceed and be used by adversaries.”
Among their findings was the underappreciated significance of sleep and the possibility of a “sleep gap” (a term not used in the report).
“The most immediate human performance factor in military effectiveness is degradation of performance under stressful conditions, particularly sleep deprivation.”
“If an opposing force had a significant sleep advantage, this would pose a serious threat.”
Fortunately, “the technical likelihood of such a development is small at present.” Just to be safe, however, the scientists recommended that the Pentagon “Monitor enemy activities in sleep research, and maintain close understanding of open source sleep research.”
In general, the JASONs went on to observe, “the publicity and scientific literature regarding human performance enhancement can easily be misinterpreted, yielding incorrect conclusions about potential military applications.”
See “Human Performance,” JASON, March 2008. Selected other reports from JASON are available here.