When Gen. Keith Alexander became the new director of the National Security Agency in 2005, “his predecessor, Mike Hayden, stepped down, seething with suspicion” toward Alexander.
As told by Fred Kaplan in his new book Dark Territory, Gen. Hayden and Gen. Alexander had clashed years before in a struggle “for turf and power, leaving Hayden with a bitter taste, a shudder of distrust, about every aspect and activity of the new man in charge.” The feeling was mutual.
The subject (and subtitle) of Kaplan’s book is “the secret history of cyber war.” But the most interesting secrets disclosed here have less to do with any classified missions or technologies than with the internal bureaucratic evolution of the military’s interest in cyber space. Who met with whom, who was appointed to what position, or even (as in the case of Hayden and Alexander) who may have hated whom all turn out to be quite important in the ongoing development of this contested domain.
Kaplan seems to have interviewed almost all of the major players and participants in this history, and he has an engaging story to tell. (Two contrasting reviews of Dark Territory in the New York Times are here and here.)
Meanwhile, the history of cyber war is becoming gradually less secret.
This week, the Department of Defense openly published an updated instruction on Cybersecurity Activities Support to DoD Information Network Operations (DoD Instruction 8530.01, March 7).
It replaces, incorporates, and cancels previous directives from 2001 that had been approved for restricted distribution only.