At a Senate Armed Services Committee hearing yesterday on foreign cyber threats to the U.S., there were several references to the saying that “people who live in glass houses should not throw stones.” The point, made by DNI James Clapper, was that the U.S. should not be too quick to penalize others for the very espionage practices that U.S. intelligence agencies themselves rely upon, including clandestine collection of information from foreign computer networks.
But perhaps a more pertinent saying would be “It takes a thief to catch a thief.”
U.S. intelligence agencies should be well equipped to recognize Russian cyber threats and political intervention, since they have been tasked for decades with carrying out comparable efforts.
A newly disclosed intelligence directive from 1999 addresses “information operations” (IO), which are defined as: “Actions taken to affect adversary information and information systems while defending one’s own information and information systems.”
“Although still evolving, the fundamental concept of IO is to integrate different activities to affect [adversary] decision making processes, information systems, and supporting information infrastructures to achieve specific objectives.”
The elements of information operations may include computer network attack, computer network exploitation, and covert action.
See Director of Central Intelligence Directive 7/3, Information Operations and Intelligence Community Related Activities, effective 01 July 1999.
The directive was declassified (in part) on December 2 by the Interagency Security Classification Appeals Panel, and was first obtained and published by GovernmentAttic.org.