The Department of Defense has issued a new Instruction defining its response to the “insider threat” from Department personnel who engage in unauthorized disclosures of information or other activities deemed harmful to national security.
The new Instruction assigns responsibilities and authorities for systematically detecting “anomalous” employee behavior that may be an indication of an insider threat.
An insider threat is defined as “A person with authorized access, who uses that access, wittingly or unwittingly, to harm national security interests or national security through unauthorized disclosure, data modification, espionage, terrorism, or kinetic actions resulting in loss or degradation of resources or capabilities.”
A subset of the insider threat is the counterintelligence (CI) insider threat, which refers to an authorized individual who uses his access on behalf of a “foreign intelligence entity.”
A foreign intelligence entity (FIE) is “Any known or suspected foreign organization, person, or group (public, private, or governmental) that conducts intelligence activities to acquire U.S. information, blocks or impairs U.S. intelligence collection, influences U.S. policy, or disrupts U.S. systems and programs.”
All heads of DoD components are now instructed to “implement CI insider threat initiatives to identify DoD-affiliated personnel suspected of or actually compromising DoD information on behalf of an FIE.”
All military departments are expected to “conduct anomaly-based detection activities.”
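The Instruction does not spell out what such detection entails, but anomaly-based detection generally means flagging activity that departs sharply from an established baseline. As a purely illustrative sketch (the function, data shapes, and threshold below are hypothetical, not drawn from DoD policy), a simple per-user baseline check over activity logs might look like this:

```python
import statistics

def flag_anomalies(daily_counts: dict[str, list[int]], threshold: float = 3.0) -> list[str]:
    """Flag users whose most recent activity deviates sharply from their own baseline.

    daily_counts maps a user ID to a history of daily file-access counts;
    a user is flagged when the latest count sits more than `threshold`
    standard deviations above that user's historical mean. (Illustrative
    only; real insider-threat tooling is far more involved.)
    """
    flagged = []
    for user, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough baseline to judge
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and (latest - mean) / stdev > threshold:
            flagged.append(user)
    return flagged

# Example: a user who normally accesses ~20 files a day suddenly accesses 400.
print(flag_anomalies({"analyst_a": [18, 22, 19, 21, 400]}))  # ['analyst_a']
```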
See “Countering Espionage, International Terrorism, and the Counterintelligence (CI) Insider Threat,” DoD Instruction 5240.26, May 4, 2012.
The new Instruction fulfills a congressional mandate in the FY2012 defense authorization act, which was enacted last year in response to the WikiLeaks disclosures.