DNI Directive Seeks to Tighten Protection of Intelligence
Director of National Intelligence James R. Clapper issued a directive earlier this month to improve the protection of intelligence information and to help prevent unauthorized disclosures.
The newly revised Intelligence Community Directive 700 requires a new degree of collaboration between counterintelligence and security activities. While counterintelligence (CI) was scarcely mentioned in the previous 2007 version of the policy on protecting intelligence, it is now elevated to a central role and integrated with security.
“Together, CI and security provide greater protection for national intelligence than either function operating alone,” the new directive states.
In order to combat the insider threat of unauthorized disclosures, the directive prescribes that “all personnel with access to national intelligence… shall be continually evaluated and monitored….”
But since there are more than a million government employees and contractors holding Top Secret clearances who are potentially eligible for access to intelligence information, it seems unlikely that any significant fraction of them can literally be “continually monitored.” Still, that is now formally the objective.
A copy of the June 7, 2012 directive on “Protection of National Intelligence” was released by the Office of the Director of National Intelligence under the Freedom of Information Act.
The new directive has been under development for at least several months. It was not specifically devised as a response to the latest controversy over leaks of classified information.
It serves as a reminder that the implementation of revised policies to address unauthorized disclosures of classified information (including congressional action just last year to establish an “insider threat detection program”) is ongoing, possibly obviating the need for new legislation.