Treasury Classification Guide, and Other Resources
The Department of the Treasury has recently produced a consolidated classification guide, detailing exactly what kinds of Treasury information may be classified at what level and for how long. It is in such agency classification guides, not in high-level government-wide policy statements, that the nuts and bolts of government secrecy policy are to be found, and perhaps to be changed. See “Security Classification Guide” (pdf), Department of the Treasury, December 2010.
The Congressional Research Service yesterday offered its assessment of the Stuxnet worm, which was evidently designed to damage industrial control systems such as those used in Iran’s nuclear program. See “The Stuxnet Computer Worm: Harbinger of an Emerging Warfare Capability” (pdf), December 9, 2010.
Intelligence historian Jeffrey Richelson has written what must be the definitive account of the rise and fall of the National Applications Office, the aborted Department of Homeland Security entity that was supposed to harness intelligence capabilities for domestic security and law enforcement applications. The article, which is not freely available online, is entitled “The Office That Never Was: The Failed Creation of the National Applications Office.” It appears in the International Journal of Intelligence and CounterIntelligence, vol. 24, no. 1, pp. 65-118 (2011).
The latest issue of the Journal of National Security Law & Policy (vol. 4, no. 2) is now available online. Entitled “Liberty, Terrorism and the Laws of War,” it includes several noteworthy papers on intelligence and security policy.
As new waves of AI technologies continue to enter the public sector, touching a breadth of services critical to the welfare of the American people, this center of excellence will help maintain high standards for responsible public sector AI for decades to come.
The Federation of American Scientists supports the Critical Materials Future Act and the Unearth Innovation Act.
By creating a reliable, user-friendly framework for surfacing provenance, NIST would empower readers to better discern the trustworthiness of the text they encounter, thereby helping to counteract the risks posed by deceptive AI-generated content.
By investing in the mechanisms that connect learning ecosystems, policymakers can build “neighborhoods” of learning that prepare students for citizenship, work, and life.