The rise in national security secrecy in the first year of the Obama Administration was matched by a sharp increase in the financial costs of the classification system, according to a new report to the President (pdf).
The estimated costs of the national security classification system grew by 15% last year to reach $10.17 billion, according to the Information Security Oversight Office (ISOO). It was the first time that annual secrecy costs in government were reported to exceed $10 billion.
An additional $1.25 billion was incurred within industry to protect classified information, for a grand total of $11.42 billion in classification-related costs, also a new record high.
The cost estimates, based on the classification-related activities of 41 executive branch agencies, were reported to the President by ISOO on April 29 and released yesterday. They include the estimated costs of personnel security (clearances), physical security, information systems security, as well as classification management and training — all of which increased last year.
Many factors contribute to the rise in secrecy costs, but one of them is widespread overclassification. Ironically, the new ISOO report provides a vivid illustration of the overclassification problem.
ISOO did not disclose security cost estimates for the large intelligence agencies — the Office of the Director of National Intelligence, the Central Intelligence Agency, the National Security Agency, the National Geospatial-Intelligence Agency, and the National Reconnaissance Office — because those costs are considered classified.
Secrecy News asked two security officials to articulate the damage to national security that could result from release of the security cost estimates for the intelligence agencies, but they were unable to do so. They said only that the classification of this information was consistent with intelligence community guidance. But this is a circular claim, not an explanation. The information is classified because somebody said it’s classified, not because it could demonstrably or even plausibly damage national security.
This kind of reflexive secrecy, which is characteristic of much of contemporary classification policy, would be stripped away if the Administration’s pending Fundamental Classification Guidance Review were properly and successfully implemented. That Review process is supposed to bring “the broadest possible range of perspectives” to bear on the question of exactly what information should be classified, according to an ISOO implementing directive (pdf). But so far there is no visible indication that the process is bearing fruit, or even that the Administration is seriously committed to it.
Last week Under Secretary of Defense for Intelligence Michael G. Vickers took time away from more urgent matters to sign a new memorandum (pdf) concerning implementation of the President’s 2009 executive order on classification. But the Vickers memorandum is incomplete, dealing only with “immediate” implementation issues, and it does not mention the Fundamental Classification Guidance Review at all.
The Information Security Oversight Office reported last month that the number of original classification decisions — or “new secrets” — that were generated by the Obama Administration in its first full year in office (FY 2010) was 224,734. That was a 22.6 percent increase over the year before.