The challenges posed by the use of “sensitive but unclassified” control markings were examined in a comprehensive new report (pdf) from the Government Accountability Office.
“The agencies that GAO reviewed are using 56 different sensitive but unclassified designations (16 of which belong to one agency) to protect information that they deem critical to their missions — for example, sensitive law or drug enforcement information or controlled nuclear information.”
“For most designations there are no governmentwide policies or procedures that describe the basis on which an agency should assign a given designation and ensure that it will be used consistently from one agency to another. Without such policies, each agency determines what designations and associated policies to apply to the sensitive information it develops or shares. More than half the agencies reported challenges in sharing such information.”
See “Information Sharing: The Federal Government Needs to Establish Policies and Processes for Sharing Terrorism-Related and Sensitive but Unclassified Information,” March 2006 (1.8 MB PDF).
The Office of the Director of National Intelligence (ODNI) is currently coordinating an effort to standardize governmentwide procedures for the handling of “sensitive but unclassified” information.
But the ODNI refused to cooperate with the GAO, asserting that “the review of intelligence activities is beyond the GAO’s purview,” according to Kathleen Turner of the ODNI Office of Legislative Affairs.
The Project on Government Oversight dissected the matter here. (Also flagged by Cryptome.)
See also “Report criticizes U.S. terror info sharing” by Shaun Waterman, United Press International, April 18.