Last week, on the same day that the 2010 intelligence budget totals were revealed, the Office of the Director of National Intelligence also released another previously undisclosed intelligence budget figure — the 2006 budget appropriation for the National Intelligence Program.
“The aggregate amount appropriated to the NIP for fiscal year 2006 was $40.9 Billion,” wrote John F. Hackett (pdf), director of the ODNI Information Management Office.
This disclosure provides one more benchmark in the steady, sharp escalation of intelligence spending over the last decade. (The NIP budgets for the subsequent years, 2007 through 2010, were $43.5 billion, $47.5 billion, $49.8 billion, and $53.1 billion.)
But what makes the new disclosure profoundly interesting and even inspiring is something else: In 2008, Mr. Hackett determined (pdf) that disclosure of this exact same information could not be permitted because to do so would damage national security. And just last year, ODNI emphatically affirmed that view on appeal.
“The size of the National Intelligence Program for Fiscal Year 2006 remains currently and properly classified,” wrote Gen. Ronald L. Burgess in a January 14, 2009 letter (pdf). “In addition, the release of this information would reveal sensitive intelligence sources and methods.”
Yet upon reconsideration a year later, those ominous claims have evaporated. In other words, ODNI has found it possible — when prompted by a suitable stimulus — to rethink its classification policy and to reach a new and opposite judgment.
This capacity for identifying, admitting (at least implicitly) and correcting classification errors is of the utmost importance. Without it, there would be no hope for secrecy reform and no real place for public advocacy. But as long as errors can be acknowledged and corrected, then all kinds of positive changes are possible.
The Obama Administration’s pending Fundamental Classification Guidance Review requires classifying agencies to seek out and eliminate obsolete classification requirements based on “the broadest possible range of perspectives” over the next two years. If it fulfills its original conception, the Review will bring this latent, often dormant error correction capacity to bear on the classification system in a focused and consequential way. There are always going to be classification errors, so there needs to be a robust, effective way to find and fix them.