ODNI Rethinks and Releases 2006 Intel Budget

11.01.10 | 2 min read | Text by Steven Aftergood

Last week, on the same day that the 2010 intelligence budget totals were revealed, the Office of the Director of National Intelligence also released another previously undisclosed intelligence budget figure — the 2006 budget appropriation for the National Intelligence Program.

“The aggregate amount appropriated to the NIP for fiscal year 2006 was $40.9 Billion,” wrote John F. Hackett (pdf), director of the ODNI Information Management Office.

This disclosure provides one more benchmark in the steady, sharp escalation of intelligence spending over the last decade.  (The NIP budgets in the subsequent years, 2007 through 2010, were $43.5 billion, $47.5 billion, $49.8 billion, and $53.1 billion.)

But what makes the new disclosure profoundly interesting and even inspiring is something else:  In 2008, Mr. Hackett determined (pdf) that disclosure of this exact same information could not be permitted because to do so would damage national security.  And just last year, ODNI emphatically affirmed that view on appeal.

“The size of the National Intelligence Program for Fiscal Year 2006 remains currently and properly classified,” wrote Gen. Ronald L. Burgess in a January 14, 2009 letter (pdf).  “In addition, the release of this information would reveal sensitive intelligence sources and methods.”

Yet upon reconsideration a year later, those ominous claims have evaporated.  In other words, ODNI has found it possible — when prompted by a suitable stimulus — to rethink its classification policy and to reach a new and opposite judgment.

This capacity for identifying, admitting (at least implicitly) and correcting classification errors is of the utmost importance.  Without it, there would be no hope for secrecy reform and no real place for public advocacy.  But as long as errors can be acknowledged and corrected, then all kinds of positive changes are possible.

The Obama Administration’s pending Fundamental Classification Guidance Review requires classifying agencies to seek out and eliminate obsolete classification requirements based on “the broadest possible range of perspectives” over the next two years.  If it fulfills its original conception, the Review will bring this latent, often dormant error correction capacity to bear on the classification system in a focused and consequential way.  There are always going to be classification errors, so there needs to be a robust, effective way to find and fix them.
