DoD Releases Directive on Information Operations
A 2006 Department of Defense directive on Information Operations (pdf), which had previously been withheld as “For Official Use Only,” was released last week in response to a Freedom of Information Act request from the Federation of American Scientists.
The directive, issued by the Under Secretary of Defense (Intelligence), assigns baseline responsibilities for the conduct of information operations, an umbrella term that includes electronic warfare, computer network operations, psychological operations, military deception, and operations security.
Among related capabilities, the directive cites “public affairs,” the purpose of which is “to communicate military objectives, counter misinformation and disinformation, deter adversary actions, and maintain the trust and confidence of the U.S. population, as well as our friends and allies. Effective military operations shall be based on credibility and shall not focus on directing or manipulating U.S. public actions or opinion.”
The New York Times reported on April 20 that the Pentagon had mobilized numerous former military officials, some with unacknowledged financial interests in Department programs, to help generate favorable news coverage of the Bush Administration's war policies. It is not clear (to me, at least) how this practice comports with the declared Pentagon policy on public affairs, i.e., whether it violates the policy or implements it.
See “Information Operations,” Department of Defense Directive O-3600.1, August 14, 2006.