The Department of Defense issued a new Directive last week establishing DoD policy for the development and use of autonomous weapons systems.
An autonomous weapon system is defined as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.”
The new Directive, DoD Directive 3000.09 dated November 21, 2012, establishes guidelines intended “to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.”
“Failures can result from a number of causes, including, but not limited to, human error, human-machine interaction failures, malfunctions, communications degradation, software coding errors, enemy cyber attacks or infiltration into the industrial supply chain, jamming, spoofing, decoys, other enemy countermeasures or actions, or unanticipated situations on the battlefield,” the Directive explains.
An “unintended engagement” resulting from such a failure means “the use of force resulting in damage to persons or objects that human operators did not intend to be the targets of U.S. military operations, including unacceptable levels of collateral damage beyond those consistent with the law of war, ROE [rules of engagement], and commander’s intent.”
The Department of Defense should “more aggressively use autonomy in military missions,” urged the Defense Science Board last summer in a report on “The Role of Autonomy in DoD Systems.”
Separately, the U.S. Army issued an updated Field Manual 3-36 on Electronic Warfare earlier this month.