Autonomy in Weapon Systems

11.26.12 | 1 min read | Text by Steven Aftergood

The Department of Defense issued a new Directive last week establishing DoD policy for the development and use of autonomous weapon systems.

An autonomous weapon system is defined as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.”

The new Directive, DoD Directive 3000.09 of November 21, establishes guidelines that are intended "to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements."

“Failures can result from a number of causes, including, but not limited to, human error, human-machine interaction failures, malfunctions, communications degradation, software coding errors, enemy cyber attacks or infiltration into the industrial supply chain, jamming, spoofing, decoys, other enemy countermeasures or actions, or unanticipated situations on the battlefield,” the Directive explains.

An “unintended engagement” resulting from such a failure means “the use of force resulting in damage to persons or objects that human operators did not intend to be the targets of U.S. military operations, including unacceptable levels of collateral damage beyond those consistent with the law of war, ROE [rules of engagement], and commander’s intent.”

The Department of Defense should “more aggressively use autonomy in military missions,” urged the Defense Science Board last summer in a report on “The Role of Autonomy in DoD Systems.”

The U.S. Army issued an updated Army Field Manual 3-36 on Electronic Warfare earlier this month.