Since 2010, the field of artificial intelligence (AI) has been “jolted” by the “broad and unforeseen successes” of one of its component technologies, known as multi-layer neural networks, leading to rapid developments and new applications, according to a new study from the JASON scientific advisory panel.
The JASON panel reviewed the current state of AI research and its potential use by the Department of Defense. See Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, JSR-16-Task-003, January 2017.
“AI technologies are of great importance to DoD missions. Defense systems and platforms with varying degrees of autonomy already exist. More importantly, AI is seen as the key enabling technology (along with human-computer interactions of various kinds) of a ‘Third Offset Strategy’ that seeks for the U.S. a unique, asymmetric advantage over near-peer adversaries.”
The JASON report distinguishes between artificial intelligence — referring to the ability of computers to perform particular tasks that humans do with their brains — and artificial general intelligence (AGI) — meaning a human-like ability to pursue long-term goals and exercise purposive behavior.
“Where AI is oriented around specific tasks, AGI seeks general cognitive abilities.” Recent progress in AI has not been matched by comparable advances in AGI. Sentient machines, let alone a revolt of robots against their creators, are still somewhere far over the horizon, and may be permanently in the realm of fiction.
While many existing DoD weapon systems “have some degree of ‘autonomy’ relying on the technologies of AI, they are in no sense a step, not even a small step, towards ‘autonomy’ in the sense of AGI, that is, the ability to set independent goals or intent,” the JASONs said.
“Indeed, the word ‘autonomy’ conflates two quite different meanings, one relating to ‘freedom of will or action’ (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word ‘automatic’. In using a terminology like ‘autonomous weapons’, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.”
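To illustrate the distinction (this example is ours, not the report’s; the function and its thresholds are hypothetical), a system can follow an arbitrarily elaborate sensor-to-action rule set and still be merely “automatic”:

```python
# Illustrative only (our sketch, not the report's): a hypothetical
# collision-avoidance rule set. However complex the rules and sensor
# inputs become, the system is "automatic" -- it follows rules written
# by its designers and sets no goals of its own.
def avoidance_command(altitude_m: float, intruder_range_m: float,
                      closing_speed_ms: float) -> str:
    """Map sensor readings to a maneuver command via fixed rules."""
    if intruder_range_m < 500 and closing_speed_ms > 0:
        # Imminent conflict: break away, direction chosen by altitude.
        return "climb" if altitude_m < 3000 else "descend"
    if intruder_range_m < 2000:
        return "alert_operator"
    return "hold_course"

print(avoidance_command(altitude_m=2500, intruder_range_m=400,
                        closing_speed_ms=12.0))   # -> "climb"
```

No matter how many such rules are layered on, nothing in the system forms an intent; that is the report’s point about the “automatic” sense of autonomy.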
Nevertheless, even if they are more “automated” than genuinely “autonomous,” many existing applications of artificial intelligence “have achieved performance that exceeds what humans typically do.”
And while artificial intelligence can have damaging or disruptive implications in other contexts, that may not be so in the military. “Displacement of soldiers’ jobs by technology is a benefit, not an economic or social harm,” the JASON report said.
Yet the path forward for DoD to adopt new AI applications is not simple or smooth. A major obstacle is the difficulty of validating those applications to establish their safety and reliability.
Current progress in AI “has not systematically addressed the engineering ‘ilities’: reliability, maintainability, debug-ability, evolvability, fragility, attackability, and so forth. Further, it is not clear that the existing AI paradigm is immediately amenable to any sort of software engineering validation and verification. This is a serious issue, and is a potential roadblock to DoD’s use of these modern AI systems, especially when considering the liability and accountability of using AI in lethal systems.”
Deep neural networks function in a manner which is “almost unknowably intricate, leading to failure modes for which — currently — there is very little human intuition, and even less established engineering practice.”
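To make that opacity concrete, here is a minimal sketch, not drawn from the JASON report: a toy randomly weighted network stands in for a trained model, and a small gradient-guided nudge to its input, in the spirit of published “adversarial example” techniques, flips its predicted class. The weights, layer sizes, and step size are illustrative assumptions.

```python
# Illustrative sketch only -- the weights are random, not a trained or
# fielded system; the point is how small a perturbation can change the
# output of even a tiny multi-layer network.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))   # hidden-layer weights (assumed)
W2 = rng.normal(size=(2, 16))   # output-layer weights (assumed)

def forward(x):
    """Return class scores and the gradient of their gap w.r.t. x."""
    pre = W1 @ x
    mask = (pre > 0).astype(float)          # ReLU on/off pattern
    scores = W2 @ (mask * pre)
    grad = ((W2[0] - W2[1]) * mask) @ W1    # d(score0 - score1)/dx
    return scores, grad

x = rng.normal(size=8)
scores, grad = forward(x)
label = int(np.argmax(scores))

# Nudge the input in small signed-gradient steps until the class flips.
x_adv, steps = x.copy(), 0
while int(np.argmax(scores)) == label and steps < 1000:
    x_adv -= 0.01 * np.sign(scores[0] - scores[1]) * np.sign(grad)
    scores, grad = forward(x_adv)
    steps += 1

print(f"class flipped: {int(np.argmax(scores)) != label} in {steps} steps")
print(f"largest change to any input coordinate: {np.max(np.abs(x_adv - x)):.3f}")
```

A human inspecting the original and perturbed inputs would struggle to say why the network’s answer changed, which is precisely the kind of failure mode that resists established verification practice.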
A recent report from the Defense Science Board on Unmanned Undersea Systems observed that “The ability to communicate periodically can reduce some of the technological requirements and risks associated with the autonomous subsystem that often slows the development and subsequent fielding of systems.” See Next-Generation Unmanned Undersea Systems, October 2016.
The JASONs recommended continued DoD investment in basic research on artificial intelligence, especially since, in all likelihood, “AI, both commercially derived and DoD-specific, will be integral to most future DoD systems and platforms.”
This week the Department of Defense announced the demonstration of swarms of “autonomous” micro-drones. “The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying, and self-healing,” according to a January 9 news release.
A journalistic account of recent breakthroughs in the use of artificial intelligence for machine translation appeared in the New York Times Magazine last month. See “The Great A.I. Awakening” by Gideon Lewis-Kraus, December 14, 2016.