Crowdsourcing the Verification of Nuclear Safeguards

The possibility of mobilizing members of the public to collect information — “crowdsourcing” — to enhance verification of international nuclear safeguards is explored in a new report from Sandia National Laboratories.

“Our analysis indicates that there are ways for the IAEA [International Atomic Energy Agency] to utilize data from crowdsourcing activities to support safeguards verification,” the authors conclude.

But there are a variety of hurdles to overcome.

“Some implementations of crowdsourcing for safeguards are legally or ethically uncertain, and must be carefully considered prior to adoption.”

Crowdsourced information, like other information, has to be independently verified, particularly since it is susceptible to error, manipulation and deception.

The report builds on previous work cited in a bibliography. See Power of the People: A Technical, Ethical and Experimental Examination of the Use of Crowdsourcing to Support International Nuclear Safeguards Verification by Zoe N. Gastelum et al., Sandia National Laboratories, October 2017.

Monitoring Nuclear Testing is Getting Easier

The ability to detect a clandestine nuclear explosion in order to verify a ban on nuclear testing and to detect violations has improved dramatically in the past two decades.

There have been “technological and scientific revolutions in the fields of seismology, acoustics, and radionuclide sciences as they relate to nuclear explosion monitoring,” according to a new report published by Los Alamos National Laboratory that describes those developments.

“This document… reviews the accessible literature for four research areas: source physics (understanding signal generation), signal propagation (accounting for changes through physical media), sensors (recording the signals), and signal analysis (processing the signal).”

A “signal” here is a detectable, intelligible change in the seismic, acoustic, radiological or other environment that is attributable to a nuclear explosion.

The new Los Alamos report “is intended to help sustain the international conversation regarding the [Comprehensive Test Ban Treaty] and nuclear explosive testing moratoria while simultaneously acknowledging and celebrating research to date.”

“The primary audience for this document is the next generation of research scientists that will further improve nuclear explosion monitoring, and others interested in understanding the technical literature related to the nuclear explosion monitoring mission.”

See Trends in Nuclear Explosion Monitoring Research & Development — A Physics Perspective, Los Alamos National Laboratory, LA-UR-17-21274, June 2017.

“A ban on all nuclear tests is the oldest item on the nuclear arms control agenda,” the Congressional Research Service noted last year. “Three treaties that entered into force between 1963 and 1990 limit, but do not ban, such tests. In 1996, the United Nations General Assembly adopted the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which would ban all nuclear explosions. In 1997, President Clinton sent the CTBT to the Senate, which rejected it in October 1999.”

Seeking Verifiable Destruction of Nuclear Warheads

A longstanding conundrum surrounding efforts to negotiate reductions in nuclear arsenals is how to verify the physical destruction of nuclear warheads to the satisfaction of an opposing party without disclosing classified weapons design information. Now some potential new solutions to this challenge are emerging.

Based on tests that were conducted by the Atomic Energy Commission in the 1960s in a program known as Cloudgap, U.S. officials determined at that time that secure and verifiable weapon dismantlement through visual inspection, radiation detection or material assay was a difficult and possibly insurmountable problem.

“If the United States were to demonstrate the destruction of nuclear weapons in existing AEC facilities following the concept which was tested, many items of classified weapon design information would be revealed even at the lowest level of intrusion,” according to a 1969 report on Demonstrated Destruction of Nuclear Weapons.

But in a newly published paper, researchers said they had devised a method that should, in principle, resolve the conundrum.

“We present a mechanism in the form of an interactive proof system that can validate the structure and composition of an object, such as a nuclear warhead, to arbitrary precision without revealing either its structure or composition. We introduce a tomographic method that simultaneously resolves both the geometric and isotopic makeup of an object. We also introduce a method of protecting information using a provably secure cryptographic hash that does not rely on electronics or software. These techniques, when combined with a suitable protocol, constitute an interactive proof system that could reject hoax items and clear authentic warheads with excellent sensitivity in reasonably short measurement times,” the authors wrote.
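The hash the authors describe is physical, implemented in the measurement process itself without electronics or software. The underlying idea of a commitment that both hides a secret and binds its maker to it can, however, be illustrated with an ordinary digital hash. The sketch below is a toy analogue only, using SHA-256 and invented placeholder data, not the paper's method:

```python
import hashlib
import secrets

def commit(measurement: bytes) -> tuple[bytes, bytes]:
    """Host commits to a measurement without revealing it.
    Returns (salt, commitment); only the commitment is shared."""
    salt = secrets.token_bytes(16)
    commitment = hashlib.sha256(salt + measurement).digest()
    return salt, commitment

def verify(measurement: bytes, salt: bytes, commitment: bytes) -> bool:
    """Inspector checks that a later-revealed measurement matches
    the commitment made before the inspection began."""
    return hashlib.sha256(salt + measurement).digest() == commitment

# Host commits to a reference measurement of a known-genuine warhead.
salt, c = commit(b"reference-spectrum")

# The commitment alone reveals nothing about the measurement,
# but the host cannot later substitute a different one.
assert verify(b"reference-spectrum", salt, c)
assert not verify(b"hoax-spectrum", salt, c)
```

The hiding property protects classified design information; the binding property prevents the host from swapping in a hoax after the fact.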

See Physical cryptographic verification of nuclear warheads by R. Scott Kemp et al., Proceedings of the National Academy of Sciences, published online July 18.

More simply put, it’s “the first unspoofable warhead verification system for disarmament treaties — and it keeps weapon secrets too!” tweeted Kemp.

See also reporting in Science Magazine and New Scientist.

In another recent attempt to address the same problem, “we show the viability of a fundamentally new approach to nuclear warhead verification that incorporates a zero-knowledge protocol, which is designed in such a way that sensitive information is never measured and so does not need to be hidden. We interrogate submitted items with energetic neutrons, making, in effect, differential measurements of both neutron transmission and emission. Calculations for scenarios in which material is diverted from a test object show that a high degree of discrimination can be achieved while revealing zero information.”
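One way to make sensitive information "never measured" is for the inspector to secretly preload non-electronic detectors with the complement of the counts a genuine item would produce, so that a genuine item drives every detector to the same flat value while any diversion leaves a visible dip. The following is a noiseless toy with invented numbers, meant only to illustrate that logic (the actual protocol works with noisy statistical neutron counts):

```python
# Expected neutron-transmission counts per detector for a genuine
# item -- invented numbers, for illustration only.
expected = [40, 25, 60, 35]
MAX = 100  # each preloaded detector is "topped up" toward this total

# Inspector secretly preloads each detector with the complement,
# so a genuine item brings every detector to the same flat value.
preload = [MAX - e for e in expected]

def measure(item_counts):
    """Detector readings = secret preload + counts from the item."""
    return [p + c for p, c in zip(preload, item_counts)]

genuine = measure([40, 25, 60, 35])  # matches the expected pattern
hoax    = measure([40, 25, 10, 35])  # material diverted at position 2

print(genuine)  # [100, 100, 100, 100] -- flat: passes, reveals nothing
print(hoax)     # [100, 100, 50, 100]  -- the dip exposes the hoax
```

Because a passing measurement is flat by construction, the record contains no information about the item's actual composition: that is the zero-knowledge property.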

See A zero-knowledge protocol for nuclear warhead verification by Alexander Glaser et al., Nature, 26 June 2014.

But the technology of nuclear disarmament and the politics of it are two different things.

For its part, the U.S. Department of Defense appears to see little prospect for significant negotiated nuclear reductions. In fact, at least for planning purposes, the Pentagon foresees an increasingly nuclear-armed world in decades to come.

A new DoD study of the Joint Operational Environment in the year 2035 avers that:

“The foundation for U.S. survival in a world of nuclear states is the credible capability to hold other nuclear great powers at risk, which will be complicated by the emergence of more capable, survivable, and numerous competitor nuclear forces. Therefore, the future Joint Force must be prepared to conduct National Strategic Deterrence. This includes leveraging layered missile defenses to complicate adversary nuclear planning; fielding U.S. nuclear forces capable of threatening the leadership, military forces, and industrial and economic assets of potential adversaries; and demonstrating the readiness of these forces through exercises and other flexible deterrent operations.”

See The Joint Operating Environment (JOE) 2035: The Joint Force in a Contested and Disordered World, Joint Chiefs of Staff, 14 July 2016.

Crowd-Sourcing the Treaty Verification Problem

Verification of international treaties and arms control agreements such as the pending Iran nuclear deal has traditionally relied upon formal inspections backed by clandestine intelligence collection.

But today, the widespread availability of sensors of many types complemented by social media creates the possibility of crowd-sourcing the verification task using open source data.

“Never before has so much information and analysis been so widely and openly available. The opportunities for addressing future [treaty] monitoring challenges include the ability to track activity, materials and components in far more detail than previously, both for purposes of treaty verification and to counter WMD proliferation,” according to a recent study from the JASON defense science advisory panel. See Open and Crowd-Sourced Data for Treaty Verification, October 2014.

“The exponential increase in data volume and connectivity, and the relentless evolution toward inexpensive — therefore ubiquitous — sensors provide a rapidly changing landscape for monitoring and verifying international treaties,” the JASONs said.

Commercial satellite imagery, personal radiation detectors, seismic monitors, cell phone imagery and video, and other collection devices and systems combine to create the possibility of Public Treaty Monitoring, or PTM.

“The public unveiling and confirmation of the Natanz nuclear site in Iran was an important early example of PTM,” the report said. “In December 2002, the Institute for Science and International Security (ISIS), an independent policy institute in Washington, DC, released commercial satellite images of Natanz, and based on these images assessed it to be a gas-centrifuge plant for uranium isotope separation, which turned out to be correct.”

An earlier archetype of public arms control monitoring was the joint verification project initiated in the late 1980s by the Natural Resources Defense Council, the Federation of American Scientists and the Soviet Academy of Sciences to devise acceptable methods for verifying deep reductions in nuclear weapons. The NRDC actually installed seismic monitors around Soviet nuclear test sites.

In the 1990s, John Pike’s Public Eye project pioneered the use of declassified satellite images and newly available commercial imagery for public interest applications.

Recently, among many other examples, commercial satellite imagery has been used to track development of China’s submarine fleet, as discussed in this report by Hans Kristensen of FAS.

Jeffrey Lewis and his colleagues associated with the Arms Control Wonk website regularly use satellite imagery and related analytic tools to advance public understanding of international military programs.

“The reason that open source data gathering [for treaty verification] is an easier problem is that, increasingly, at least some of it will happen as a matter of course,” the JASONs said. “More and more people carry mobile phones, are connected to the Internet, and actively use social media. Even with no specific effort to create incentives or crowd-sourcing mechanisms there is likely to be a wealth of images and sensor data publicly and freely shared from practically every country and region in the world.”

The flood of open source data naturally does not solve all verification problems. Much of the data collected may be of poor quality, and some of it may be erroneous or even deliberately deceptive.

There are “enormous difficulties still to be faced and work to be done before claiming confidence in the reliability of data obtained from open sources,” the JASON report said.

“The validity of the original data is especially problematic with social media, in that eyewitnesses are notoriously unreliable (especially when reporting on unanticipated or extreme events) and there can be strong reinforcement of messages — whether true or not — when a topic ‘goes viral’ on the Internet.”

“In addition to such errors, taken here to be honest mistakes and limitations of sensors, there is the possibility of willful deception and spoofing of the raw data, including through conscious insertion or duplication of false information.”

“Key to validating the findings of open sources will be confirming the independence of two or more of the sources. Multi-reporting — ‘retweeting’ — or posting of the same information is no substitute for establishing credibility.”
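The distinction between raw repetition and independent confirmation can be made concrete. This sketch, using an invented sample of reports and a deliberately simplistic retweet heuristic, counts distinct originating observations rather than total mentions:

```python
# Invented sample of (source, content) reports. Retweets repeat the
# same content from different accounts but are not independent.
reports = [
    ("alice", "flash observed near site A"),
    ("bob",   "RT @alice: flash observed near site A"),
    ("carol", "RT @alice: flash observed near site A"),
    ("dave",  "seismic station logged event near site A"),
]

def strip_retweet(text: str) -> str:
    """Normalize away the retweet wrapper (simplistic heuristic)."""
    return text.split(": ", 1)[-1] if text.startswith("RT @") else text

# Count distinct originating observations, not raw repetitions.
independent = {strip_retweet(text) for _, text in reports}
print(len(reports), "reports, but only", len(independent),
      "independent observations")
```

Four reports reduce to two independent observations; credibility rests on the two, not the four.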

On the other hand, the frequent repetition of information “can provide an indication of importance or urgency… The occurrence of the 2008 Wenchuan earthquake was first noted in the United States as an anomalous increase in text-messaging.” Because the information moved at near light speed, it arrived before the seismic waves could reach US seismic stations.
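The Wenchuan example is, at bottom, an anomaly in a message rate. A minimal spike detector along those lines, with invented per-minute counts and an assumed five-sigma threshold, might look like:

```python
import statistics

# Invented per-minute text-message counts for a region; the jump at
# the end stands in for a Wenchuan-style anomaly.
rate = [120, 118, 125, 122, 119, 121, 117, 123, 480]

baseline = rate[:-1]
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

# Flag the newest reading if it sits far outside the baseline.
latest = rate[-1]
if latest > mean + 5 * sd:
    print(f"anomaly: {latest} msgs/min vs baseline {mean:.0f} +/- {sd:.1f}")
```

A real system would need to model daily cycles and regional variation, but the principle is the same: a sharp departure from the expected rate is itself a signal.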

“Commercial satellites have also provided valuable data for analysis of (non-)compliance with the Nuclear Non-Proliferation Treaty, and open sources proved valuable in detecting the use of chemical weapons in Aleppo, and in subsequent steps to remove such weapons from Syria. They also informed the world about the Russian troop movements and threats to Ukraine.”

While the US Government should take steps to promote and exploit open source data collection, the report said, it should do so cautiously. “It is crucial that citizens or groups not be put at risk by encouraging open-source activities that might be interpreted as espionage. The line between open source sensors and ‘spy gear’ is thin.”

In short, the JASON report concluded, “Rapid advances in technology have led to the global proliferation of inexpensive, networked sensors that are now providing significant new levels of societal transparency. As a result of the increase in quality, quantity, connectivity and availability of open information and crowd-sourced analysis, the landscape for verifying compliance with international treaties has been greatly broadened.”

*     *     *

Among its recommendations, the JASON report urged the government to “promote transparency and [data] validation by… keeping open-source information and analysis open to the maximum degree possible and appropriate.”

Within the U.S. intelligence community, such transparency has notably been embraced by the National Geospatial-Intelligence Agency under its director Robert Cardillo.

But the Central Intelligence Agency has chosen to move in the opposite direction by shutting off much of the limited public access to open source materials that previously existed. Generations of non-governmental analysts who were raised on products of the Foreign Broadcast Information Service now must turn elsewhere, since CIA terminated public access to the DNI Open Source Center in 2013.

FAS has asked the Obama Administration to restore and increase public access to open source analyses from the DNI Open Source Center as part of the forthcoming Open Government National Action Plan.

Verification Requirements for a Nuclear Agreement with Iran

Negotiations are currently underway with Iran regarding its nuclear program. As a result, one of the main questions for U.S. government policymakers is what monitoring and verification measures and tools will be required by the United States, its allies, and the International Atomic Energy Agency (IAEA) to ensure that Iran’s nuclear program remains peaceful.

To answer this question, the Federation of American Scientists (FAS) convened a non-partisan, independent task force to examine the technical and policy requirements for adequately verifying a comprehensive or other sustained nuclear agreement with Iran. The task force interviewed or met with more than 70 experts from a range of technical and policy disciplines and compiled the results in the new report, “Verification Requirements for a Nuclear Agreement with Iran.” Authored by task force leaders Christopher Bidwell, Orde Kittrie, John Lauder and Harvey Rishikof, the report outlines nine recommendations for U.S. policymakers relating to a successful monitoring and verification agreement with Iran. They are as follows:


Six Elements of an Effective Agreement

1. The agreement should require Iran to provide, prior to the next phase of sanctions relief, a comprehensive declaration that is correct and complete concerning all aspects of its nuclear program both current and past.

2. The agreement should provide the IAEA, for the duration of the agreement, access without delay to all sites, equipment, persons and documents requested by the IAEA, as currently required by UN Security Council Resolution 1929.

3. The agreement should provide that any material acts of non-cooperation with inspectors are a violation of the agreement.

4. The agreement should provide for the establishment of a consultative commission, which should be designed and operated in ways that maximize its effectiveness in addressing disputes and, if possible, in building a culture of compliance within Iran.

5. The agreement should provide that all Iranian acquisition of sensitive items for its post-agreement licit nuclear program, and all acquisition of sensitive items that could be used in a post-agreement illicit nuclear program, must take place through a designated transparent channel.

6. The agreement should include provisions designed to preclude Iran from outsourcing key parts of its nuclear weapons program to a foreign country such as North Korea.


Three Proposed U.S. Government Actions to Facilitate Effective Implementation of an Agreement

1. The U.S. Government should enhance its relevant monitoring capabilities, invest resources in monitoring the Iran agreement, and structure its assessment and reporting of any Iranian noncompliance so as to maximize the chances that significant anomalies will come to the fore and not be overlooked or considered de minimis.

2. The U.S. Government and its allies should maintain the current sanctions regime architecture so that it can be ratcheted up incrementally in order to deter and respond commensurately to any Iranian non-compliance with the agreement.

3. The U.S. Government should establish a joint congressional/executive branch commission to monitor compliance with the agreement, similar to Congress having created the Commission on Security and Cooperation in Europe to monitor the implementation of the 1975 Helsinki Accords.
