The False Hope of Nuclear Forensics? Assessing the Timeliness of Forensics Intelligence
Nuclear forensics is playing an increasing role in the conceptualization of U.S. deterrence strategy, having been formally integrated into policy in the 2006 National Strategy for Combating Terrorism (NSCT). This policy linked terrorist groups and state sponsors in terms of retaliation, and called for the development of “rapid identification of the source and perpetrator of an attack” through the bolstering of attribution via forensics capabilities.12 This indirect deterrence between terrorist groups and state sponsors was strengthened during the 2010 Nuclear Security Summit, when nuclear forensics expanded into the international realm and was included in the short list of priorities for bolstering state and international capacity. However, while governments and the international community have continued to invest in capabilities and databases for tracking and characterizing the elemental signatures of nuclear material, the question persists whether nuclear forensics can contribute actionable intelligence in a post-detonation situation quickly enough to be useful within the typical time frame for retaliation against terrorist acts.
In the wake of a major terrorist attack resulting in significant casualties, the pressure on a country to respond quickly as a show of strength is intense.3 Because of this, a country is likely to retaliate based on other intelligence sources, as the data from a fully completed forensics characterization would arrive beyond the time frame necessary for a country’s show of force. To highlight the need for a quick response, a quantitative analysis of responses to major terrorist attacks will be presented in the following pages. This timeline will then be compared to a prospective timeline for forensics intelligence. Fundamentally, this analysis makes it clear that in the wake of a major nuclear terrorist attack, the need to respond quickly will trump the time required to adequately conduct nuclear forensics and characterize the origins of the nuclear material. As there have been no instances of nuclear terrorism, scenarios involving chemical, biological, and radiological weapons will be used as a proxy for what would likely occur from a policy perspective in the event a nuclear device is used.
This article will examine the existing literature, outline arguments, review technical attributes,4 examine the history of retaliation to terrorism, and discuss conclusions and policy recommendations. This analysis finds that the effective intelligence period for nuclear forensics is not immediate, optimistically producing results in ideal conditions between 21 and 90 days, if at all. The 21-day duration is also based on pre-detonation conditions, and should be considered very, if not overly, optimistic. Further, the empirical data collected and analyzed suggest that the typical response to conventional terrorism was on average 22 days, with a median of 12 days, while terrorism that used chemical, biological, or radiological materials warranted a quicker response – an average of 19 days and a median of 10 days. Policy and technical obstacles would restrict the ability of nuclear forensics to successfully attribute the origin of a nuclear weapon following a terrorist attack before political demands would require assertive responses.
Literature
Discussions of nuclear forensics have increased in recent years. Non-technical scholarship has tended to focus on the ability of these processes to deter the use of nuclear weapons (in particular by terrorists) by eliminating the possibility of anonymity.5 Here, the deterrence framework is an indirect strategy, by which states signal guaranteed retribution for those who support the actions of an attacking nation or non-state actor. This approach requires the ability to provide credible evidence both as to the origin of the material and as to the political decision to transfer material to a non-state actor. As a result of insufficient data on the world’s plutonium and uranium supply, as well as on the historical record of the transit of material, nuclear forensics may not be able to provide stand-alone intelligence or evidence against a supplying country. However, scholars have largely assumed that the ‘smoking gun’ would be identifiable via nuclear forensics. Michael Miller, for example, argues that attribution would deter both state actors and terrorists from using nuclear weapons, as anyone responsible will be identified via nuclear forensics.6 Keir Lieber and Daryl Press have echoed this position by arguing that attribution is fundamentally guaranteed due to the small number of possible suppliers of nuclear material and the high attribution rate for major terrorist attacks.7 There is an important oversight from both a technical and policy perspective in these types of arguments, however.
First, the temporal component of nuclear forensics is largely ignored. The processes of forensics do not produce immediate results. While the length of time necessary to provide meaningful intelligence differs, it is unlikely that nuclear forensics will provide information as to the source of the device in the time frame required by policymakers, who in the wake of a terrorist attack will need to respond quickly and decisively. This is likely to decrease both the credibility of forensics information and its usefulness if the political demand requires a leader to act promptly.
Second, the existence and size of a black market for nuclear and radiological material is generally dismissed as a non-factor, as it is assumed that a complex weapon provided by a state with nuclear weapon capacity is necessary. While it is acknowledged that a full-scale nuclear device capable of being deployed on a delivery vehicle certainly requires advanced technical capacity that a terrorist organization would likely not have, a very crude weapon is possible. Devices such as a radiological dispersal device, a low-yield nuclear device, or even a failed (fizzle) nuclear weapon would still create a desirable outcome for a terrorist group, in that panic, death, and devastating economic and societal consequences would ensue. Further, black-market material could be the ideal method of weaponization, as its characterization and origin-tracing would prove nearly impossible due to the decoupling, and thus confusion, between perpetrator and originator.
It is evident that there is a gap between a robust technical understanding and arguments as to the viability and speed of nuclear forensics in providing actionable intelligence. This gap could lead to unrealistic expectations in times of crisis.
Technical Perspective
This section will outline the technologies, processes, and limitations of forensics in order to better inform its potential for contributing meaningful data in a crisis involving nuclear material. It should be noted that most open-source literature on the processes and capabilities of nuclear forensics comes from a pre-detonation perspective, as specifics on post-detonation procedures and timelines are classified.8 This has resulted in the technical difficulties and inherent uncertainties of conducting forensic operations in a post-detonation situation being ignored. The following will attempt to extrapolate the details of pre-detonation procedures into the post-detonation context in order to posit a potential time frame for intelligence retrieval.
Fundamentally, nuclear forensics is the analysis of nuclear or radiological material for the purposes of documenting the material’s characteristics and process history. With this information, and a database of material to compare the sample to, attribution of the origin of the material is possible.9 Following usage or attempted usage of a nuclear or radiological device, nuclear forensics would examine the known relationships between material characteristics and process history, seeking to correlate characterized material with known production history. While forensics encompasses the processes of analysis performed on recovered material, nuclear attribution is the process of identifying the source of nuclear or radioactive material, determining its point of origin and routes of transit, and ultimately contributing to the prosecution of those responsible.
Following a nuclear detonation, panic would likely prevail among the general populace and some first responders charged with helping those injured. Those tasked with collecting data from the site for forensic analysis would take time to deploy.10 While National Guard troops are able to respond to aid the population, specialized units are more dispersed throughout the country.11 Nuclear Emergency Support Teams, which would respond in the wake of a nuclear terror attack, are stationed at several of the national laboratories spread around the country. Depending on the location of the attack, response times may vary greatly. The responders’ first step would be to secure the site, as information required for attribution comes from both traditional forensics techniques (pictures, locating material, measurements, etc.) and the elemental forensics analysis of trace particles released from the detonation. At the site, responders would be able to determine almost immediately if it was indeed a full-scale nuclear detonation, a fizzle, or a radiological dispersal device. This is possible by assessing the level of damage and from the levels of radiation present, which can be determined with non-destructive assay techniques and dosimetry. Responders (through the use of gamma ray spectrometry and neutron detection) will be able to classify the type of material used if it is a nuclear device (plutonium versus uranium). With these factors assessed, radiation detectors would need to be deployed to carefully examine the blast site or fallout area to catalogue and extricate radioactive material for analysis. These materials would then need to be delivered to a laboratory capable of handling them.
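The on-site classification step described above (distinguishing uranium from plutonium via gamma-ray spectrometry) can be illustrated with a toy peak-matching sketch. The gamma-line energies below are published signatures of these isotopes, but the peak list, matching tolerance, and function names are hypothetical simplifications of what fielded instruments actually do.

```python
# Sketch: classifying fissile material from detected gamma-ray peaks.
# Line energies (keV) are published isotope signatures; the tolerance
# and the two-isotope library are illustrative assumptions.
SIGNATURE_LINES = {
    "U-235": [143.8, 185.7],   # characteristic gamma lines of U-235
    "Pu-239": [375.0, 413.7],  # characteristic gamma lines of Pu-239
}

def classify_material(detected_peaks_kev, tolerance=1.0):
    """Return isotopes whose known lines match the detected peaks."""
    matches = {}
    for isotope, lines in SIGNATURE_LINES.items():
        hits = sum(
            any(abs(peak - line) <= tolerance for peak in detected_peaks_kev)
            for line in lines
        )
        if hits:
            matches[isotope] = hits
    return matches

# A spectrum with peaks near both U-235 lines suggests uranium fuel.
print(classify_material([143.9, 185.6, 511.0]))  # {'U-235': 2}
```

In practice the spectral library would cover many more isotopes and account for detector resolution and background, but the include/exclude logic is the same.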
Once samples arrive at the laboratories, characterization of the material will be undertaken to provide a full elemental analysis (isotopic and phase) of the radioactive material, including major, minor, and trace constituents. The tools involved can be classified into bulk analysis, imaging techniques, and microanalysis. Bulk analysis would provide elemental and isotopic composition of the material as a whole, and would enable the identification of trace material that would need to be further analyzed. Imaging tools capture the spatial and textural heterogeneities that are vital to fully characterizing a sample. Finally, microanalysis examines more granularly the individual components of the bulk material.
The three-step process described above is critical to assessing the processes the material was exposed to and the origin of the material. The process, the tools used at each stage, and a rough sequencing of events are shown in Figure 1.12 This table, a working document produced by the IAEA, presents techniques and methods that would be used by forensics analysts as they proceed through the three-step process, from bulk analysis to microanalysis. Each column represents a time frame in which a tool of nuclear forensics could be utilized by analysts. However, this is a pre-detonation scenario. While it does present a close representation of what would happen post-detonation, some of the techniques listed below would be expected to take longer. This is due to several factors, such as the spread of the material, the vaporization of key items, and safety requirements for handling radioactive material. These processes take time and handle only small amounts of material at once, which would require a multitude of microanalyses across a variety of elements.

IAEA Suggested Sequence for Laboratory Techniques and Methods
It should also be noted that while nuclear forensics does employ developed best practices, it is not an exact science in which a single process can be undertaken to yield definitive results. Rather, it is an iterative process, by which a deductive method of hypothesis building, testing, and retesting is used to guide analysis and extract conclusions. Analysts build hypotheses based on the categorization of the material, test these hypotheses against the available forensics data and initiate further investigation, and then interpret the results to include or remove actors from consideration. This can take several iterations. As such, while best practices and proven science drive analysis, the experience and quality of the analyst in developing well-informed hypotheses that focus the investigation is critical to success. A visual representation of the process is seen in Figure 2 below.13

IAEA forensic analysis process
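The iterative include/exclude logic just described can be sketched in code. The candidate source profiles, the measured ratio, and the tolerance below are entirely hypothetical illustrations; real analysis compares many signatures against restricted databases over several rounds.

```python
# Sketch of one round of the iterative hypothesis-testing loop: candidates
# inconsistent with a measurement are removed, and the survivors seed the
# next round. All profiles and values here are hypothetical.
candidates = {
    "Source A": {"u235_u238": 0.045},  # hypothetical LEU producer
    "Source B": {"u235_u238": 0.930},  # hypothetical HEU producer
    "Source C": {"u235_u238": 0.007},  # hypothetical natural-uranium source
}

def iterate_hypotheses(measured, candidates, rel_tolerance=0.25):
    """Keep only candidates whose known signature is consistent with the
    measured isotope ratio, within a relative tolerance."""
    surviving = {}
    for name, profile in candidates.items():
        expected = profile["u235_u238"]
        if abs(measured - expected) <= rel_tolerance * expected:
            surviving[name] = profile
    return surviving

print(sorted(iterate_hypotheses(0.90, candidates)))  # ['Source B']
```

Each additional measured signature (trace impurities, age since purification, morphology) would run through the same filter, which is why the process can take several iterations before actors are conclusively included or removed.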
A net assessment by the Joint Working Group of the American Physical Society and the American Association for the Advancement of Science of the current status of nuclear forensics and the ability to successfully conduct attribution concluded that the technological expertise was progressing steadily, but that greater cooperation and integration were necessary between agencies.14 They also provided a simplified timeline of events following a nuclear attack, which is seen in Figure 3.15 Miller also provides a more nuanced breakdown of questions that would arise in a post-detonation situation; however, it is the opinion of the author that his table overstates technical capacity following a detonation and uses optimistic estimates for intelligence.16

Nuclear forensics activities following a detonation
Many of the processes that provide the most insight simply take time to configure, run, and rerun. Gas chromatography-mass spectrometry, for instance, is able to detect and measure trace organic compounds in a bulk sample, a very useful tool in attempting to identify potential origin via the varying organics present.17 However, when the material is spread far (mostly vaporized or highly radioactive), it can take time to configure and run successfully. Thermal Ionization Mass Spectrometry (TIMS) allows for the measurement of multiple isotopes simultaneously, enabling ratios between isotope levels to be assessed.18 While critically important, this process takes time, as each sample must be prepared and purified in either a chemical or acid solution.
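As a rough illustration of how isotope-ratio measurements feed attribution, the sketch below converts hypothetical TIMS ion counts into a U-235 enrichment estimate and bins it using the standard enrichment categories (LEU below 20 percent U-235, HEU at or above 20 percent). The counts and function names are invented for illustration; real TIMS analysis corrects for mass fractionation and covers minor isotopes as well.

```python
# Sketch: turning hypothetical TIMS ion counts into an enrichment estimate.
# Category thresholds follow the standard definitions (LEU < 20%, HEU >= 20%).
def enrichment_percent(counts_u235, counts_u238):
    """Atom-percent U-235, ignoring minor isotopes for simplicity."""
    return 100.0 * counts_u235 / (counts_u235 + counts_u238)

def categorize(pct):
    if pct < 1.0:
        return "natural/depleted uranium"
    if pct < 20.0:
        return "low enriched uranium (LEU)"
    return "highly enriched uranium (HEU)"

pct = enrichment_percent(counts_u235=9_000, counts_u238=1_000)
print(round(pct, 1), categorize(pct))  # 90.0 highly enriched uranium (HEU)
```

Knowing whether a sample is weapons-grade HEU or reactor-grade LEU immediately narrows the list of plausible producers, which is why such ratios are among the earliest useful outputs despite the slow sample preparation.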
With this broad perspective in mind, how long would it take for actionable intelligence to be produced by a nuclear forensics laboratory following the detonation of a nuclear weapon? While Figure 1 puts output being produced in as little as one week, this would be high-level information, able to eliminate possible origins but most likely unable to support a definitive conclusion. The estimates of Figure 3 (ranging from a week to months) are more likely, as the iterative process of hypothesis testing, and the obstacles leading up to the point at which the material arrives at the laboratory, would slow and hamper progress. Further, if the signatures of the material are not classified in a comprehensive database (though dispersed efforts to build one are underway), the difficulty in conclusively identifying a particular actor increases.19 As such, an estimate of weeks to months, as highlighted in Figures 4 and 5, is an appropriate time frame by which actionable intelligence would be available from nuclear forensics. The graphics below show the likely production times for definitive findings by the forensics processes and outline a zone of effective intelligence production. How does this align with the time frame of retribution?

Nuclear forensics timeline (author-created figure, compiled from above cited IAEA reports and AAAS report)

Effective Intelligence Zone (author-created figure, compiled from above cited IAEA reports and AAAS report.)
Retaliation Data
How quickly do policymakers act in the wake of a terrorist attack? This question is largely unexplored in the social science literature. However, it is critical to establishing a baseline period within which nuclear forensics would need to provide actionable intelligence following an attack. As such, the time to retaliation for major terrorist attacks will be examined to understand the time frame likely available to forensics analysts to contribute conclusions on materials recovered.
Major terrorist attacks were identified using the National Consortium for the Study of Terrorism and Responses to Terrorism (START) Global Terrorism Database.20 The database was queried for events that resulted in either 50 or more fatalities or over 100 injured. Also removed were cases occurring in Afghanistan after 2001 or in Iraq after 2003, because responses to terror attacks are indistinguishable in the data from normal operations of war. This yielded 269 observations between 1990 and 2004. Cases with immediate (same-day) responses were excluded, as this would indicate an ongoing armed conflict. Summary statistics for these data are as follows:
The identified terrorist events were then located in Gary King’s 10 Million Events data set21, which uses a proven data capture and classification method to catalogue events between 1990 and 2004. Government responses following the attack were then captured. Actions were restricted to only those where the government engaged the perpetrating group. This was done by capturing events classified as the following: missile attack, arrest, assassination, unconventional weapons, armed battle, bodily punishment, criminal arrests, human death, declare war, force used, artillery attack, hostage taking, torture, small arms attack, armed actions, suicide bombing, and vehicle bombing. This selection spans the spectrum of policy responses available to a country following a domestic terror attack that would demonstrate strength and resolve. Additionally, by utilizing a range of responses, it is possible to examine terrorism levied from domestic and international sources, thus enabling the consideration of both law enforcement and military actions. Speech acts, sanctions, and other policy actions that do not portray resolve and action were excluded, as they would typically occur within hours of an attack and would not be considered retaliation.
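The case-selection rules described above can be sketched in code. The example records and field names below are hypothetical stand-ins, not the actual schema of the Global Terrorism Database or the events data set.

```python
# Sketch of the case-selection rules: severity threshold, exclusion of
# wartime Afghanistan/Iraq cases, and exclusion of same-day responses.
# Records and field names are hypothetical.
events = [
    {"year": 2005, "country": "Iraq", "killed": 60, "wounded": 30, "response_days": 5},
    {"year": 1998, "country": "Kenya", "killed": 224, "wounded": 4000, "response_days": 13},
    {"year": 1995, "country": "Japan", "killed": 13, "wounded": 120, "response_days": 2},
]

def select_cases(events):
    kept = []
    for e in events:
        if not (e["killed"] >= 50 or e["wounded"] > 100):
            continue  # below the severity threshold
        if e["country"] == "Iraq" and e["year"] > 2003:
            continue  # wartime responses indistinguishable from operations
        if e["country"] == "Afghanistan" and e["year"] > 2001:
            continue
        if e["response_days"] == 0:
            continue  # same-day response implies ongoing armed conflict
        kept.append(e)
    return kept

print(len(select_cases(events)))  # 2 of the 3 toy records survive
```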
Undertaking this approach yielded retaliation dates for all observations. The summary statistics and a basic outline of response time by tier of casualties are as follows:
Table 2: Summary Statistics
Table 3: Casualties by Retaliation Quartile
Immediately, questions arise as to the relationship between retaliation time and destruction inflicted, as well as the time frame available to nuclear forensics analysts to provide intelligence before a response is required. With an average retaliation time of 22 days, most responses would occur before the one- to two-month time frame required for complete analysis. Further, a median retaliation time of 12 days would put most laboratory analysis outside the bounds of being able to provide meaningful data. Figure 6 further highlights this by illustrating that within 30 days of a terrorist attack, 80 percent of incidents will have been responded to with force.

Response Time
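The summary measures discussed above (average, median, and the share of attacks answered within 30 days) can be computed with a short sketch. The response times below are hypothetical stand-ins for the coded retaliation data, chosen only to show the calculation.

```python
# Sketch: summary statistics over retaliation times (in days).
# The list is hypothetical illustration data, not the coded dataset.
from statistics import mean, median

response_days = [3, 5, 8, 10, 12, 14, 20, 25, 28, 95]

avg = mean(response_days)       # analogous to the reported 22-day average
med = median(response_days)     # analogous to the reported 12-day median
within_30 = sum(d <= 30 for d in response_days) / len(response_days)

print(avg, med, within_30)
```

Note how a single slow outlier pulls the mean well above the median, which is one reason the article reports both: the median better reflects how quickly most states act.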
One of the fundamental graphics presented in the Lieber and Press article shows that as the number of casualties in a terror attack increases, the likelihood of attribution increases correspondingly. This weakens their arguments for two reasons. First, forensics following a conventional attack would have significantly more data available than in the case of a nuclear attack, due to the destructive nature of the latter and the inability of responders to access certain locales. Second, a country that is attacked via unconventional means could arguably require a more resolute and quicker response. In looking at the data, the overall average time to retaliation is 21.66 days. This number is significantly smaller when limited to unconventional weapons (19.04 days) and smaller still when the perpetrators are not clearly identified (18.8 days). This highlights the need for a distinction between unconventional and conventional attacks, which Lieber and Press neglected in their quantitative section.
To further highlight the point that nuclear forensics may not meet the political demands put upon it in a post-detonation situation, Table 4 highlights the disconnect in attribution between conventional and unconventional attacks. To reiterate, the term unconventional is used colloquially here as a substitute for CBRN weapons, not unconventional tactics. In only 37 percent of the cases observed was the threat a known entity or attributed after the fact, compared to 85 percent for conventional attacks. In all of these attacks, retaliation did occur, permitting the conclusion that, given the severity of an unconventional weapon and the extraordinary fear it is likely to produce, public outcry would prompt a response regardless of attribution.
As the use of a nuclear weapon would result in a large number of deaths, the question of whether higher levels of casualties influence response time is also of importance. However, no significant correlation is present between retaliation time and any of the other variables examined. Here, retaliation time (in days) was compared with binary variables for whether the perpetrators were known, whether the facility was a government building, whether the device used was a bomb, and whether an unconventional device was used. Scale variables included the number of fatalities, the number injured, and the total casualties from the attack. Of particular note is the negative correlation between unconventional attacks and effective attribution at the time of response; this reemphasizes the above point that attribution prior to retaliation is unnecessary following an unconventional attack.
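The correlation check described above can be sketched as a Pearson correlation between retaliation time and a binary unconventional-attack flag. The data below are hypothetical, so the sign and magnitude of r here illustrate the method only, not the article's actual results.

```python
# Sketch: Pearson's r between a scale variable (days to retaliation) and
# a binary flag (1 = CBRN attack). All data are hypothetical.
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

days = [12, 30, 8, 22, 10, 45, 9, 15]
unconventional = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = CBRN attack (toy coding)

r = pearson_r(days, unconventional)
print(round(r, 2))  # negative for this toy data: flagged cases respond faster
```

The same routine applies unchanged to the other pairings mentioned (fatalities, injuries, government-facility flag, and so on); significance testing would additionally require the sample size and a t-statistic.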
Assessment
From this review, the ability of nuclear forensics to provide rapid, actionable intelligence is unlikely. While it is acknowledged that the process would produce gains along the way, an effective zone of intelligence production can be assumed, optimistically, at between 21 and 90 days. This is highlighted in Figure 5 above, which aligns the effective zone with the processes that would likely provide definitive details. However, this does not align with the average (22 days) and median (12 days) response times for conventional attacks. More importantly, unconventional attack responses fall well before this effective zone, with an average of 19 days and a median of 10. While the effective intelligence zone is close to these averages, the author remains skeptical that the techniques to be performed would produce viable data in the shorter time frame, given the likely condition of the site and the length of time necessary for each run of each technique.22 This would seem to support the argument that the working timelines for actionable data lie outside the boundary of average retaliatory time. More examination is necessary to further narrow down the process times, a task plagued with difficulties due to material classification.
A secondary argument that can be made when thinking about unattributed terror attacks is that, even without complete attribution, a state will retaliate against a known terror, cult, or insurgent organization following an attack to show strength and deter further attacks. This was shown to be the case in 34 of 54 unattributed observations (63 percent). While this number is remarkably high, all of the states observed took decisive action against a group. This would tend to negate the perspective that forensics will matter following an attack, as a state will respond more decisively to unconventional attacks than conventional ones, whether attribution has been established or not.
There are strategic implications for indirect deterrent strategies as well. Indirect deterrence offers somewhat more flexibility in the timing of results, but less in the uncertainty of results, as certainty will be critical in levying claims of guilt against a third-party actor. Thus, nuclear forensics can be very useful, and perhaps even necessary, in indirect deterrent strategies if data are available to compare materials and a state is patient in waiting for the results; however, significant delays in intelligence or uncertainty in results may reduce the credibility of accusations and harm claims of guilt in the international context. From a strategic perspective, the emphasis in United States policy on rapid identification, discussed at the outset of this article, reflects optimism rather than reality.
Policy Recommendations
While nuclear forensics may not be able to contribute information quickly enough to guide policymakers in their retaliatory decision-making following terrorist attacks, it does have significant merit. Nuclear forensics will be able to rule actors out. It will be able to guide decisions for addressing the environmental disaster. Forensics also has significant political importance, as it can be used post hoc, following retaliation, to justify the action taken. It will also continue to be important in pre-detonation interdiction situations, where it has advanced and excelled to date, providing valuable information on the trafficking of illicit materials.
However, realistic expectations are necessary and should be made known so that policymakers are able to plan accordingly. The public will demand quick action, requiring officials to produce tangible results. If delay is not an option, attribution may not be possible. To overcome this, ensuring policymakers are aware of the technical limitations and hurdles present in conducting forensics analysis of radioactive material would help to manage expectations.
To reduce analytical time and improve attribution success rates, further steps should be taken. Continuing to enlarge the IAEA database on nuclear material signatures is critical, as this will reduce analytical time and uncertainty, making more precise attribution possible. Additional resources for equipment, building up analytical capacity, and furthering cooperation among all states to ensure that signatures are catalogued and accessible are also critical. The United States has taken great steps in improving the knowledge base on how nuclear forensics is conducted with fellowships and trainings available through the Department of Energy (DOE) and the Department of Homeland Security (DHS). While funding constraints are tight, expansion of these programs and targeted recruitment of highly qualified students and individuals is key. Perhaps these trainings and opportunities could be expanded to cover individuals who are trained to do analytical work but for whom it is not a primary tasking – like a National Guard for nuclear forensics. DOE and other agencies have similar programs for response capacity during emergencies; bolstering analytical capacity for rapid ramp-up in case of emergency would help to reduce analytical time. However, while these programs may reduce time, some of the delay is inherent in the science. Technological advances in analytics may help, but in the short term are unavailable. In sum, further work in developing the personnel and technological infrastructure for nuclear forensics is needed; in the meantime, prudence is necessary.
Philip Baxter is currently a PhD Candidate in the International Affairs, Science, and Technology program in the Sam Nunn School of International Affairs at Georgia Tech. He completed his BA in political science and history at Grove City College and an MA in public policy, focusing on national security policy, from George Mason University. Prior to joining the Sam Nunn School, Phil worked in international security related positions in the Washington, DC area, including serving as a researcher at the National Defense University and as a Nonproliferation Fellow at the National Nuclear Security Administration. His dissertation takes a network analysis approach in examining how scientific cooperation and tacit knowledge development impact proliferation latency. More broadly, his research interests focus on international security issues, including deterrence theory, strategic stability, illicit trafficking, U.S.-China-Russia relations, and nuclear safeguards.
Public Interest Report: May 2015
Mind the Empathy Gap
by Charles D. Ferguson
The Risk of Nuclear Winter
by Seth Baum
Since the early 1980s, the world has known that a large nuclear war could cause severe global environmental effects, including dramatic cooling of surface temperatures, declines in precipitation, and increased ultraviolet radiation. How severe would those consequences be? And what should the world be doing about it?
Dual Use Research: Is it Possible to Protect the Public Without Encroaching Rights?
by Tosin Fadeyi
With the continued growth of scientific knowledge and technological development, awareness of the risks associated with the misuse of scientific knowledge and new technology has continued to increase significantly – especially in microbiological research.
Who was Willy Higinbotham?
by Julie Schletter
A compilation of letters by Dr. William Higinbotham, a nuclear physicist who worked on the first nuclear bomb and served as the first chairman of FAS.
The False Hope of Nuclear Forensics? Assessing the Timeliness of Forensics Intelligence
by Philip Baxter
Nuclear forensics is playing an increasing role in the conceptualization of U.S. deterrence strategy, formally integrated into policy in the 2006 National Strategy on Combatting Terrorism (NSCT).
Naval Nuclear Propulsion: Assessing Benefits and Risks
The United States and other countries with nuclear navies have benefited from having nuclear-powered warships. But do the continued benefits depend on indefinite use of highly enriched uranium (HEU)—which can be made into nuclear weapons—as naval nuclear fuel? With budgetary constraints bearing down on the U.S. Department of Defense, the Naval Nuclear Propulsion Program is finding it difficult to address many competing needs including upgrading aging training facilities, handling spent nuclear fuel, and designing the next generation submarines to replace the Virginia-class attack submarines.
FAS convened an independent, nonpartisan task force of experts from the national security, nuclear engineering, nonproliferation and nuclear security fields to examine effective ways to monitor and safeguard HEU and LEU in the naval sector, and consider alternatives to HEU for naval propulsion so as to improve nuclear security and nonproliferation.
The results of the year-long task force study are compiled in the report, Naval Nuclear Propulsion: Assessing Benefits and Risks. The task force concluded that the U.S. Navy has strong incentives to maintain the continuing use of highly enriched uranium and would be reluctant, or even opposed, to shift to use of low enriched uranium unless the naval nuclear enterprise is fully funded and the Naval Nuclear Propulsion Program has adequate financial resources to try to develop a life-of-ship reactor fueled with LEU that would meet the Navy’s performance requirements. The task force endorses having the Obama administration and Congress allocate adequate funding for R&D on advanced LEU fuels no later than 2017 in time for development of the next generation nuclear attack submarine. “The United States should demonstrate leadership in working urgently to reduce the use in naval fuels of highly enriched uranium–that can power nuclear weapons–while addressing the national security needs of the nuclear navy to ensure that the navy can meet its performance requirements with lifetime reactors fueled with low enriched uranium,” said Dr. Charles D. Ferguson, Chair of the Independent Task Force and President of FAS.
Four companion papers written by task force members are also available:
- Investigation into the Unintended Consequences of Converting the U.S. Nuclear Naval Fleet from Highly Enriched Uranium (HEU) to Low Enriched Uranium (LEU) by Dr. Alireza Haghighat, Professor, Virginia Tech Transport Theory Group (VT3G), Nuclear Science and Engineering Laboratory (NSEL) Nuclear Engineering Program, Jack Bell, Graduate Research Assistant and Nathan Roskoff, Graduate Research Assistant.
- Phasing Out Highly Enriched Uranium Fuel in Naval Propulsion: Why It’s Necessary, and How to Achieve It by Dr. Alan J. Kuperman, Coordinator, Nuclear Proliferation Prevention Project and Associate Professor, LBJ School of Public Affairs, University of Texas at Austin.
- The UK Naval Nuclear Propulsion Programme and Highly Enriched Uranium by Dr. Nick Ritchie, University of York, UK.
- A Novel Framework for Safeguarding Naval Nuclear Material by Naomi Egel, Dr. Bethany L. Goldblum, & Erika Suzuki, University of California, Berkeley.
Naval Nuclear Propulsion: Assessing Benefits and Risks can be read and downloaded here (PDF).
The task force members thank the John D. and Catherine T. MacArthur Foundation for its generous support of this project.
Seeking China-U.S. Strategic Nuclear Stability
“To destroy the other, you have to destroy part of yourself. To deter the other, you have to deter yourself,” according to a Chinese nuclear strategy expert. During the week of February 9th, I had the privilege to travel to China, where I heard this statement during the Ninth China-U.S. Dialogue on Strategic Nuclear Dynamics in Beijing. The Dialogue was jointly convened by the China Foundation for International Strategic Studies (CFISS) and the Pacific Forum Center for Strategic and International Studies (CSIS). While the statements by participants were not-for-attribution, I can state that the person quoted is a senior official with extensive experience in China’s strategic nuclear planning.
The main reason for my research travel was to work with Bruce MacDonald, FAS Adjunct Senior Fellow for National Security Technology, on a project examining the security implications of a possible Chinese deployment of strategic ballistic missile defense. We had discussions with more than a dozen Chinese nuclear strategists in Beijing and Shanghai; we will publish a full report on our findings and analysis this summer. FAS plans to continue further work on projects concerning China-U.S. strategic relations as well as understanding how our two countries can cooperate on the challenges of providing adequate healthy food, near-zero emission energy sources, and unpolluted air and water.
During the discussions, I was struck by the gap between American and Chinese perspectives. As the quote indicates, Chinese strategic thinkers appear deeply reluctant to use nuclear weapons and underscore the moral and psychological dimensions of nuclear strategy. Nonetheless, China’s leaders clearly perceive the need for such weapons for deterrence purposes. Perhaps the biggest gap in perception is that American nuclear strategists tend to remain skeptical about China’s policy of no-first-use (NFU) of nuclear weapons. Under the NFU policy, China would not launch nuclear weapons first against the United States or any other state. China therefore needs assurances that it would have enough nuclear weapons available for a second, retaliatory strike in the unlikely event of a nuclear attack by another state.
American experts are doubtful about NFU statements because during the Cold War the Soviet Union repeatedly stated that it had a NFU policy, but once the Cold War ended and access was obtained to the Soviets’ plans, the United States found out that the Soviets had lied. They had plans to use nuclear weapons first under certain circumstances. Today, given Russia’s relative conventional military inferiority compared to the United States, Moscow has openly declared that it has a first-use policy to deter massive conventional attack.
Can NFU be demonstrated? Some analysts have argued that China, in its practice of keeping warheads de-mated from their missile delivery systems, has in effect placed itself in a second-strike posture. But the worry on the American side is that such a posture could change quickly, and that as China modernizes its missile force from slow-firing liquid-fueled rockets to quick-firing solid-fueled rockets, it will be capable of shifting to a first-use policy if security conditions dictate such a change.
The more I talked with Chinese experts in Beijing and Shanghai the more I felt that they are sincere about China’s NFU policy. A clearer and fuller exposition came from a leading expert in Shanghai who said that China has a two-pillar strategy. First, China believes in realism in that it has to take appropriate steps in a semi-anarchic geopolitical system to defend itself. It cannot rely on others for outside assistance or deterrence. Indeed, one of the major differences between China and the United States is that China is not part of a formal defense alliance pact such as the North Atlantic Treaty Organization (NATO) or the alliance the United States has with Japan and South Korea. Although in the 1950s, Chairman Mao Zedong decried nuclear weapons as “paper tigers,” he decided that the People’s Republic of China must acquire them given the threats China faced when U.S. General Douglas MacArthur suggested possible use of nuclear weapons against China during the Korean War. In October 1964, China detonated its first nuclear explosive device and at the same time declared its NFU policy.
The second pillar is based on morality. Chinese strategists understand the moral dilemma of nuclear deterrence. On the one hand, a nuclear-armed state has to show a credible willingness to launch nuclear weapons to deter the other’s launch. But on the other hand, if deterrence fails, actually carrying out the threat condemns millions to die. According to the Chinese nuclear expert, China would not retaliate immediately and instead would offer a peace deal to avert further escalation to more massive destruction. As long as China has an assured second strike, which might consist of only a handful of nuclear weapons that could hit the nuclear attacker’s territory, Beijing could wait hours to days before retaliating or not striking back in order to give adequate time for cooling off and stopping of hostilities.
Because China has not promised to provide extended nuclear deterrence to other states, Chinese leaders would also not feel compelled to strike back quickly to defend such states. In contrast, because of U.S. deterrence commitments to NATO, Japan, South Korea, and Australia, Washington would feel pressure to respond quickly if it or its allies are under nuclear attack. Indeed, at the Dialogue, Chinese experts often brought up the U.S. alliances and especially pointed to Japan as a concern, as Japan could use its relatively large stockpile of about nine metric tons of reactor-grade plutonium (which is still weapons-usable) to make nuclear explosives. Moreover, last July, the administration of Japanese Prime Minister Shinzo Abe announced a “reinterpretation” of the Article 9 restriction in the Japanese Constitution, which prohibits Japan from having an offensive military. (The United States imposed this restriction after the Second World War.) The reinterpretation allows Japanese Self-Defense Forces to serve alongside allies during military actions. Beijing is opposed because then Japan is just one step away from further changing to a more aggressive policy that could permit Japan to act alone in taking military actions. Before and during the Second World War, Japanese military forces committed numerous atrocities against Chinese civilians. Chinese strategists fear that Japan is seeking to further break out of its restraints.
Thus, Chinese strategists want clarity about Japan’s intentions and want to know how the evolving U.S.-Japan alliance could affect Chinese interests. Japan and the United States have strong concerns about China’s growing assertive actions near the disputed Diaoyu Islands (Chinese name) or Senkaku Islands (Japanese name) between China and Japan, and competing claims for territory in the South China Sea. Regarding nuclear forces, some Chinese experts speculate about the conditions that could lead to Japan’s development of nuclear weapons. The need is clear for continuing dialogue on the triangular relationship among China, Japan, and the United States.
Several Chinese strategists perceive a disparity in U.S. nuclear policy toward China. They want to know whether the United States will treat China as a major nuclear power to be deterred or as a big “rogue” state with nuclear weapons. U.S. experts have tried to assure their Chinese counterparts that the strategic reality is the former. The Chinese experts also see that the United States has more than ten times as many deliverable nuclear weapons as China. But they hear from some conservative American experts that the United States fears China might “sprint for parity” to match the U.S. nuclear arsenal if the United States further reduces to 1,000 or somewhat fewer weapons.1 According to the FAS Nuclear Information Project, China is estimated to have about 250 warheads in its stockpile for delivery.2 Chinese experts also hear from the Obama administration that it wants someday to achieve a nuclear-weapon-free world. The transition from where the world is today to that future is fraught with challenges, one of them being the mathematical fact that to get to zero or close to zero, nuclear-armed states will eventually have to reach parity with each other.
Look to Texas Rather Than Nevada for a Site Selection Process on Nuclear Waste Disposal
Republican gains in the 2014 midterm elections have refocused attention on a number of policy areas–including nuclear waste storage. Although President Obama has consistently championed nuclear power by providing federal loan guarantees for new reactors and placing nuclear power among the “clean energy” sources targeted for an 80 percent share of the nation’s electricity production by 2035, he has also placed the viability of nuclear power in doubt by thwarting efforts to build a high-level radioactive waste repository at Yucca Mountain, Nevada. Several newspapers around the country have run editorials arguing that the Yucca Mountain project ought to be revived or even, as the Chicago Tribune suggested, “fast-tracked.” Arguments like these emphasize the risks of our current interim storage of spent fuel at more than one hundred power plants near population centers throughout the country, the disposal capacity the federal government owes to utilities and to contaminated legacy sites like those in South Carolina and Washington State, and the amount of research and spending already devoted to investigating the suitability of the Yucca Mountain site.
However, it is unlikely that Yucca Mountain will ever receive shipments of nuclear waste. Nevada’s persistent and successful efforts to thwart the Yucca Mountain project and the Nuclear Waste Policy Act of 1982 are likely to continue as they demonstrate the futility of a policy that forces disposal on an unwilling host state. Three years ago the Blue Ribbon Commission on America’s Nuclear Future said as much, recommending instead a “consent-based” approach to siting nuclear waste storage and disposal facilities. How would such an approach work?
For the past three years, Texas has been accepting what so many other states and localities have rejected in past decades: radioactive waste from the nation’s nuclear power plants. A newly opened private facility operated by Waste Control Specialists in Andrews County, Texas has been receiving shipments of low-level radioactive waste from multiple states. This year, the Texas Commission on Environmental Quality amended the license for the Andrews County site to more than triple its capacity, and the site can now begin accepting “Greater Than Class C” waste, the most highly radioactive material in the low-level radioactive waste stream, as well as depleted uranium. Residents and elected officials in Andrews County are now considering whether or not to support a proposal for a high-level radioactive waste disposal facility.
We should take a closer look at past developments in Nevada and more recent decisions in Texas to guide our future nuclear waste policy. These two states are engaging with different aspects of the nuclear waste stream, governed by very different policy approaches. Nevada’s efforts to thwart the Yucca Mountain project are rooted in the coercive approach codified in the Nuclear Waste Policy Act of 1982. In contrast, the willingness of Texas to establish new disposal capacity stems from the Low-level Radioactive Waste Policy Act of 1980—a law that expanded the authority of states hosting disposal sites in an effort to overcome state opposition to waste sites in the midst of an urgent shortage of disposal capacity.
First, let’s consider the troublesome politics that has infused the Nevada case. The Nuclear Waste Policy Act of 1982 established a scientific site selection process for an eastern and western waste repository. However, President Reagan abandoned this process in 1986 by halting the search for an eastern site amid fears of midterm election losses in potential host states of Wisconsin, Georgia and North Carolina. In 1987, Congress abandoned the search for a western site when House Speaker Jim Wright (D-TX), and House Majority Leader Tom Foley (D-WA), amended the law to remove Texas and Washington from consideration. The amended law became known as the “Screw Nevada” plan because it designated Yucca Mountain as the sole site for the waste repository.
While politics effectively trumped science in the selection of Yucca Mountain, opponents, led by Senator Harry Reid of Nevada, have employed politics just as effectively to thwart the project. In 2005, Reid placed 175 holds on President Bush’s nominations for various executive appointments until Bush finally nominated Reid’s own science advisor, Gregory Jaczko, to the Nuclear Regulatory Commission (NRC). In 2006 Reid persuaded the Democratic National Committee to move the Nevada caucuses to the front of the 2008 presidential primary calendar, prompting each candidate to oppose Yucca Mountain. President Obama fulfilled his campaign promise by tapping Jaczko to chair the NRC and dismantling Yucca Mountain. Each year the President’s budget proposals zeroed out funding for the facility, the NRC defunded the license review process, and the Department of Energy continued to mothball the project. Although court decisions have forced the administration to resume reviewing the project, progress has been slow; in the meantime the Yucca facility offices have been shuttered, the workforce eliminated, and the computers, equipment, and vehicles surplused. Jaczko was forced to resign amid concern from other NRC members that his management style thwarted decision-making processes. However, Jaczko’s chief counsel, Stephen Burns, was sworn in as a commissioner of the NRC on November 5, 2014.
We should expect, accept, and plan for such political maneuvering. Our system of locally accountable representatives empowers individual office holders with a wealth of substantive and procedural tools that make all nuclear politics local. Any decision making on this issue will be a political contest to locate or avoid the waste. Consequently, if there is to be a politically feasible nuclear waste repository, it will require a willing host. Money and the promise of jobs alone have not proven alluring enough for acceptance of such a project. We would do better to embrace our decentralized politics and offer the host significant authority over the waste stream.
This is the situation that Texas now enjoys: Congress gave states responsibility for establishing low-level radioactive waste sites and, as an incentive, enabled states to join interstate compacts. Once approved by Congress, a compact has the authority to accept or decline waste imports from other states, a power normally denied to states because it would violate the interstate commerce clause of the U.S. Constitution. Texas is in a compact with Vermont, and as host state, Texas shapes the waste market by determining disposal availability for other states. Texas also has authority to set fees, taxes, and regulations for disposal in collaboration with federal agencies. Compacts can dissolve, and host states can cease accepting waste altogether at a future date. While even under these provisions most states will refuse to host radioactive waste, the extension of state authority at least courts the possibility, as in Texas, of the rare case that combines an enthusiastic local host community in a relatively suitable location, a supportive state government, and a lack of opposition from neighboring communities and states. This approach better meets our democratic expectations because it confronts the local, state, and national politics openly and directly, courting agreement at each level and extending authority over the waste stream to the unit of government bearing responsibility for long-term disposal within its borders.
What if we adopt this approach and there is no willing host for spent fuel at a technically suitable site? What if a site is established, but at some future date the host state and compact exercise their authority to refuse importation, or the compact dissolves altogether? We would be left with interim onsite storage, the same result our current, predictably failed policy approach has left us with. If there is no willing host, or if long-term disposal is less certain because of the host’s authority over the waste stream, we also gain authentic and valuable feedback on societal support for nuclear energy. That is, our willingness to provide for waste disposal in a process compatible with our democratic norms and decentralized political system should influence our decisions on nuclear energy production and waste generation.
Reflections on the 70th Anniversary of the Manhattan Project: Questions and Answers
I began my professional life by obtaining degrees in physics and entering a conventional academic career in teaching and astronomical research, but I had always been curious about the physics of the Manhattan Project and its role in ending World War II. With grants, publications and tenure established, I began to indulge this interest as a legitimate part of my work and about 20 years ago, to explore it in depth.
As anybody who comes to this topic in more than a casual way will attest, it can grow into an obsession. I have now published two books on the Project, well over two dozen articles and book reviews in technical, historical, and semi-popular journals, and have made a number of presentations at professional conferences. Over this time I must have looked at thousands of archived documents and held hundreds of real and electronic conversations with other scientists, historians, and writers whose interest in this pivotal event parallels my own. While my knowledge of the Project is certainly not and never will be complete, I have learned much about it over the last 20 years.
To my surprise (and pleasure) I am frequently asked questions about the Project by students, family members, guests at dinner parties, colleagues at American Physical Society meetings, and even casual acquaintances at my favorite coffee shop. Typical queries are:
“Why did we drop the bombs? Were they necessary to end the war?”
“Did President Truman and his advisors really understand the power of the bombs and the destruction they could cause?”
“Have nuclear weapons helped deter subsequent large-scale wars, and do we still need a deterrent?”
“What about the ethical aspects?”
“In studying the Manhattan Project, what most surprised you? Do you think it or something similar could be done now?”
At first I was awkward in trying to answer these questions but with passing years, increased knowledge, and much reflection I now feel more comfortable addressing them. With accumulating experience in a scientific career, you often learn that the questions you and others initially thought to be important may not be the ones that the facts address and that there may be much more interesting issues behind the obvious ones. In this spirit, I offer in this essay some very personal reflections on the Project and the legacies of Hiroshima and Nagasaki, framed as responses to questions like those above. In some cases a “yes” or “no” along with an explanation will do, but for many issues the nuances involved obviate a simple response.
I begin with the issue of the “decision” to use the bomb and the state of President Truman’s knowledge. In the spring of 1945, Secretary of War Henry Stimson assembled a committee to consider and advise upon immediate and long-term aspects of atomic energy. This “Interim Committee” comprised eight civilians, including three scientists intimately familiar with the Manhattan Project: Vannevar Bush, James Conant, and Karl Compton. In a meeting on May 31 which was attended by Army Chief of Staff General George C. Marshall, Stimson opened with a statement as to how he viewed the significance of the Project:1
The Secretary expressed the view, a view shared by General Marshall, that this project should not be considered simply in terms of military weapons, but as a new relationship of man to the universe. This discovery might be compared to the discoveries of the Copernican theory and of the laws of gravity, but far more important than these in its effect on the lives of men. While the advances in the field to date had been fostered by the needs of war, it was important to realize that the implications of the project went far beyond the needs of the present war. It must be controlled if possible to make it an assurance of future peace rather than a menace to civilization.
For his part, President Truman had been thoroughly briefed on the project by Stimson and General Leslie Groves, director of the Project, soon after he became President in late April. In late July, Truman recorded his reaction to the Trinity test in his diary:2
We have discovered the most terrible bomb in the history of the world. … Anyway we think we have found the way to cause a disintegration of the atom. An experiment in the New Mexico desert was startling – to put it mildly. Thirteen pounds of the explosive caused the complete disintegration of a steel tower 60 feet high, created a crater 6 feet deep and 1,200 feet in diameter, knocked over a steel tower 1/2 mile away and knocked men down 10,000 yards away. The explosion was visible for more than 200 miles and audible for 40 miles and more. … The target will be a purely military one and we will issue a warning statement asking the Japs to surrender and save lives. I’m sure they will not do that, but we will have given them the chance. It is certainly a good thing for the world that Hitler’s crowd or Stalin’s did not discover this atomic bomb. It seems to be the most terrible thing ever discovered, but it can be made the most useful…
I have no doubt that Stimson, Marshall and Truman were well aware of the revolutionary nature of the bomb and the possibility (indeed, likelihood) that a postwar nuclear arms race would ensue. Any notion that Truman was a disengaged observer carried along by the momentum of events is hard to believe in view of the above comments. These men were making decisions of grave responsibility and were fully briefed as to both the immediate situation of the war and possible long-term geopolitical consequences: the “mature consideration” that Franklin Roosevelt and Winston Churchill agreed in 1943 would have to be carried out before use of the bombs was authorized. Perhaps Truman did not so much make a positive decision to use the bombs as opt not to halt operations that were already moving along when he became President, but I have no doubt that he realized that atomic bombs would be a profoundly new type of weapon. Further, let us not forget that it was Truman who personally intervened after Nagasaki to order a halt to further atomic bombings when the Japanese began to signal a willingness to consider surrender negotiations.
As much as I am convinced that Truman took his duties with the greatest sense of responsibility, I cannot answer “yes” or “no” as to the necessity of the bombings: the question is always loaded with so many unstated perspectives. If the Japanese could not be convinced to surrender, then Truman, Stimson, and Marshall faced the prospect of committing hundreds of thousands of men to a horrific invasion followed by a likely even more horrific slog through the Japanese home islands. After 70 years it is easy to forget the context of the war in the summer of 1945. Historians know that the Japanese were seeking a path to honorable surrender and might have given up within a few weeks, but the very bloody fact on the ground was that they had not yet surrendered; thousands of Allied and Japanese servicemen were dying each week in the Pacific. Military historian Dennis Giangreco has studied Army and War Department manpower projections for the two-part invasion of Japan scheduled for late 1945 and the spring of 1946.3 Planning was based on having to sustain an average of 100,000 casualties per month from November 1945 through the fall of 1946. The invasion of Kyushu was scheduled to begin on November 1, 1945. Had this occurred, the number of casualties might well have exceeded the number of deaths at Hiroshima and Nagasaki, let alone those which would have occurred in the meantime. From the perspective of preventing casualties, perhaps it was unfortunate that the bombs were not ready at the time of the battle for Iwo Jima, one of the bloodiest protracted battles from February 19 to March 26, 1945, during which more than 25,000 were killed on both sides.
Even historians who believe that the Soviet Union’s declaration of war against Japan on the night of August 8, 1945, was the most significant factor in the Japanese decision to surrender generally allow that the bombs had at least some effect on that decision. The Soviet invasion came between the two atomic bombings of August 6 (Hiroshima) and August 9 (Nagasaki). The two bombings convinced the Japanese that Hiroshima was not a one-shot deal: America could manufacture atomic bombs in quantity. The impact of the bombings was alluded to by Emperor Hirohito in his message to his people on August 15, 1945, in which he stated that “ … the enemy has begun to employ a new and most cruel bomb,” which was one of the motivations for his government’s decision to accept the terms of the Potsdam Declaration. But there are certainly political aspects that muddy this story, namely justifying the immense resources poured into the Project and sending a message to the Soviets that, at least for a while, America was the ascendant postwar power in the world. I give a qualified “yes” to the question of necessity.
The necessity debate often overlooks a corollary issue which I have come to think of as “nuclear inoculation.” Had the bombs not been used in 1945 and world leaders made aware of their frightening power, what far more awful circumstances might have unfolded in a later war when there were more nuclear powers armed with more powerful weapons? I am absolutely convinced that the bombings have had a significant deterrent effect and that they may well have prevented the outbreak of further major wars since 1945. Indeed, we know that there were occasions such as the Cuban missile crisis when national leaders looked into the maw of a possible large-scale war and backed away.
The “inoculation” issue leads to the question of whether or not America continues to need a nuclear deterrent. To this I say: “Yes, but for not entirely rational reasons.” Even very conservative military planners estimate that a few hundred warheads would be enough for any conceivable nuclear-mission scenario and that the thousands still stockpiled are a waste of resources and budgets. But the deterrent issue seems to me to be more psychological than mission-driven. With potentially unstable or irrationally-led states pursuing weapons and possibly encouraging proliferation, what “established” nuclear power would consider unilaterally disarming itself? If America and Russia engage in further rounds of treaties and draw down their numbers of deployed and reserved weapons from thousands of warheads, a time may come when these numbers will get down to those held by powers such as Britain, France, China, India and Pakistan.4 How then will negotiations proceed? Even if rigorous inspection regimes are agreed to, it seems to me that it will take decades until we might get to a level of trust where we won’t feel compelled to rationalize: “They could be slipping a few weapons into their arsenal under the table; we had better keep some in reserve.” In the meantime, I encourage students and acquaintances to question their elected representatives regarding the Comprehensive Test Ban Treaty and a possible Fissile Materials Cutoff Treaty.
What about the ethics of the bombings? To my mind the answer is: “The war had rendered this issue irrelevant.” Even against the “standards” of present-day terrorist acts, the ferocity of World War II seems almost incomprehensible. Deliberate atrocities against civilians and prisoners by the Axis powers were beyond the ethical pale, but how does one classify the Allied fire-bombings of Coventry, Dresden, and Tokyo even if there were arguable military objectives? The vast majority of victims at Hiroshima and Nagasaki succumbed not to radiation poisoning but to blast and burn effects just like the victims of these other attacks. I do not see that the bombs crossed an ethical threshold that had not already been breached many times before.
What have I learned about the Manhattan Project that especially surprised me? Well, practically everything. I approached the Project as a physicist, and it was a revelation for me that much of the physics involved is entirely accessible to a good undergraduate student. Computing critical mass involves separating a spherical-coordinates differential equation and applying a boundary condition: advanced calculus. Estimating the energy released by an exploding bomb core is a nice example of using the Newtonian work-energy theorem of freshman-level physics in combination with some pressure/energy thermodynamics. Appreciating how a calutron separates isotopes is a beautiful example of using the Lorentz force law of sophomore-level electromagnetism. Estimating the chance that a bomb might detonate prematurely due to a spontaneous fission invokes basic probability theory. These are exotic circumstances which require wickedly difficult engineering to realize, but the physics is really quite fundamental.
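As a concrete illustration of how accessible this physics is, here is a minimal sketch of the bare-sphere critical-radius estimate from elementary one-group diffusion theory. This is not the Project's actual calculation; the cross-section values are rough, illustrative fast-spectrum numbers, and the crude zero-flux boundary condition deliberately ignores the extrapolation-length correction, so the answer overshoots the commonly quoted bare critical mass of roughly 45-50 kg:

```python
import math

# Illustrative one-group diffusion estimate of the bare-sphere critical
# radius of U-235 metal. Cross sections below are rough fast-spectrum
# values chosen for illustration, not authoritative data.

rho = 18.71           # density of U-235 metal, g/cm^3
A = 235.04            # atomic mass, g/mol
N_A = 6.022e23        # Avogadro's number, 1/mol
sigma_f = 1.235e-24   # fast fission cross section, cm^2 (illustrative)
sigma_el = 4.566e-24  # elastic scattering cross section, cm^2 (illustrative)
nu = 2.637            # mean neutrons released per fission

n = rho * N_A / A                    # nuclei per cm^3
Sigma_f = n * sigma_f                # macroscopic fission cross section, 1/cm
Sigma_t = n * (sigma_f + sigma_el)   # total macroscopic cross section, 1/cm
D = 1.0 / (3.0 * Sigma_t)            # diffusion coefficient, cm

# Criticality condition: geometric buckling (pi/R)^2 = material buckling
B2 = (nu - 1.0) * Sigma_f / D        # material buckling, 1/cm^2
R_c = math.pi / math.sqrt(B2)        # critical radius, cm (zero-flux boundary)

mass_kg = (4.0 / 3.0) * math.pi * R_c**3 * rho / 1000.0
print(f"critical radius ~ {R_c:.1f} cm, critical mass ~ {mass_kg:.0f} kg")
```

With these inputs the sketch gives a radius of about 11 cm and a mass of roughly 106 kg, about a factor of two above the accepted bare-sphere value; applying the extrapolated-boundary correction of transport theory closes most of that gap. The point stands: the core estimate is separable-differential-equation material, well within reach of an undergraduate.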
Everybody knows that the Manhattan Project was a big undertaking, but I now realize just how truly vast it was. At first, one’s attention is drawn to the outstanding personalities and dramatic events and locales associated with the Project: J. Robert Oppenheimer, Enrico Fermi, Groves, Los Alamos, Trinity, Tinian, Hiroshima and Nagasaki. Then comes an appreciation of the complexity of the production factories at Oak Ridge and Hanford, facilities designed by unappreciated and now largely forgotten engineers of outstanding talent. Hundreds of contractors and university and government laboratories were involved, staffed by hundreds of thousands of dedicated employees. Also, bombs are not transported by magic to their targets; bombers had to be modified to carry them, and training of crews to fly the missions was initiated well before the final designs of the bombs and choice of targets were settled. The magnitude of the feed materials program to source and process uranium ores is rarely mentioned, but without it there would never have been any bombs (or any later Cold War).
While physics, chemistry, and engineering were front-and-center, I have also come to appreciate that the organization and administration of the Project were equally important. This is a hard thing for an academic scientist to admit! The Project was incredibly well-administered, and there is a lesson here for current times. Yes, the Project had its share of oversight and consultative committees, but they were run by scientists, engineers, government officials and military officers of superb competence and selfless dedication to the national good. These people knew what they were doing and knew how to get things done through the bureaucratic channels involved. An existential threat is always good for getting attention focused on a problem, but somebody has to actually do something. Of course there were security leaks and some inefficiencies, but what else would you expect in an undertaking so large and novel?
Could a Manhattan-type project be done now? I do not doubt for a moment that American scientists, technicians, engineers, and workers still possess the education, brains, dedication, and creativity that characterized Manhattan. But I do not think that such success could be repeated. Rather, headlines and breathless breaking news reports would trumpet waste, inefficiency, disorganization, technically clueless managers, and publicity-seeking politicians. The result would likely be a flawed product which ran far over-budget and delivered late if at all, no matter how intense the motivation. Do the words “Yucca Mountain” require further elaboration?
General Groves’ official history of the Project, the Manhattan District History, can be downloaded from a Department of Energy website, and I encourage readers to look at it5. It is literally thousands of pages, and is simply overwhelming; I doubt that anybody has read it from end-to-end. Click on any page and you will find some gem of information. Beyond the MDH lie thousands of secondary sources: books, popular and technical articles, websites and videos. But I have not one iota of regret that I plunged in. The Project was vast: many aspects of it have yet to be mined, and there are lessons to be had for scientists, engineers, biographers, historians, administrators, sociologists, and policy experts alike.
My research on the Project has made me much more aware of the world nuclear situation. Belief in deterrence aside, I am astonished that there has not been an accidental or intentional aggressive nuclear detonation over the last seventy years. We now know that on many occasions we came very close and that we have been very lucky indeed. While I see the chance of a deliberate nuclear-power-against-nuclear-power exchange as remote, the prospect of a terrorist-sponsored nuclear event causes me no small amount of concern.
Nuclear energy is the quintessential double-edged sword, and those of us who have some understanding of the history, technicalities and current status of nuclear issues have a responsibility to share our knowledge with our fellow citizens in a thoughtful, responsible way. The stakes are no less existential now than they were seventy years ago.
Nuclear Power and Nanomaterials: Big Potential for Small Particles
Nuclear power plants are large, complex, and expensive facilities. They provide approximately 19 percent of the U.S. electric power supply,1 and in the process consume enormous quantities of water. However, a class of very small particles may be gearing up to lend a helping hand in making power plants more efficient and less costly to operate. This article will briefly introduce nanomaterials and discuss ways in which some of these particles may make nuclear power plants more efficient.
The race to synthesize, engineer, test, and apply new nanoscale materials for solving difficult problems in energy and defense is in full swing. The past twenty five years have ushered in an era of nanomaterials and nanoparticles – objects with at least one dimension between 1 and 100 nanometers2 – and researchers are now implementing these materials in areas as disparate as neuroscience and environmental remediation. To provide a sense of scale, most viruses are a few hundred nanometers in size, most bacteria are a few thousand nanometers in size, and a period at the end of a sentence is about a million nanometers. This new category of materials has ignited the imaginations of scientists and engineers who envision nanomaterials capable of tackling difficult problems in energy, healthcare, and electronics.
Nanomaterials are not new, and indeed occur naturally all over Earth. This includes viruses, the coatings of a lotus leaf, the bottom of a gecko’s foot, and some finely powdered clays. These objects represent natural materials with significant, and often highly functional, nanoscale features. Some researchers have even discovered signs of nanoscale materials in space.3 One of the oldest documented applications of nanomaterials is the Lycurgus Cup, a 4th-century Roman vessel made of glass containing gold and silver particles. The result is a glass that appears green when lit from the outside, but red when lit from the inside.4 The effect arises because the glass filters different wavelengths of light differently depending on the lighting conditions. Of course, the Romans did not know they were using nanoparticles in the process of making this glass.
But what makes nanoparticles interesting or unique? The answer to this question depends on the specific material and application, but a few themes persist. Because of their small size, the physical principles governing how particles behave and interact with their environments change. Some of these changes are due to how basic properties such as volume and surface area change as an object becomes smaller. As a sphere shrinks, the ratio of its surface area to its volume grows. This has far-reaching implications for how particles interact with light, heat, and other particles. Visionary researchers are now looking into ways in which these interesting properties may make nuclear power plants more efficient.
One important implication for our discussion is the flow of thermal energy. Consider transferring the thermal energy of your body from your hand to an ice cube. You are (hopefully!) warmer than the ice cube. If you place the ice cube on a chilled dish and touch it with one finger, the cube will melt, but fairly slowly. Placing your entire hand over the top half of the ice cube increases the melting rate, and enclosing the cube in your fist increases it further. This is thermal energy transfer via conduction. Conductive heat transfer from one object to another depends on the area over which the transfer takes place: a larger contact surface area leads to faster conduction. But how does this relate to nanoparticles? As a particle becomes very small, the ratio of its surface area to its volume increases very rapidly. Since heat must cross a particle’s surface to reach its interior, particles with large surface-area-to-volume ratios are able to change temperature very quickly. If you place a large quantity of small, cold particles in a warm body of water, the particles will heat quickly; if you compress the same total volume into one large particle, that large particle will warm slowly. Because the surface-area-to-volume ratio increases with decreasing size, the general trend is for smaller particles to transfer heat more effectively than larger ones.
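The scaling argument can be made concrete with a quick calculation. The particle sizes below are arbitrary round numbers chosen for illustration:

```python
# Surface-area-to-volume ratio of a sphere is 3/r, so shrinking the
# radius grows the ratio.  Here we compare one large sphere against the
# same total volume of material divided into many nanoparticles.
import math

def sphere_area(r):
    return 4 * math.pi * r**2

def sphere_volume(r):
    return (4 / 3) * math.pi * r**3

R_LARGE = 1e-3    # a 1 mm "large particle" (illustrative)
R_SMALL = 50e-9   # a 50 nm nanoparticle (illustrative)

# How many nanoparticles hold the same total volume as the large sphere?
n = sphere_volume(R_LARGE) / sphere_volume(R_SMALL)

# Total surface area of those nanoparticles vs. the single large sphere
area_ratio = n * sphere_area(R_SMALL) / sphere_area(R_LARGE)
print(f"{n:.1e} nanoparticles expose {area_ratio:,.0f}x more surface area")
```

Dividing one millimeter-scale sphere into 50-nanometer spheres multiplies the exposed surface, and hence the available heat-exchange area, by a factor equal to the ratio of the radii: twenty thousand in this sketch.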

So how does this relate to nuclear power plants? Nuclear power plants are water-intensive operations and rely on conductive heat transfer to convert nuclear energy to grid-ready electricity. The most common Western reactors are pressurized water reactors (PWRs), in which water is heated by pumping it through the reactor core, then pumping the hot water to a steam generator. This water flows through piping called the primary system and is kept in the liquid state by applying very high pressure through a device called a pressurizer. In the steam generator this primary system water transfers much of its heat to water in a secondary system. High-strength piping, which is a very effective heat conductor, keeps the water in the two systems from directly contacting each other. The secondary system’s water turns to steam when it absorbs the heat from the primary system. The steam is then directed via piping to drive a turbine, which turns an electric generator, thus completing the cycle of converting nuclear energy to readily usable electricity for the grid. After passing through the turbines, the steam is captured and condensed for recycling. This reclaimed water can then be sent back through the steam generator. However, a significant amount of the energy of this steam is lost to the atmosphere via a third system of cooling water that is used to condense the steam. Large amounts of water (in the form of water vapor) are released to the environment in this process. Think of the water vapor plume at the top of the iconic cooling towers seen in the cartoon TV show The Simpsons. (Not all nuclear power plants use these types of cooling towers, but all must emit heat to the environment through some means of cooling.)
A new class of nanomaterials called core-shell phase change nanoparticles may help in reducing the water loss. First, let’s parse the name of the nanoparticle. The core-shell nomenclature refers to the fact that the particle has a center made out of one material, and an outer skin made out of another material. The phase change component of the name refers to the fact that the particle center changes from a liquid to a solid under certain conditions. These particles may be mixed into the water used for transporting the thermal energy generated within the reactor. Once mixed into the reactor water, the particle cores melt as the water picks up thermal energy from the reactor. The melted material in the particle core is contained by a shell, which remains solid at reactor temperatures. Thus, as the water leaves the reactor it carries with it tiny particles containing bundles of liquid thermal energy wrapped in a solid core. The notion is that as these particles travel to the cooling tower, they solidify and dissipate their heat into the surrounding water, thus decreasing the amount of water needed to convert the thermal energy created by the reactor to steam for turning turbines. Additionally, since these particles do not vaporize, they are much more easily retained for recycling. The Electric Power Research Institute is currently working with scientists at Argonne National Laboratory to commercialize these particles and has suggested that this technology could decrease power plant water requirements by as much as 20 percent.5
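The advantage of carrying heat as latent heat of melting, rather than as sensible heat in the water alone, can be sketched with a back-of-the-envelope estimate. The article does not specify the core material, particle loading, or operating temperatures, so every number below is an illustrative assumption:

```python
# Energy carried per kilogram of a phase-change-particle slurry vs. plain
# water over a modest temperature swing.  Melting and freezing of the
# particle cores adds latent heat on top of the water's sensible heat.
# All values are illustrative assumptions, not taken from the article.
CP_WATER = 4.18e3     # J/(kg*K), specific heat of liquid water
LATENT = 2.0e5        # J/kg, assumed latent heat of the particle cores
MASS_FRACTION = 0.10  # assumed particle loading in the slurry
DELTA_T = 20.0        # K, assumed temperature swing across the loop

# Sensible heat in the water fraction (particle sensible heat neglected)
sensible = (1 - MASS_FRACTION) * CP_WATER * DELTA_T   # J per kg of slurry
latent = MASS_FRACTION * LATENT                       # J per kg of slurry

gain = (sensible + latent) / (CP_WATER * DELTA_T)
print(f"slurry carries {(sensible + latent) / 1e3:.1f} kJ/kg "
      f"vs {CP_WATER * DELTA_T / 1e3:.1f} kJ/kg for plain water "
      f"({(gain - 1) * 100:+.0f}%)")
```

Even with this crude sketch, a modest particle loading noticeably raises the energy each kilogram of coolant can ferry per pass, which is the mechanism behind the claimed reduction in water requirements.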
Another nanoparticle-based approach for increasing reactor efficiency seeks to tackle a different problem. Pressurized water reactors place the water in direct contact with the fuel rods of the nuclear reactor. However, bubbles that form on the surfaces of the fuel rods can significantly decrease efficiency by insulating the rods from the water. When this happens, heat transfer efficiency suffers. One lab at the Massachusetts Institute of Technology (MIT) has implemented alumina nanoparticles that coat the fuel rods and prevent the buildup of bubbles on the heating elements. Alumina, a compound of aluminum and oxygen, is stable and has a high melting temperature. Testing these particles in the MIT reactor, the group found that the alumina nanoparticles coated the fuel rods. The result was an increase in the efficiency of the reactor. The engineers explain the findings by suggesting that the alumina nanoparticles allow for quick removal of the bubbles forming on fuel rod surfaces, thus minimizing the insulating layer of bubbles and maximizing heat transfer efficiency.6 To validate this, the researchers heated identical thin steel wires a fraction of a millimeter in diameter. One wire was submerged in water, the other in a nanofluid containing alumina particles. The wires were heated to the point of boiling the surrounding fluid. After boiling, the wires were examined using a powerful electron microscope. The experimenters observed that the wire heated in the nanofluid was indeed coated with nanoparticles, while the other wire maintained its original smooth surface.
Most importantly, there are also potential safety applications of having nanofluids capable of quickly transporting large quantities of thermal energy. One proposal calls for the use of nanofluids in standby coolant stored in Emergency Core Cooling Systems (ECCS). The ECCS are independent, standby systems designed to safely shut down a reactor in the case of an accident or malfunction. One ECCS component is a set of pumps and backup coolant to be sprayed directly onto reactor rods. Such systems are critical in preventing a loss of coolant accident (LOCA) from spiraling out of control. Because ECCS have backup reservoirs of coolant, technologies that make this backup coolant more effective at removing heat from the reactor could improve the safety of reactors. Because nanofluids can increase the heat transfer efficacy of water by 50 percent or more, some researchers have suggested that they may also be useful in emergency scenarios.7
Steam generators at both nuclear and coal power plants account for approximately 3 percent of overall freshwater consumption in the United States. Generally speaking, nuclear power plants consume about 400 gallons of water per megawatt-hour (MWh). Their coal and natural gas counterparts consume approximately 300 and 100 gallons per MWh, respectively.8 Thus, nuclear power plants stand to gain considerably by becoming more water-efficient.
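These per-MWh figures can be cross-checked against daily plant-level consumption. The 1 GW capacity and round-the-clock full-power operation below are assumed round numbers, not figures from the article:

```python
# Daily water consumption of a hypothetical ~1 GW(e) nuclear plant
# running at full power, using the 400 gallons-per-MWh figure cited in
# the article.  The 1 GW capacity is an assumed round number.
CAPACITY_MW = 1000    # assumed net electrical capacity
GAL_PER_MWH = 400     # consumption figure cited in the article

mwh_per_day = CAPACITY_MW * 24
gallons_per_day = mwh_per_day * GAL_PER_MWH
print(f"{gallons_per_day / 1e6:.1f} million gallons per day")
```

At roughly ten million gallons per day, this estimate lands at the low end of the 10 to 17 million gallon range quoted in the next paragraph, so the two sets of figures hang together.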
However, there are many hurdles to tackle before nanoparticles can be safely and effectively used in operating power plants. Scaling up particle production to the large volumes necessary for implementation in a power plant is expensive and labor intensive. New synthesis infrastructures may be necessary for large-scale production of these tiny particles. Additionally, broad adoption of this technology will not occur until significant cost savings are demonstrated at a functioning plant. As a result, particles must be made available at a cost reasonable for adoption by power plant operators. A rough cost estimate can be made using commercially available alumina nanoparticles, as these particles have been tested extensively in the heat transfer literature. A typical nuclear power plant in the United States supplies enough electricity to power 740,000 homes. To do this, the plant requires between 13 and 23 gallons of water per home per day. Thus, water usage for the plant may range from 10 to 17 million gallons per day. Current vendors of aluminum oxide nanoparticles sell 1 kg of nanopowder for around $200. With an expectation that economies of scale would bring that price down to $100/kg and that the particles could be easily recovered and recycled, loading a nuclear power plant with a 0.1% volume fraction of alumina nanoparticles would cost about $14.7 to $25 million per power plant. This is a substantial initial investment. Naturally, if nanoparticles were to cost $10/kg, then particle outfitting costs of $1.5 to $2.5 million per nuclear plant could be achieved. If 100 percent recovery of the particles could be achieved, then this initial cost would be recovered over time by the expected 2 to 4 percent increase in plant efficiency.
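The outfitting estimate above can be reproduced directly. The only inputs added here are the gallon-to-liter conversion and a standard handbook density for alumina, which the article does not state:

```python
# Reproducing the article's outfitting estimate: a 0.1% volume fraction
# of alumina nanoparticles at $100/kg, in 10 to 17 million gallons of
# plant water.  The alumina density (~3.95 g/cm^3) is a standard
# handbook value supplied here, not taken from the article.
L_PER_GAL = 3.785
ALUMINA_DENSITY = 3950.0    # kg/m^3, handbook value
VOLUME_FRACTION = 0.001     # 0.1% of the water volume
PRICE_PER_KG = 100.0        # assumed economy-of-scale price

def outfitting_cost(million_gallons):
    water_m3 = million_gallons * 1e6 * L_PER_GAL / 1000.0
    particle_kg = water_m3 * VOLUME_FRACTION * ALUMINA_DENSITY
    return particle_kg * PRICE_PER_KG

low = outfitting_cost(10)
high = outfitting_cost(17)
print(f"outfitting cost: ${low / 1e6:.1f}M to ${high / 1e6:.1f}M per plant")
```

The computed range of roughly $15 million to $25 million matches the article's quoted $14.7 to $25 million, and scaling the price input down to $10/kg reproduces the $1.5 to $2.5 million figure as well.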
In addition to cost-benefit analysis, extensive testing must be performed to ensure long-term application of these particles does not threaten the operational safety of the plant. To accomplish this, smaller scale reactors (like those housed at research facilities and universities) may test these particles over the course of years to track the impacts of long-term use. Potential pitfalls include increased corrosion, system clogging, and nanoparticle leakage into wastewater. Corrosion engineers will be needed to validate the degree to which nanoparticles contribute to the overall aging of reactors in which they are used. Nanoparticle designers and hydrodynamicists will be needed to ensure that system clogging is manageable. Additionally, filtration experts and the Environmental Protection Agency will be needed to establish best practices for minimizing the amount of nanomaterial that exits the facility, as well as understanding and quantifying the environmental impacts of that emitted material. None of these potential roadblocks are trivial. However, while the challenges seem large, it is encouraging to see potential applications of nanotechnology in power plants.
The Making of the Manhattan Project Park
The making of the Manhattan Project National Historical Park took more than five times as long as the making of the atomic bomb itself (1942 to 1945). Fifteen years after the first efforts, begun in 1999, to preserve some of the Manhattan Project properties at Los Alamos, New Mexico, Congress enacted the Manhattan Project National Historical Park Act, signed by President Obama on December 19, 2014. The following provides the story of how the park was created and a preview of coming attractions.
Mandate for a Clean Sweep
After the end of the Cold War in 1989, Congress directed the Department of Energy (DOE) to clean up decades of contamination at its nuclear production facilities. At Los Alamos, the V Site (where the atomic bombs were assembled) was a cluster of garage-like wooden structures left over from the Manhattan Project, far from public view. The main property had high-bay doors to accommodate the “Gadget,” the world’s first atomic device tested at the Trinity Site on July 16, 1945. Along with dozens of other Manhattan Project properties, the Los Alamos National Laboratory (LANL) slated the V Site buildings for demolition.
LANL officials estimated that the costs just to stabilize the buildings would be $3 million. “Preservation would be a waste of taxpayers’ money1,” declared LANL’s Richard Berwick. When the State of New Mexico concurred in the demolition, the buildings were doomed.
Rescuing the V Site Properties
The legacy of the Manhattan Project was in the crosshairs. Were any of the original Manhattan Project properties at Los Alamos going to be saved? Working for the Department of Energy, I called the Advisory Council on Historic Preservation (ACHP) for advice. The Council agreed to add a day to its Santa Fe meeting that fall to visit the V Site.
On November 5, 1998, the Advisory Council members were astonished by the contrast between the simplicity of V Site properties and the complexity of what took place inside them. The group concluded that the V Site would not only qualify as a National Historic Landmark but as a World Heritage Site similar to the Acropolis in Athens or the ancient city of Petra in Jordan. Somewhat chastened, the Los Alamos National Laboratory agreed to take the cluster of V Site buildings off the demolition list. However, funds to restore them would have to come from elsewhere.
Save America’s Treasures
In 1998 Congress and First Lady Hillary Clinton decided to commemorate the millennium by awarding Save America’s Treasures grants to preserve historic federal properties in danger of being lost. In a competitive process run by the National Park Service, the Department of Energy (DOE) was awarded $700,000 to restore the V Site properties.
However, there was a catch-22: the grant had to be matched by non-federal funds, but federal employees cannot solicit funds and DOE has no foundation authorized to do so. Rather than have DOE forfeit the grant, I decided to leave a 25-year career with the federal government in January 2000 to raise the funds and segue to my next “real” job.

Restored V Site at Los Alamos
Gaining Traction
The fund-raising project quickly evolved into a much bigger effort. To galvanize public and political attention, in March 2001 I enlisted the Los Alamos Historical Society to collaborate on a weekend of events called “Remembering the Manhattan Project.” The centerpiece was the “Louis Slotin Sonata,” a new play by Paul Mullin about a Manhattan Project scientist who died in a criticality experiment at Los Alamos in early 1946. The play and a heated discussion afterwards were covered by the New York Times and other press, bringing the Manhattan Project to national attention.
In February 2002, I founded the Atomic Heritage Foundation (AHF), a nonprofit in Washington, DC dedicated to preserving and interpreting the Manhattan Project. Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb, helped open doors to Senators Jeff Bingaman (D-NM), and Pete Domenici (R-NM). To increase interest in preserving the Manhattan Project, in April 2002 we convened a symposium in Washington, DC that was covered by C-SPAN worldwide.
On September 30, 2003, Senators Bingaman, Maria Cantwell (D-WA), and Patty Murray (D-WA) introduced legislation to study the potential for including the Manhattan Project in the National Park System. On the same day, Congressman Doc Hastings (R-WA) introduced similar legislation in the House. Congress passed the study bill in the fall of 2004 and President George W. Bush signed it despite the administration’s opposition to any new parks.
For more than a decade, the Congressional delegations from New Mexico, Washington, and Tennessee were a very strong, bipartisan team. Their commitment to the park was critical at every juncture over the next decade, but especially in the final weeks of the Congress. The last major public lands omnibus legislation had passed in 2009; since then very few park bills had been enacted. The Senate had a long list of bills that it wanted to attach to the National Defense Authorization Act (NDAA) along with the Manhattan Project National Historical Park. However, efforts to create a small “package” of other bills failed in 2013. Finally, in December 2014, the House passed the legislation as part of the “must pass” 2015 NDAA.
Attaching a large public lands “package” was risky as there was strong opposition in the Senate to expanding public lands and creating new parks. With several close calls in the days before its passage, this time the strategy succeeded. Congress passed the NDAA with a robust “package” of six new national park units, nine park expansions and dozens of other public lands provisions. On December 19, 2014, the President signed the legislation into law.
The new Manhattan Project National Historical Park has units at Los Alamos, NM, Oak Ridge, TN, and Hanford, WA. During World War II, these “secret cities” were not on any map even though some 130,000 people lived in them.
The park will be officially established in late 2015 when the Departments of Energy and Interior enter into an agreement concerning their respective roles, public access and other issues.
Preview of the Park
The new park will focus on three major sites: Los Alamos, NM, where the first atomic bombs were designed; Oak Ridge, TN, where enormous facilities produced enriched uranium; and Hanford, WA, where plutonium was produced. There are over 40 properties that are officially designated as part of the park with provision for adding others later.
Los Alamos, NM
The new park includes 13 properties in the Los Alamos community, many of them originally built by the Los Alamos Ranch School in the 1920s. The government took over the school’s properties in 1943 for the Manhattan Project. The seven former Masters’ cottages became the homes of the top-echelon scientists and military leaders. Because these cottages were the only housing with bathtubs, the street became known as Bathtub Row.
The cottage where J. Robert Oppenheimer and his family lived could be the “jewel in the crown” of the visitors’ experience. Visitors are also welcome at the Guest House, now the Los Alamos Historical Society Museum, and the Fuller Lodge, a handsome ponderosa pine structure that was a social center for the Manhattan Project.

Oppenheimer House, Los Alamos
More than a dozen other properties are owned by the Los Alamos National Laboratory. Public access to these properties could be limited for the first few years to address security issues. The V Site buildings, saved from demolition in 1998 and restored in 2006, are humble garage-like structures where the “Gadget” was assembled. The “Gadget” was the initial plutonium-based bomb that was tested at the Trinity Site on July 16, 1945.
A companion facility to the V Site is the Gun Site, used to develop and test the “Little Boy” or uranium-based bomb. The gun-type design fired a small projectile of uranium into a larger mass to create an explosion. The Gun Site is undergoing reconstruction but will eventually have a concrete bunker, periscope tower, cannons and a firing range.
Oak Ridge, TN
The mission of the Clinton Engineer Works was to produce enriched uranium, one of the core ingredients of an atomic bomb. Mammoth plants at Y-12 and K-25 used different techniques to produce enriched uranium. While security is an issue now, visitors will eventually be able to tour the remaining “Calutron” building at Y-12. While the mile-long K-25 building was demolished last year, plans are to recreate a portion of it for visitors.
A third site at Oak Ridge is the X-10 Graphite Reactor, a pilot-scale reactor and prototype for the Hanford plutonium production reactors. Visitors will be able to see the former Guest House (later named the Alexander Inn) built to accommodate distinguished visitors such as General Leslie Groves, Enrico Fermi, and Ernest O. Lawrence. Recently restored as a residence for seniors, the lobby will have Manhattan Project photographs and other memorabilia.

X-10 Site, Oak Ridge
Hanford, WA
There are two iconic Manhattan Project properties at Hanford. The B Reactor, the world’s first full-scale plutonium production reactor, has been welcoming visitors for several years. There are many interpretive displays and models that the Atomic Heritage Foundation and the B Reactor Museum Association have developed. For example, there is an interactive model of the B Reactor and the dozens of support buildings that once surrounded it. There is also a cutaway model of the reactor core showing the lattice of uranium fuel rods, graphite blocks, control rods and other features.
The second property is the T Plant, a mammoth “Queen Mary” of the desert used to chemically separate plutonium from irradiated fuel rods. It was one of the first remotely controlled industrial operations. Prospects are that the public will be able to visit a portion of the plant over time.
In addition, four pre-World War II properties located along the Columbia River will be preserved: the Hanford high school, the White Bluffs bank, an agricultural warehouse owned by the Bruggemann family, and an irrigation pump house. Here visitors will hear the stories of the pioneering agricultural families as well as the Native Americans who lived, hunted, fished, and camped near the Columbia River.

B Reactor, Hanford
At each site, visitors will be able to experience where people lived—in tents, huts, trailers, barracks, and dormitories or for the lucky ones, houses. In the communities of Richland, WA and Oak Ridge, TN, hundreds of “Alphabet” houses built from the same blueprints have been home for families for over seven decades.
For the Atomic Heritage Foundation2, the creation of the Manhattan Project National Historical Park is the culmination of 15 years of effort. Like the Manhattan Project itself, creating a national historical park has been a great collaborative effort.
Perhaps the greatest source of inspiration has been the Manhattan Project veterans themselves. To Stephane Groueff, a Bulgarian journalist who wrote the first comprehensive account of the Manhattan Project,3 the participants illustrated “the American way of the time…problem solving, ingenuity, readiness for risk-taking, courage for unorthodox approaches, serendipity, and dogged determination4.” There are many lessons that we can learn from the Manhattan Project.
Please join us for a symposium to mark the 70th anniversary of the Manhattan Project on June 2 and 3, 2015 in Washington, DC. Also, please visit our “Voices of the Manhattan Project5” website, with hundreds of oral histories including those of principals such as General Leslie Groves and J. Robert Oppenheimer. Our “Ranger in Your Pocket6” website has a series of audio/visual tours of the Manhattan Project sites that visitors can access on their smartphones and tablets. Most of all, plan on visiting the Manhattan Project National Historical Park. Coming soon!
Public Interest Report: February 2015
Seeking China-U.S. Strategic Nuclear Stability
by Charles D. Ferguson
Can NFU be demonstrated? Some analysts have argued that China, in its practice of keeping warheads de-mated or unattached from the missile delivery systems, has in effect placed itself in a second strike posture. But the worry from the American side is that such a posture could change quickly and that as China modernizes its missile force from slow-firing liquid-fueled rockets to quick-firing solid-fueled rockets, it will be capable of shifting to a first-use policy if the security conditions dictate such a change.
Look to Texas Rather Than Nevada for a Site Selection Process on Nuclear Waste Disposal
by Daniel Sherman
Nevada’s persistent and successful efforts to thwart the Yucca Mountain project and the Nuclear Waste Policy Act of 1982 are likely to continue as they demonstrate the futility of a policy that forces disposal on an unwilling host state.
Reflections on the 70th Anniversary of the Manhattan Project: Questions and Answers
by B. Cameron Reed
Could a Manhattan-type project be done now? I do not doubt for a moment that American scientists, technicians, engineers, and workers still possess the education, brains, dedication, and creativity that characterized Manhattan. But I do not think that such success could be repeated.
Nuclear Power and Nanomaterials: Big Potential for Small Particles
by Lamar O. Mair
The race to synthesize, engineer, test, and apply new nanoscale materials for solving difficult problems in energy and defense is in full swing. The past twenty five years have ushered in an era of nanomaterials and nanoparticles – objects with at least one dimension between 1 and 100 nanometers.
The Making of the Manhattan Project Park
by Cynthia C. Kelly
The making of the Manhattan Project National Historical Park took more than five times as long as the making of the atomic bomb itself.
Public Interest Report: November 2014
Energy Policy and National Security: The Need for a Nonpartisan Plan
by Charles D. Ferguson
Once the global decline starts to take effect, price shocks could devastate the world’s economy. Moreover, as the world’s population is projected to increase from seven billion people today to about nine billion by mid-century, the demand for oil will also significantly increase given business as usual practices.
Is ISIL a Radioactive Threat?
by George M. Moore
Is there a real potential that ISIL could produce a “dirty bomb” and inflict radiation casualties and property damage in the United States, Europe, or any other state that might oppose ISIL as part of the recently formed U.S.-led coalition?
Thinking More Clearly About Nuclear Weapons: The Ukrainian Crisis’ Overlooked Nuclear Risk
by Martin Hellman
It is surprising and worrisome that almost none of the mainstream media’s coverage of the Ukrainian crisis has mentioned its nuclear risk. With the West feeling that Russia is solely to blame, and Russia having the mirror image perspective, neither side is likely to back down if one of their red lines is crossed.
A Looming Crisis of Confidence in Japan’s Nuclear Intentions
by Ryan Shaffer
Global nonproliferation principles undoubtedly remain a high priority for Japan. But it is likely that in the short term, the eyes of Japan’s leaders are focused more intently on bringing nuclear reactors back on line.
The CTBT: At the Intersection of Science and Politics
by Jenifer Mackby
Although the treaty deals with the highly technical and sensitive subject of nuclear test explosions, it has been considered in a political context since the negotiations. Countries possessing nuclear weapons do not want others to know about their facilities or capabilities, so verification provisions of the treaty were exceptionally difficult to negotiate.
Seismic Risk Management Solution for Nuclear Power Plants
by Justin Coleman and Piyush Sabharwall
Management of external hazards to expectable levels of risk is critical to maintaining nuclear facility and nuclear power plant safety.
Is ISIL a Radioactive Threat?
In the past several months, various news stories have raised the possibility that the Islamic State of Iraq and the Levant (ISIL, also commonly referred to as ISIS) could pose a radioactive threat. Headlines such as “Dirty bomb fears after ISIS rebels seize uranium stash,”1 “Stolen uranium compounds not only dirty bomb ingredients within ISIS’ grasp, say experts,”2 “Iraq rebels ‘seize nuclear materials,’”3 and “U.S. fears ISIL smuggling nuclear and radioactive materials: ISIL could take control of radioactive, radiological materials”4 have appeared in mainstream media publications and on various blog posts. Often these articles carry unrelated file photos with radioactive themes, apparently added to catch the eye of potential readers and/or raise their level of concern.
Is there a serious threat or are these headlines over-hyped? Is there a real potential that ISIL could produce a “dirty bomb” and inflict radiation casualties and property damage in the United States, Europe, or any other state that might oppose ISIL as part of the recently formed U.S.-led coalition? What are the confirmed facts? What are reasonable assumptions about the situation in ISIL-controlled areas and what is a realistic assessment of the level of possible threat?
As anyone who has followed recent news reports about the rapid disintegration of the Iraqi Army in western Iraq can appreciate, ISIL is now in control of sizable portions of Iraq and Syria. These ISIL-controlled areas include oilfields, hospitals, universities, and industrial facilities, all locations where various types of radioactive materials may have been used or may still be in use.
In July 2014, the International Atomic Energy Agency (IAEA) released a statement indicating that Iraq had notified the United Nations “that nuclear material has been seized from Mosul University.”5 The IAEA’s press release indicated that they believed that the material involved was “low-grade and would not present a significant safety, security or nuclear proliferation risk.” However, despite assessing the risk posed by the material as being low, the IAEA stated that “any loss of regulatory control over nuclear and other radioactive materials is a cause for concern.”6 The IAEA’s statement caused an initial flurry of press reports shortly after its release in July.
A second round of reports on the threat of ISIL using nuclear or radioactive material started in early September, triggered by the announcement of a U.S.-Iraq agreement on a Joint Action Plan to combat nuclear and radioactive smuggling.7 According to a Department of State (DOS) press release on the Joint Action Plan, the U.S. will provide Iraq with training and equipment via the Department of Energy’s Global Threat Reduction Initiative (GTRI) that will enhance Iraq’s capability to “locate, identify, characterize, and recover orphaned or disused radioactive sources in Iraq thereby reducing the risk of terrorists acquiring these dangerous materials.”8 Although State’s press release is not alarmist, it does state that the U.S. and Iraq share a conviction that nuclear smuggling and radiological terrorism are “critical and ongoing” threats and that the issues must be urgently addressed.9
While September’s headlines extrapolating State’s press release to U.S. “fears” might be characterized by some as over-hyping the issue, statements from both the IAEA and the State Department indicate that the situation in Iraq may be cause for concern. Did the IAEA and DOS go too far in their statements? In their defense, it would be highly irresponsible to suggest that any situation in which nuclear or other radioactive material might be in the hands of individuals or groups with a potential for criminal use is not a subject for concern. However, we need to go beyond such statements and determine what risks are posed by the materials reported as possibly being under ISIL control, in order to determine how concerned the public should be.
According to the IAEA’s press release, the material reported by Iraq was described as “nuclear material,” but this description does not imply that it is suitable for a yield-producing nuclear weapon. In fact, the IAEA’s description of the material as “low-grade” indicates that the IAEA believes that this material is not enriched to the point where it could be used to produce a nuclear explosion. Furthermore, although the agency has not provided a technical description of the nuclear material, it is highly unlikely that this is anything other than low enriched uranium, or perhaps even natural or depleted uranium, all of which would fit under the IAEA’s definition of “nuclear material.” If the material is not useful in a yield-producing device, is it a radioactive hazard? All forms of uranium are slightly radioactive, but the level of radioactivity is so low that these materials would not pose a serious radioactive threat, either to persons or property, if they were used in a radioactive dispersal device (RDD). Even a “dirty bomb,” which is an RDD dispersed by explosives, would not be of significant concern.
Other than the nuclear material mentioned in the report to the United Nations, there are no known open source reports of loss of control of other radioactive materials. However, a lack of specific reporting does not mean that control is still established over any materials that are in ISIL-controlled areas. It would be prudent to assume that all materials in these areas are out of control and accessible to ISIL, should it choose to use whatever radioactive materials can be found for criminal purposes. How do we know what materials may be at risk? Hopefully the Iraq Radioactive Sources Regulatory Authority (IRSRA) has (or had) a radioactive source registry in Iraq. If so, authorities should know in some detail what materials are in ISIL-controlled territories. The Syrian regulatory authority may have at one time had a similar registry that would indicate what may now be out of control in the ISIL-controlled areas of Syria. Unfortunately, there is no open source reporting on any of these materials, so we are left to speculate as to what might be involved and what the consequences might be should anyone attempt to use those materials criminally.
It is doubtful that any radioactive materials in ISIL-controlled areas are very large sources. The materials that would pose the greatest risk are probably those used for medical purposes. These sources are found in hospitals or clinics for cancer treatment or blood irradiation and typically use cesium 137 or cobalt 60, both of which are relatively long-lived (half-lives of approximately 30 and five years, respectively) and produce energetic gamma rays. It is also possible that radiography cameras containing iridium 192, and well-logging sources that typically use cesium 137 and an americium-beryllium neutron source, may also be in the ISIL-controlled areas. Any technical expert would opine that these sources are capable of causing death and that dispersal of these materials would create a cleanup problem and possibly significant economic loss. However, experts almost uniformly agree that such materials do not constitute Weapons of Mass Destruction, but are potential sources for disruption and for causing public fear and panic. Furthermore, the scenarios that pose the greatest risk to the United States or Europe from these materials would be difficult for ISIL to organize and carry out.
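The half-lives mentioned above matter because they determine how long an orphaned source remains dangerous. As a rough illustration (a minimal sketch, not part of the article's analysis, using the approximate half-life values given in the text), the remaining activity follows the standard decay relation A(t) = A0 · (1/2)^(t / t½):

```python
# Illustrative sketch: fraction of a source's original activity remaining
# after t years, from the decay law A(t) = A0 * (1/2)**(t / half_life).
# Half-life values are approximate, as stated in the text above.

def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of the initial activity left after t_years."""
    return 0.5 ** (t_years / half_life_years)

CS137_HALF_LIFE = 30.0  # years, approximate (cesium 137)
CO60_HALF_LIFE = 5.0    # years, approximate (cobalt 60)

# After a decade out of regulatory control, a cesium 137 source still
# retains most of its activity, while a cobalt 60 source has decayed
# to about a quarter of its original strength.
print(f"Cs-137 after 10 y: {remaining_fraction(10, CS137_HALF_LIFE):.2f}")
print(f"Co-60 after 10 y:  {remaining_fraction(10, CO60_HALF_LIFE):.2f}")
```

This is why long-lived cesium 137 sources tend to dominate concern about disused or orphaned medical sources: they stay hazardous on a timescale of decades rather than years.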
If ISIL were to attempt to use such materials in an RDD, they would need to transport the materials to the target area (for example, in the United States or Europe) in a manner that is undetectable and relatively safe for the person(s) transporting or accompanying the material. Although in some portions of a shipment cycle there would be no need to accompany the materials, at some point people would need to handle them. Even if the handlers had suicidal intent, shielding would be required in order to prevent detection of the energetic radiation that would be present for even a weak RDD. The shielding required for really dangerous amounts of these materials is typically both heavy and bulky, so the shielded materials cannot easily be transported by a person simply carrying them on their person or in their luggage. They would probably need to be shipped as cargo in or on some sort of vehicle (car, bus, train, ship, or plane). Surface methods of transport might reach Europe, but carriage by ship or air is necessary to reach the United States.10 Aircraft structures do not provide any inherent shielding, and so the most logical (albeit not the only) method of transportation to the United States or Europe would be by ship, probably from a Syrian port. Even though ISIL controls a significant land area, the logistics of shipping a highly radioactive item to the United States or Europe would be a complex process and would need to defeat significant post-9/11 detection systems. These systems, although perhaps not 100 percent effective for all types and amounts of radioactive material, are typically thought to be very effective in detecting high-level sources.
Any materials from ISIL-controlled areas could only be used in the United States or Europe with great difficulty. It is highly probable that the current radiation detection systems would be effective in deterring any such attempted use, even absent human intelligence that would compromise such an effort. Even if ISIL could use materials for an RDD attack, the actual damage potential of these types of attacks is relatively low compared to far simpler and often-used terrorist tactics such as suicide vests and truck and car bombs. The casualties resulting from any theoretical RDD would probably be less than those from a serious traffic accident, and that is probably at the high end of casualty estimates. Indeed, many experts feel that most, if not all, of the serious injuries from a “dirty bomb” would result from the explosive effects of the bomb, not from the dispersal of radioactive materials. The major consequence of even a fairly effective dispersal of material would be a cleanup problem, with the economic impact determined by the area contaminated and the level to which the area would need to be cleaned.
Efforts by the United States to work with the current government of Iraq to improve detection and control of nuclear and other radioactive materials appear to be a prudent way to minimize any threat from these materials in the ISIL-controlled areas. To date, ISIL has not made any threats to use radioactive material. That does not mean that ISIL is unaware of the potential, and we should be prepared for ISIL to use their surprisingly effective social media connections to attempt to make any future radioactive threat seem apocalyptic. Rational discussion of potential consequences and responses to an attack scenario should occur before an actual ISIL threat, rather than in the 24/7 news frenzy that would invariably follow one.
Dr. George Moore is currently a Scientist in Residence and Adjunct Faculty Member at the Monterey Institute of International Studies’ James Martin Center for Nonproliferation Studies, a Graduate School of Middlebury College, in Monterey, California. He teaches courses and workshops in nuclear trafficking, nuclear forensics, cyber security, drones and surveillance, and various other legal and technical topics. He also manages an International Safeguards Course sponsored by the National Nuclear Security Administration in cooperation with Lawrence Livermore National Laboratory.
From 2007-2012 Dr. Moore was a Senior Analyst in the IAEA’s Office of Nuclear Security where he was involved with, among other issues, the IAEA’s Illicit Trafficking Database System and the development of the Fundamentals of Nuclear Security publication, the top-level document in the Nuclear Security Series. Dr. Moore has over 40 years of computer programming experience in various programming languages and has managed large database and document systems. He completed IAEA training in cyber security at Brandenburg University and is the first instructor to use the IAEA’s new publication NS22 Cyber Security for Nuclear Security Professional as the basis for a course.
Dr. Moore is a former staff member of Lawrence Livermore National Laboratory where he had various assignments in areas relating to nuclear physics, nuclear effects, radiation detection and measurement, nuclear threat analysis, and emergency field operations. He is also a former licensed research reactor operator (TRIGA).
Energy Policy and National Security: The Need for a Nonpartisan Plan
As I write this president’s message, the U.S. election has just resulted in a resounding victory for the Republican Party, which will have control of both the Senate and House of Representatives when the new Congress convenes in January. While some may despair that these results portend an even more divided federal government with a Democratic president and a Republican Congress, I choose to view this event as an opportunity in disguise in regards to the important and urgent issue of U.S. energy policy.
President Barack Obama has staked a major part of his presidential legacy on combating climate change. He has felt stymied by the inability to convince Congress to pass comprehensive legislation to mandate substantial reductions in greenhouse gas emissions. Instead, his administration has leveraged the power of the Environmental Protection Agency (EPA) to craft rules that will, in effect, force the closure of many of the biggest emitters: coal power plants. These new rules will likely face challenges in courts and Congress. To withstand the legal challenge, EPA lawyers are working overtime to make the rules as ironclad as possible.
The Republicans who oppose the EPA rules will have difficulty in overturning the rules with legislation because they do not have the veto-proof supermajority of two-thirds of Congress. Rather, the incoming Senate majority leader Mitch McConnell (R-Kentucky) said before the election that he would try to block appropriations that would be needed to implement the new rules. But this is a risky move because it could result in a budget battle with the White House. The United States cannot afford another grinding halt to the federal budget.
Several environmental organizations have charged many Republican politicians with being climate change deniers. Huge amounts of money were funneled to the political races on both sides of the climate change divide. On the skeptical side, political action groups affiliated with the billionaire brothers Charles and David Koch received tens of millions of dollars; they have cast doubt on the scientific studies of climate change. And on the side of wanting to combat climate change, about $100 million was committed by NextGen Climate, a political action group backed substantially by billionaire Tom Steyer. Could this money have been better spent on investments in shoring up the crumbling U.S. energy infrastructure? Instead of demonizing each side and just focusing on climate change, can the nation try a different approach that can win support from a core group of Democrats and Republicans?
Both Democratic and Republican leaders believe that the United States must have strong national security. Could this form the basis of a bipartisan plan for better energy policy? But this raises another question that would have to be addressed first: What energy policy would strengthen national security? Some politicians, including several former presidents, have called for the United States to be energy independent. Due to the recent revolution in technologies to extract so-called unconventional oil and gas from shale and sand geological deposits, the United States is on the verge of becoming a major exporter of natural gas and has dramatically reduced its dependence on outside oil imports (except from the friendly Canadians, who are experiencing a bonanza in oil extracted from tar sands). However, these windfall developments do not mean that the United States is energy independent, even including the natural resources of all of North America.
Oil is a globally traded commodity, and natural gas (especially in the form of liquefied natural gas) is tending to become one as well. This implies that the United States cannot decouple its oil and gas production and consumption from other countries. For example, a disruption in the Strait of Hormuz, leading to the Persian Gulf, would affect about 40 percent of the globe’s oil deliveries because of shipments from Iran, Iraq, Kuwait, Qatar, Saudi Arabia, and the United Arab Emirates. Such a disruption might occur in an armed conflict with Iran, which has been at loggerheads with the United States over its nuclear program. Moreover, while the United States has not been importing significant amounts of oil from the Middle East recently, U.S. allies Japan and South Korea rely heavily on oil from that region. Thus, a major principle for U.S. national security is to work cooperatively with these allies to develop a plan to move away from overreliance on oil and gas from this region, and an even longer term plan to transition away from fossil fuels.
Actually, this long term plan is not really that far into the future. According to optimistic estimates (for example, from Cambridge Energy Research Associates), global oil production will not reach its peak until at least 2030, after which production will gradually decline over the following decades.1 (Pessimistic views, such as that of oil expert Colin Campbell, predict the peak occurring around 2012 to 2015.2 We thus may already be at the peak.) Once the global decline starts to take effect, price shocks could devastate the world’s economy. Moreover, as the world’s population is projected to increase from seven billion people today to about nine billion by mid-century, the demand for oil will also significantly increase given business-as-usual practices.
For the broader national security goal of a stable economy, it is imperative to develop a nonpartisan plan for transitioning away from the “addiction” to oil that President George W. Bush called attention to in his State of the Union Address in January 2006. While skepticism about the science of climate change may persist, this should not hold back the United States from working together with other nations to craft a comprehensive energy plan that saves money, creates more jobs, and overall strengthens international security.
FAS is developing a new project titled Sustainable Energy and International Security. FAS staff will be contacting experts in our network to form a diverse group with expertise in energy technologies, the social factors that affect energy use, military perspectives, economic assessments, and security alliances. I welcome readers’ advice and donations to start this project; please contact me at cferguson@fas.org. FAS relies on donors like you to help support our projects; I urge you to consider supporting this and other FAS projects.