What Are Acceptable Nuclear Risks?

When I read Eric Schlosser’s acclaimed 2013 book Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety, I found a tantalizing revelation on pages 170-171, where it asks, “What was the ‘acceptable’ probability of an accidental nuclear explosion?” and then describes a 1957 Sandia report, “Acceptable Premature Probabilities for Nuclear Weapons,” which dealt with that question.

Unable to find the report online, I contacted Schlosser, who was kind enough to share it with me. (We owe him a debt of gratitude for obtaining it through a laborious Freedom of Information Act request.) The full report, Schlosser’s FOIA request, and my analysis of the report are now freely accessible on my Stanford website. (The 1955 Army report, “Acceptable Military Risks from Accidental Detonation of Atomic Weapons,” on which the 1957 Sandia report builds, appears not to be available. If anyone knows of an existing copy, please post a comment.)

Using the same criterion as this report*, which, of course, is open to question, my analysis shows that nuclear terrorism would have to have a risk of at most 0.5% per year to be considered “acceptable.” In contrast, existing estimates are roughly 20 times higher.**

My analysis also shows that, using the report’s criterion,* the risk of a full-scale nuclear war would have to be on the order of 0.0005% per year, corresponding to a “time horizon” of 200,000 years. In contrast, my preliminary risk analysis of nuclear deterrence indicates that risk to be at least a factor of 100, and possibly a factor of 1,000, higher. Similarly, when I ask people how long they think we can go before nuclear deterrence fails and we destroy ourselves (assuming nothing changes, which hopefully it will), almost all see 10 years as too short and 1,000 years as too long, leaving 100 years as the only “order of magnitude” estimate left – an estimate that is 2,000 times riskier than the report’s criterion would allow.
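To make the arithmetic behind these comparisons explicit, here is a minimal sketch in Python. The annual-risk figures are the ones quoted above, and the conversion between a constant annual risk and a “time horizon” is the simple reciprocal relationship implied by the 0.0005%-per-year / 200,000-year pairing.

```python
# Sketch of the arithmetic behind the risk comparisons above.
# A constant annual risk p implies an expected "time horizon" of
# roughly 1/p years before the event occurs, and vice versa.

def time_horizon(annual_risk):
    """Expected years until failure for a constant annual risk."""
    return 1.0 / annual_risk

acceptable_war_risk = 0.000005            # 0.0005% per year (report's criterion)
print(time_horizon(acceptable_war_risk))  # 200000.0 years

# A 100-year "gut estimate" of how long deterrence can last implies
# a 1% annual risk -- 2,000 times the report's criterion:
implied_risk = 1.0 / 100
print(implied_risk / acceptable_war_risk)  # 2000.0
```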

In short, the risks of catastrophes involving nuclear weapons currently appear to be far above any acceptable level. Isn’t it time we started paying more attention to those risks, and taking steps to reduce them?

* The report required that the expected number of deaths due to an accidental nuclear detonation should be no greater than the number of American deaths each year due to natural disasters, such as hurricanes, floods, and earthquakes.

** In the Nuclear Tipping Point video documentary, Henry Kissinger says, “if nothing fundamental changes, then I would expect the use of nuclear weapons in some 10 year period is very possible” – equivalent to a risk of approximately 10% per year. Similarly, noted national security expert Dr. Richard Garwin testified to Congress that he estimated the risk to be in the range of 10-20 percent per year. A survey of national security experts by Senator Richard Lugar was also in the 10% per year range.


More on the Ukraine

With the Crimea voting today on whether to secede from the Ukraine, and early returns indicating strong support for secession, the following perspectives on the crisis are particularly relevant. As before, I am emphasizing unusual perspectives not because the mainstream view (“It’s Russia’s fault!”) doesn’t have some validity, but because it over-simplifies a complex issue. And, when dealing with a nation capable of destroying us in under an hour, it would be criminally negligent not to look at all the evidence before imposing sanctions or taking other dangerous steps.

In his blog, Russia: Other Points of View, Patrick Armstrong asks, “If, as seems to be generally expected, tomorrow’s [now today’s] referendum in Crimea produces a substantial majority in favour of union with the Russian Federation, what will Moscow’s reaction be?” It will be interesting to assess his answer a week from now, when time will tell if he was right:

I strongly expect that it will be……

Nothing.

There are several reasons why I think this. One is that Moscow is reluctant to break up states. I know that that assertion will bring howls of laughter from the Russophobes who imagine that Putin has geography dreams every night but reflect that Russia only recognised the independence of South Ossetia and Abkhazia after Georgia had actually attacked South Ossetia. The reason for recognition was to prevent other Georgian attacks. Behind that was the memory of the chaos caused in the Russian North Caucasus as an aftermath of Tbilisi’s attacks on South Ossetia and Abkhazia in the 1990s. Russia is a profoundly status quo country – largely because it fears change would lead to something worse – and will not move on such matters until it feels it has no other choice. We are not, I believe, quite at that point yet on Crimea let alone eastern Ukraine.

Moscow can afford to do nothing now because time is on its side. The more time passes, the more people in the West will learn who the new rulers of Kiev are.

To show “who the new rulers of Kiev are,” Armstrong then quotes from a Los Angeles Times article, which starts off:

It’s become popular to dismiss Russian President Vladimir Putin as paranoid and out of touch with reality. But his denunciation of “neofascist extremists” within the movement that toppled the old Ukrainian government, and in the ranks of the new one, is worth heeding. The empowerment of extreme Ukrainian nationalists is no less a menace to the country’s future than Putin’s maneuvers in Crimea. These are odious people with a repugnant ideology.

Read the rest of the article to learn more.

And a Reuters dispatch shows how the interim Ukrainian government is making it more likely that Crimea’s desire to secede and re-join Russia will be honored by Russia:

Ukrainian Prime Minister Arseny Yatseniuk vowed on Sunday to track down and bring to justice all those promoting separatism in its Russian-controlled region of Crimea “under the cover of Russian troops”.

“I want to say above all … to the Ukrainian people: Let there be no doubt, the Ukrainian state will find all those ringleaders of separatism and division who now, under the cover of Russian troops, are trying to destroy Ukrainian independence,” he told a cabinet meeting as the region voted in a referendum on becoming a part of Russia.

“We will find all of them – if it takes one year, two years – and bring them to justice and try them in Ukrainian and international courts. The ground will burn beneath their feet.”

Given that the Ukrainian opposition demanded amnesty for even the violent protesters in Kiev, how can the new government possibly expect the more peaceful Crimean opposition not to secede under such threats? It is also worth noting that this new government was installed by force in violation of an agreement worked out between Yanukovych and the political leaders of the Ukrainian opposition.

Reducing the Risk of Russian-American Standoff

Editor’s Note: Dr. Martin Hellman, Adjunct Fellow for Nuclear Risk, professor at Stanford, and an expert on crisis risk reduction, asks that FAS members and others who read this post consider contacting their elected representatives about the crisis in Ukraine. Dr. Hellman sent the following letter to President Obama and his Congressional representatives.

I am writing to encourage you to resist the push for sanctioning Russia over its actions in the Ukraine. While the situation in the Ukraine is deplorable and Russia has made its share of mistakes, it is not solely to blame.

Henry Kissinger recognized this: “We should seek reconciliation, not the domination of a faction. Russia and the West, and least of all the various factions in Ukraine, have not acted on this principle.”

So did Pres. Nixon’s Soviet Adviser, Dmitri Simes. When asked, “how do you assess the Obama administration’s performance so far?” he replied, “I think it has contributed to the crisis.”

So did Ronald Reagan’s former ambassador to Moscow, Jack Matlock: “I believe it has been a very big strategic mistake – by Russia, by the EU and most of all by the U.S. – to convert Ukrainian political and economic reform into an East-West struggle.”

It is also ancient wisdom: “Let he who is without sin cast the first stone.” If we are going to sanction Russian officials for their actions in Ukraine, what about Pres. Bush, VP Cheney, and others for their actions in Iraq?

Instead of sanctioning perceived evildoers, it would be much more effective to clean up our own act first. That also has the advantage that it would not push Russia to retaliate in some way, for example by selling anti-aircraft missiles to Iran or by stopping us from using Russian territory for our withdrawal from Afghanistan. Most importantly, it would reduce the risk of a Russian-American standoff which could lead to nuclear threats, or even nuclear use.

In closing

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” – Nobel prize-winning physicist Richard Feynman, in an appendix to the report on the loss of the space shuttle Challenger


The first post in this series was put up a little more than two years ago and I’ve written a hundred of them (a dozen more, counting Martin Hellman’s estimable contributions). And, for reasons both personal and professional, it’s time to draw this blog to a close. I have enjoyed writing it and I have enjoyed the thoughtful comments that so many of you have made – I hope that you’ve gotten as much out of it as I have. And, as the habit dies hard, I’d like to take one final opportunity to opine, if I may.

Although the topics covered have been primarily radiological and nuclear-related, I have at times delved into areas of geology, astronomy, the life sciences, and even into philosophy and ethics. But regardless of the topic I have tried to take the same approach to everything – a skeptical look at the science that underlies claims and stories. Anybody can use invective, rely on “gut” feelings, cast aspersions, and so forth – but if something rests on a foundation of science then it cannot be resolved without understanding that science. And any attempt to circumvent the science tends to be an attempt to circumvent the facts – to bolster an argument that might have little or no basis.

To me, skepticism is of paramount importance – but I need to make sure we’re all on the same page with what is meant by skepticism. First, being skeptical does not mean simply rejecting every claim or statement that’s made – this is simply being contrary, and contrarianism is actually fairly brainless. It doesn’t take much to say “you’re wrong” all the time, and it takes no thought at all to have this as your default response. Being skeptical also doesn’t mean steadfastly opposing a particular point of view, regardless of any information that might support that point of view. This approach is denialism and it also requires little thought or effort.  Skepticism is a bit more difficult a beast – it means questioning, probing, and ultimately deciding whether or not the weight of evidence supports the claim being made. And – very importantly – skepticism also means questioning claims that might support your preconceptions, lest we fall prey to confirmation bias. In fact, I remember looking at some plots of data with my master’s advisor – he commented that “they look plausible but they’re not what I’d expected; so they might just be right.” Skepticism takes work, but if the stakes (intellectual, scientific, technical, societal, or otherwise) are high enough then it’s effort that must be made.

Unfortunately, the reality of science is that what is true is often counter-intuitive, contrary to what we think we see, and different than what we would like to be the case. At one time in the past, for example, fossils were thought to be rocks that looked strangely like bones and shells, the Earth resided at the center of an infinite universe, time moved at the same rate for everybody everywhere, and mountains formed as the Earth slowly shrank due to cooling. That we now know differently is due to past scientists exercising their skepticism, their rationality, and choosing to look beyond what their obvious gut feelings were telling them.

The fact is that the world and the universe run according to the laws of science – astronomers have found fairly convincing evidence that the laws of physics are the same across the universe, while geologists and physicists have shown similar consistency over time. Not only that, but the scientific method has been developed, refined, and tested over centuries. To have all of the tools of science available to us and to simply disregard them in favor of an emotional gut feeling is something I just can’t understand. Gut feelings, instinct, and intuition have their place in some areas – fields that are more person-oriented – but they have only limited utility in science-based arguments. Let’s face it – whether we’re talking about radiation dose limits, global warming, nuclear energy, vaccines, or any of the myriad questions with which we are confronted – if we ignore the science then we cannot arrive at a good answer except by sheer chance. To that end, I’d like to draw your attention to a fascinating website, a checklist, and an associated paper.

These links deal with forecasting – along the lines of weather forecasting, but extended to a number of areas in which people make predictions about what might happen next – and they have relevance to many areas of science. Predictions can take the form of models (such as climate models), calculations of cancer risk from radiation, forecasts of the stock market, or predictions of terrorist activities. People – even trained scientists – are often not very good at assessing these sorts of questions; this is why we have developed the scientific method and why the scientific process can take years or decades to play out. But even then, scientists are frequently too willing to rely on their scientific intuition, to make predictions based on their experience rather than on a scientific process, to overlook (or exclude) information that doesn’t support their hypotheses, and to give excessive weight to studies that agree with them. The principles outlined in the website, checklist, and paper I’ve linked to can help all of us avoid the mistakes of thinking that can otherwise lead us astray.

The bottom line is that the universe runs according to science, and it doesn’t care what we would like to be true. All of our wishful thinking and outrage can’t change the laws of physics; and issues of fairness – even ethics and morality – don’t matter to the universe one whit. If we try to use these principles – regardless of how important they might be in unscientific matters – we will be led astray.

I would like to invite you to continue exercising your own skepticism, especially any time you read (or hear) a story that seems either too good to be true, or too bad to be true. Be on the lookout for pathological science and for arguments that play to the emotions rather than to the rational and the scientific. Being a skeptic doesn’t mean being a contrarian – it means that you ask someone to prove their case to you rather than just accepting it at face value. It also means trying – as much as possible – to remove your feelings from the picture; once you think you’ve figured out what’s going on you can decide how it makes you feel but you can’t use your emotions to solve a scientific problem.

So, as a parting thought, I would urge you to take the time to think carefully about all of the media stories that are (or ought to be) science-based. If claims seem to be incredible – either too good or too dire – ask yourself if they make sense. Take an hour to go through the Standards and Practices for Forecasting (linked to earlier in this post) to see whether or not the argument(s) presented have any legitimate scientific justification, or if they are simply the opinions of scientists, however dressed up they might be. Most importantly, as Ronald Reagan famously told Mikhail Gorbachev with regard to nuclear weapons limits, “trust but verify.”

Again, I’ve enjoyed writing ScienceWonk for the last two years. I very much appreciate the Federation of American Scientists for giving this blog a home, and I especially appreciate all of you who have taken the time to read it, to comment, and hopefully to think about what I’ve written. Many thanks for your attention – I hope you’ve gotten as much out of it as I have.


A foolish consistency

Consistency is good – there’s a sense of security in knowing that some things will generally remain constant over time. We can always count on gravity, for example, to hold us firmly to the ground; politicians are typically pandering and self-serving; I can count on radioactivity to consistently decay away; and so forth. Of course, not all consistency is good – as Emerson noted, “A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.” We can also count on the American public to consistently question whether or not evolution actually occurs; many of us know that our perfectionist boss will always insist on yet another round of reviews and edits before letting a document go out the door; we will always find people who are apparently proud of their lack of knowledge; and we can expect that a certain category of blogger will continue to see the end of the world on the near horizon. It is this latter category I’d like to talk about this time – particularly the batch that continues to insist that the reactor accident at the Fukushima Dai’ichi site is going to kill millions.

Before launching into this piece I’d like to point you to a wonderful counter-example of what I just said – a blog posting by oceanographer and University of Washington professor Kim Martini. I have been accused of being part of the pro-nuclear and/or pro-radiation lobby because of my long years of experience as a radiation safety professional – Dr. Martini told me that she became interested in this topic, researched it herself, and came to her conclusions independently of the nuclear energy and radiation safety professionals. In short, she is scientifically competent, intelligent, and has no reason to be biased either pro- or anti-nuclear.

The latest round of Fukushima silliness is the contention that Americans need to evacuate the West Coast because of an apparently imminent release from one or more of the affected reactors and/or the Reactor 4 spent fuel pool. There are also those who blame the Fukushima accident for massive starfish die-offs, for sick animals along the Alaskan coast, and more – all of which (according to the good Dr. Martini) are far from accurate. And anti-nuclear activist Helen Caldicott has gone as far as to state that the entire Northern Hemisphere might need to be evacuated if things get as bad as she fears and the Unit 4 spent fuel pool collapses. So let’s see what the facts are, what the science can tell us, and what the real story might be.

Can the melted reactors go critical?

There have been predictions that the ruined reactor cores will somehow achieve criticality, producing more fission products and spreading more contamination into the water. While this is not, strictly speaking, impossible, it is highly unlikely – sort of like saying that it is remotely possible that Bill Gates will leave me his fortune, but I’m still contributing to my 401(k) account. Achieving criticality (to a nuclear engineer or a reactor operator, “criticality” simply means that the reactor is operating at a constant power) requires reactor fuel that’s enriched to the right percentage of U-235, a critical mass of the uranium (enough to sustain a chain reaction), and a configuration (the critical geometry) that will permit fission to occur. Also important in most reactors is a moderator – a substance such as water that will slow neutrons down to the point where they can be absorbed and cause the U-235 atoms to fission. Reactors such as the ones destroyed at Fukushima require all of these components to achieve criticality – take away any one of them and there will be no fission chain reaction.

The ruined reactor cores meet some of these requirements – since they’d been operating at the time of the accident we know that they had a critical mass of sufficiently enriched uranium present. Surrounded by water (either seawater or groundwater), they are likely also immersed in a moderator. But absent a critical geometry the cores cannot sustain a fission chain reaction. So the question is whether or not these cores can, by chance, end up in a critical geometry. And the answer to this is that it is highly improbable.

Consider, for example, the engineering and design that goes into making a nuclear reactor core. Granted, much of this design goes into making the reactors as efficient and as cost-effective to operate as possible, but the fact is that we can’t just slap some uranium together in any configuration and expect it to operate at all, let alone in a sustained fashion. In addition, reactors keep their fuel in an array of fuel rods that are immersed in water – the water helps slow the neutrons down as they travel from one fuel element to the next. A solid lump of low-enriched uranium has no moderator to slow down these neutrons; the only moderated neutrons are those that escape into the surrounding water and bounce back into the uranium; the lumps in a widely dispersed field of uranium will be too far apart to sustain a chain reaction. Only a relatively compact mass of uranium that is riddled with holes and channels is likely to achieve criticality – the likelihood that a melted core falling to the bottom of the reactor vessel (or the floor of the containment) would come together in a configuration that could sustain criticality is vanishingly low.

How much radioactivity is there?

First, let’s start off with the amount of radioactivity that might be available to release into the ocean. It comes from the uranium fission that was taking place in the core until the reactors were shut down – the uranium itself is slightly radioactive, but each uranium atom that’s split produces two radioactive atoms (fission fragments). The materials of the reactor itself become radioactive when they’re bombarded with neutrons, but these metals are very corrosion-resistant and aren’t likely to dissolve into the seawater. And then there are transuranic elements such as plutonium and americium, formed in the reactor core when the non-fissioning U-238 captures neutrons. Some of these transuranics have long half-lives, but a long half-life means that a nuclide is only weakly radioactive – it takes about 15 grams of Pu-239 to hold as much radioactivity as a single gram of radium-226 (about 1 Ci or 37 GBq in a gram of Ra-226), and one gram of Cs-137 has about as much radioactivity as over a kilogram of Pu-239. So the majority of the radioactivity available to be released comes from the fission products, with activation and neutron-capture products contributing in a lesser fashion.
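The specific-activity arithmetic behind those comparisons is easy to reproduce. Here is a minimal sketch; the half-lives and the curie-to-becquerel conversion are standard values, and the printed ratios are approximate.

```python
import math

AVOGADRO = 6.022e23   # atoms per mole
CI = 3.7e10           # becquerels per curie
YEAR = 3.156e7        # seconds per year

def specific_activity(half_life_years, atomic_mass):
    """Activity of a pure radionuclide, in curies per gram."""
    decay_constant = math.log(2) / (half_life_years * YEAR)  # per second
    atoms_per_gram = AVOGADRO / atomic_mass
    return decay_constant * atoms_per_gram / CI

pu239 = specific_activity(24_110, 239)   # ~0.06 Ci/g
ra226 = specific_activity(1_600, 226)    # ~1 Ci/g
cs137 = specific_activity(30.17, 137)    # ~87 Ci/g

print(ra226 / pu239)   # ~16 -- about 15 g of Pu-239 per gram of Ra-226
print(cs137 / pu239)   # ~1400 -- 1 g of Cs-137 ~ 1.4 kg of Pu-239
```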

This part is basic physics and simply isn’t open to much interpretation – decades of careful measurements have shown us how many of which fission products are formed during sustained uranium fission. From there, the basic physics of radioactive decay can tell us what’s left after any period of decay. So if we assume the worst case – that somehow all of the fission products are going to leak into the ocean – the logical starting place is to figure out how much radioactivity is even present at this point in time.

In January 2012 the Department of Energy’s Pacific Northwest National Laboratory (PNNL) used a sophisticated computer program to calculate the fission product inventory of the #1 and #3 reactors at the Fukushima Dai’ichi site – they calculated that each reactor held about 6.2 million curies (about 230 billion mega-becquerels) of radioactivity 100 days after shut-down. The amount of radioactivity present today can be calculated (albeit not easily, due to the number of radionuclides present) – it reflects what there was nearly three years ago minus what has decayed away since the reactors shut down. After 1000 days (nearly 3 years) the amount of radioactivity is about 1% of what was present at shutdown (give or take a little) and about a tenth of what was present after 100 days. Put all of this together, account for what was present in the spent fuel pools (the reactor in Unit 4 was empty, but the spent fuel pool still contains decaying fuel rods), and it seems that the total amount of radioactivity present in all of the affected reactors and their spent fuel pools is in the vicinity of 20-30 million curies at this time.
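As a rough cross-check on those decay figures, the old Way-Wigner rule of thumb treats a mixed fission-product inventory as decaying roughly as t^-1.2. This is my own back-of-the-envelope approximation, not PNNL’s nuclide-by-nuclide calculation, but it lands in the same ballpark.

```python
# Way-Wigner approximation: mixed fission-product activity falls off
# roughly as t**-1.2. A rule of thumb only -- PNNL's calculation tracked
# individual radionuclides.

def activity_ratio(t_days, t0_days):
    """Approximate ratio of fission-product activity at t vs. t0."""
    return (t_days / t0_days) ** -1.2

inventory_100d = 6.2e6          # curies in one reactor at 100 days (PNNL figure)
ratio = activity_ratio(1000, 100)
print(ratio)                    # ~0.06 -- the same ballpark as "about a tenth"
print(inventory_100d * ratio)   # a few hundred thousand curies remaining
```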

By comparison, the National Academy of Sciences calculated in 1971 (in a report titled Radioactivity in the Marine Environment) that the Pacific Ocean holds over 200 billion curies of natural potassium (about 0.01% of all potassium is radioactive K-40), 19 billion curies of rubidium-87, 600 million curies of dissolved uranium, 80 million curies of carbon-14, and 10 million curies of tritium (both C-14 and H-3 are formed by cosmic ray interactions in the atmosphere).

How much radioactivity might be in the water?

A fair amount of radioactivity has already escaped from Units 1, 2, and 3 – many of the volatile and soluble radionuclides have been released to the environment. The radionuclides that remain are there precisely because they are either not very mobile in the environment or are locked inside the remaining fuel. Thus, it’s unlikely that a high fraction of this radioactivity will be released. But let’s assume for the sake of argument that 30 million curies of radioactivity are released into the Pacific Ocean to make their way to the West Coast – how much radioactivity will be in the water?

The Pacific Ocean has a volume of about 7×10²³ ml, or about 7×10²⁰ liters, and the North Pacific has about half that volume (it’s likely that not much water has crossed the equator in the last few years). If we ignore circulation from the Pacific into other oceans and across the equator the math is simple – 30 million curies dissolved into 3×10²⁰ liters comes out to about 10⁻¹³ curies per liter of water, or about 0.1 picocuries (pCi) per liter (1 curie is a million million pCi). Natural radioactivity (according to the National Academy of Sciences) from uranium and potassium in seawater is about 300 pCi/liter, so this is a small fraction of the natural radioactivity in the water. If we make a simplifying assumption that all of this dissolved radioactivity is Cs-137 (the worst case) then we can use dose conversion factors published by the US EPA in Federal Guidance Report #12 to calculate that spending an entire year immersed in this water would give you a radiation dose of much less than 1 mrem – a fraction of the dose you’d get from natural background radiation in a single day (natural radiation exposure from all sources – cosmic radiation, radon, internal radionuclides, and radioactivity in the rocks and soils – is slightly less than 1 mrem daily). This is as close as we can come to zero risk.
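The dilution arithmetic is worth seeing end to end. A minimal sketch, using the worst-case release and the ocean volume already quoted above:

```python
release_ci = 30e6             # worst case: 30 million curies released
north_pacific_liters = 3e20   # about half the Pacific's ~7e20 liters

conc_ci_per_liter = release_ci / north_pacific_liters   # ~1e-13 Ci/L
conc_pci_per_liter = conc_ci_per_liter * 1e12           # ~0.1 pCi/L

natural_pci_per_liter = 300   # natural U and K-40 in seawater (NAS figure)
print(conc_pci_per_liter)                           # ~0.1
print(conc_pci_per_liter / natural_pci_per_liter)   # ~0.0003 of natural activity
```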

This is the worst case – assuming that all of the radioactivity in all of the reactors and spent fuel pools dissolves into the sea. Any realistic case is going to be far lower. The bottom line is that, barring an unrealistic scenario that would concentrate all of the radioactivity into a narrow stream, there simply is too little radioactivity and too much water for there to be a high dose to anyone in the US. Or to put it another way – we don’t have to evacuate California, Alaska, or Hawaii; and Caldicott’s suggestion to evacuate the entire Northern Hemisphere is without any credible scientific basis. And this also makes it very clear that – barring some bizarre oceanographic conditions – radioactivity from Fukushima is incapable of causing any impact at all on the sea life around Hawaii or Alaska let alone along California.

Closing thoughts

There’s no doubt that enough radiation can be harmful, but the World Health Organization has concluded that Fukushima will not produce any widespread health effects in Japan (or anywhere else) – just as Chernobyl failed to do nearly three decades ago. And it seems that as more time goes by without the massive environmental and health effects they’ve predicted, the doom-sayers become increasingly strident, as though shouting ever-more dire predictions at increasing volume will somehow compensate for the fact that their predictions have come to naught.

In spite of all of the rhetoric, the facts remain the same as they were in March 2011 when this whole saga began – the tsunami and earthquake have killed over 20,000 people to date, while radiation has killed none and (according to the World Health Organization) is likely to kill none in coming years. The science is consistent on this point, as is the judgment of the world’s scientific community (those who specialize in radiation and its health effects). Sadly, the anti-nuclear movement also remains consistent in trying to use the tragedy of 2011 to stir up baseless fears. I’m not sure which of Emerson’s categories they would fall into, but I have to acknowledge their consistency, even when the facts continue to oppose them.


The Mexican radiation accident (Part II)

A highly respected colleague and friend of mine says he no longer refers to “lessons learned” but, rather, to “lessons recognized” because he has noticed that we don’t always learn our lessons. It’s not too early to recognize some lessons from the Mexican accident of the other week, but the fact that this accident happened at all suggests that we have failed to learn from past accidents. In this posting I’d like to go over some past radiation accidents (as opposed to nuclear accidents) and the lessons that we should have learned from them, as well as devoting a few paragraphs to the issue of radioactive materials security.

Goiania, Brazil, 1987

Goiania, Brazil, is a big city that had over a million inhabitants in 1987. Most large cities make extensive use of radioactivity and radiation in medicine, and Goiania was no exception. But things were a little lax in the 1980s, and when a cancer therapy clinic closed in 1987 the radioactive therapy source was simply abandoned instead of being transferred to a disposal facility. Thus, when scrap metal scavengers broke into the clinic they were able to walk out with a radiation therapy unit, including a high-activity (almost 1500 curies) Cs-137 source. Not knowing what they had found, the scavengers opened the irradiator head and the source itself. Impressed with the pretty blue talcum powder-like filling, they took it home with them to show to family and friends. When all was said and done, four people had died of radiation sickness and over a hundred were exposed to enough radiation or contamination to require medical attention.

As in Mexico, the thieves in Goiania were unaware of what they had stolen, and in both cases an underlying problem was a relative paucity of good security. We can also infer scanty regulatory controls in both cases, which permitted the Brazilian source to be abandoned and the Mexican source to be transported without being properly packaged or secured during shipping. Unlike the recent incident, the Goiania source was filled with easily dispersible Cs-137 as opposed to Co-60, which is typically found as a solid chunk of metal; this contributed to the widespread contamination in Goiania compared to the relatively “clean” Mexican incident. The health toll of the Mexican accident is not yet known, although it seems likely that whoever removed the sources from the irradiator head would have received enough radiation to cause severe radiation sickness or death.

New Delhi, India, 2010

In 2010 the University of Delhi became aware of a cobalt irradiator that had been in storage for over a quarter century. Cobalt-60 has a half-life of only 5.27 years; after 5 half-lives the amount of radioactivity had decayed to only about 3% of the original activity. But 3% of a large number can still be significant – when the university decided to simply sell the entire irradiator off as scrap metal there were still about 20 curies of activity remaining; enough to be deadly under the right circumstances.
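The decay arithmetic here is simple enough to sketch. Treating “over a quarter century” as roughly 26 years (my assumption), the remaining fraction and the implied original activity come straight out of the half-life formula:

```python
half_life = 5.27   # years, Co-60
elapsed = 26       # years in storage -- assumed for "over a quarter century"

remaining_fraction = 0.5 ** (elapsed / half_life)
print(remaining_fraction)        # ~0.03, i.e. about 3% of the original activity

# Working backward from the ~20 curies that remained:
print(20 / remaining_fraction)   # implies a source of roughly 600 Ci originally
```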

In this case, over 100 pieces of radioactive material were scattered through a number of scrap metal yards in the Delhi area, and other pieces ended up in the hands of eight workers at one of the yards. One worker received a dose of over 300 rem and died of radiation sickness; two other workers developed radiation sickness but eventually recovered. After being informed of the incident the Indian government scoured the scrap metal yards, recovering (they think) all of the radioactivity. Interestingly, though, a few years later some contaminated metal products made of Indian stainless steel showed up in the US (I wrote about this incident in two earlier postings to this blog). This suggests either that additional pieces remained at large or that another Indian Co-60 source was lost without being reported. Either way, this is another incident in which radioactive materials were disposed of improperly and without adequate checks (not to mention without proper radioactive materials security).


Lessons recognized

There are more. A source was lost in Mexico in 1984 that ended up melted with scrap metal – it was found when a load of contaminated metal was picked up in the US. In Bolivia an industrial radiographer was unable to retract a source into its shield and, instead of measuring radiation levels to confirm the location of the source, he simply bundled everything up and put it in the cargo area of a bus, exposing the passengers to (luckily) low doses of radiation. And other radiation incidents have happened on every continent over the last half-century and more. But there are some common threads woven through most of these incidents that are worth trying to tease out, to see if we can recognize the lessons.

One of these is that most of the lost sources were not properly secured. Had the Mexican source, for example, been properly guarded the truck might not have been stolen; had it been shipped in an appropriate container it could not have been opened by the thieves and there would have been no exposure. Similarly, the Goiania source was simply left behind in an abandoned building, making it easy pickings for the scrap metal scavengers. Proper attention to securing radioactive sources would have saved lives.

Another common theme is that many sources were being used by personnel who neglected to perform proper radiation surveys. This might not have made a difference in Mexico earlier this month, but a simple radiation survey would have shown the people at the University of Delhi that the cobalt in their irradiator had not yet decayed to stability – this would have saved at least one life and would have spared the remaining victims their radiation sickness. Radiation surveys would also have shown that sources had become unshielded in accidents that occurred in Iran, Bolivia, Turkey, and elsewhere. Part of the problem here is that many of those tasked with using or safeguarding these sources were not radiation safety professionals, who would have understood the risks posed by high-activity sources and would almost certainly have performed surveys that would most likely have averted these tragedies.

The final commonality among the incidents noted here, and others that have taken place, is the relative paucity of effective regulatory oversight. While a great many nations adhere (on paper) to standards developed by the International Atomic Energy Agency, they may lack the ability or the trained personnel to enforce their regulations. In fact, I have visited some nations in which radioactive materials users had never seen a government inspection, and even some in which the users were unaware that their nations had radiation regulations at all (in one case, I visited an industrial radiographer who was using an aged copy of our own American regulations, unaware that his nation had adopted IAEA standards). In spite of my own disagreements with regulators from time to time, these accidents and my own experiences have convinced me that regulatory oversight is essential, if only to keep licensees on their toes. The lack of such oversight makes it all too easy for minor errors to turn into something potentially (or actually) life-threatening.

One of the things I found in the Navy is that most accidents are the result of multiple failures, and that the process leading up to an accident can be interrupted at any of these steps. In the most recent accident, a proper shipping container, proper security procedures, and appropriate regulatory oversight were all lacking – attend to any one of these factors appropriately and the accident would not have occurred. In a safe system a single failure should not put lives or health at risk. At this point it’s too late to help the people who were presumably exposed in Mexico, and too late to help the others exposed in India, Brazil, Iran, and so many other nations. But one can hope that other nations in which potentially dangerous radioactive sources are in use (virtually every nation on Earth) will not only recognize these lessons, but will learn from them as well. We have over a century of experience in working with radiation and we know how to do so safely – how to manage the risks so that nobody need be harmed. It would be a shame if others in coming years were to be harmed by something that is relatively easily controlled, simply because the lessons of past mistakes were recognized – but not learned.

Final note: Because of the holidays, there will be no new posting here until the second week of 2014 as I’ll be out of town with family. But stay tuned because there’s a lot more to discuss – claims that Fukushima’s spent fuel poses a threat to the West Coast, concerns that an India/Pakistan nuclear exchange could launch a nuclear winter, killing up to 2 billion people, and more. For those of you who feel as though two weeks off is more than you can handle, there are a number of my early postings that you might not yet have read – feel free to peruse and post your comments on those if you feel it appropriate. And to everyone, whatever end-of-the-year holiday you prefer, I hope it’s a happy one for you and for those you care about.


The Mexican radiation accident (Part I)

Most news stories involving radiation are, to be blunt, overblown. Radiation can be dangerous, but the risk it actually poses is usually far lower than what the media stories would have us believe. So my first inclination when I hear about another story involving “deadly radiation” is to be skeptical. And then every now and again there’s the exception – a story about radiation that’s not overblown and an incident in which there is a very real risk; sometimes an incident in which lives are put at risk or even ended. Last week we had the latter sort of radiation story, and it’s worth a little discussion.

First, a short recap. A cancer therapy clinic in Tijuana, Mexico, was shipping a highly radioactive radiation therapy source to Mexico’s radioactive waste disposal facility near the center of the nation – at the time of the theft the source consisted of over 2500 curies of cobalt-60. Auto theft is common in Mexico – the truck driver claims he was sleeping in the truck at the side of the road when armed thieves ordered him out of the truck and stole it, source and all. There is every indication that the thieves were unaware of the source itself – that they were after the truck. And recent history bears this out, since there have been a number of similar thefts (albeit with lower-activity sources) in recent years. Anyhow, the thieves seem to have removed the source from the back of the truck; it was found at the side of the road several miles from where the abandoned truck was located. From here things get a little speculative – a Mexican official feels it likely that at least a few of the thieves were exposed to fatal doses of radiation, and a half-dozen people came forward to be tested for radiation sickness (the tests came back negative). At last report, the source was under guard by the Mexican military, with a perimeter about 500 meters (a little over a quarter mile) away. So with this as a backdrop, let’s take a look at the science behind all of this.

Dose and dose rates

First, let’s think about the radiation dose rates and doses – the most important question in any radiation injury situation is how much dose a person received.

Radiation dose is a measure of the amount of energy deposited in a receptor – in this case, the receptor would be the thieves, but it could just as easily be a radiation detector. Cobalt-60 emits two high-energy gamma rays; one curie of Co-60 gives off enough energy to expose a person to a dose rate of 1.14 R/hr at a distance of one meter (about arm’s length). So 2500 curies of activity will give a dose rate of 2850 R/hr a meter away. A radiation dose of 1000 rem is invariably fatal, so a person would receive a fatal dose of radiation in a little over 20 minutes. Without medical treatment a dose of 400 rem is fatal to half of those who receive it – a person would receive this dose in about eight minutes a meter away. And the roughly 100 rem it takes to cause radiation sickness would be accumulated in only 2-3 minutes (although symptoms might not manifest themselves for a few weeks). No two ways about it – this was a very dangerous source.

Radiation dose rate drops off with the inverse square of one’s distance from a source, so doubling your distance reduces the dose rate by a factor of four (and tripling your distance, by a factor of nine). This means that distance is your friend – take a long step away and a source that can be fatal in 20 minutes at arm’s length will take 80 minutes to have the same impact – still dangerous, but a little less immediately so. At a distance of 100 meters the dose rate will be almost 0.3 R/hr – about the same dose in one hour that most of us will receive in an entire year from natural sources. The perimeter was set up at a distance of 500 meters – the dose rate from an unshielded source there will be about 12 mR/hr – at least 500 times normal environmental radiation levels, but well within the realm of safety. I have some radiation detectors that will accurately measure radiation dose rates only slightly higher than natural background levels – to get to the point at which the stolen source would fail to show up on these more sensitive detectors, I’d have to be close to ten miles away. This doesn’t mean that the radiation is dangerous at these distances – just that it would be detectable.
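Here’s a minimal sketch of the dose-rate arithmetic in the two paragraphs above, using the 1.14 R/hr-per-curie-at-one-meter figure for Co-60 and straight inverse-square scaling (ignoring shielding, air attenuation, and scatter, so the far-field numbers are upper bounds):

```python
GAMMA_CONSTANT = 1.14   # R/hr per curie at 1 meter, Co-60 (figure used above)
ACTIVITY = 2500         # curies

def dose_rate(distance_m):
    """Unshielded dose rate in R/hr, assuming inverse-square falloff."""
    return GAMMA_CONSTANT * ACTIVITY / distance_m ** 2

print(dose_rate(1))     # ~2850 R/hr at arm's length
print(dose_rate(100))   # ~0.29 R/hr
print(dose_rate(500))   # ~0.011 R/hr, i.e. ~11-12 mR/hr at the perimeter

# Minutes at arm's length to accumulate a given dose (taking 1 R ~ 1 rem):
for dose_rem in (1000, 400, 100):
    print(dose_rem, round(dose_rem / dose_rate(1) * 60, 1))  # ~21, ~8.4, ~2.1
```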

Why Co-60?

Of course, a good question to ask is why there was cobalt-60 on the truck in the first place. And this gets a little more involved than one might think, going back over a century.

It didn’t take long for people to realize that radiation can burn the skin – within the first decade after its discovery there was anecdotal evidence of its ability to cause harm, which was confirmed by experiments. And it didn’t take much of a leap of imagination to figure out that, if radiation can burn healthy skin then it can also be used to burn out unwanted tissue – such as cancers. So doctors began experimenting, settling quickly on radium as a cancer therapy. Radium, though, has its own problems, including the fact that it decays to radioactive progeny nuclides – with the advent of the nuclear age scientists found they could produce a highly radioactive nuclide of cobalt that emitted high-energy gammas that were ideal for reaching even those cancers buried deep within the body. Other nuclides were also discovered – Cs-137 and Ir-192 are among them – but cobalt does a great job.

For over a half-century these artificial radionuclides ruled the roost in radiation oncology, joined by iodine (I-131) for treating cancers of the thyroid. But radionuclides have their own problems, chief among them being that they can never be turned off (so they always pose a risk) and that they require a costly radioactive materials license. As technology improved many of the more advanced nations began using linear accelerators to produce more finely tuned beams of radiation – today Co-60 is rarely used for cancer therapy in the US, Japan, or Western Europe. On the other hand, linear accelerators are expensive and they need a fairly high level of infrastructure to maintain the precise power requirements these touchy machines require. So we still find cobalt irradiators in much of the developing world.

Mexico (among other nations) is in the process of swapping out its irradiators for linear accelerators, including at the Tijuana cancer clinic where this source originated. But with a half-life of 5.27 years, it’s not practical to simply let the cobalt decay to stability, a process that could take two generations or longer. So at some point these obsolete sources must be shipped for disposal – that was (and apparently still is) the fate in store for the Tijuana source.

But wait – there’s more!

There’s more to this story than what I’ve gone into here, but space keeps me from getting into all the questions it raises. In particular, there have been a number of incidents over the last half-century or so in which radioactive sources such as this one have cost lives, contaminated consumer products, and contaminated scrap metal mills. Next week we’ll talk about some of these incidents as well as the risk posed by these sources should they go accidentally or deliberately astray. At the same time we’ll talk about radioactive materials security and what protective actions make sense.


That Fracking Radon

Although there continues to be a great deal of comment-worthy material about Fukushima (including the latest idiotic suggestion that a collapse of the spent fuel storage in Unit 4 might call for the evacuation of California), I’d like to take a bit of a break from the apparent never-ending story. Partly this is because I’d like to cover topics other than Fukushima (although the continuing scientifically ill-informed silliness does make it a fertile field), partly because I need more time to research some of the Fukushima stories, and partly because, with winter upon us, it seems a good time to look at where an increasing amount of our natural gas comes from and whether or not it brings with it any radiological hazards. In this case, the question is whether or not fracking (hydraulic fracturing) releases the torrents of radon that many claim it does. But first, a little background.

Where the gas and radon come from

Life first appeared on Earth about four billion years ago, and as soon as it appeared, organisms began to die, drifting to the bottom of whatever sea they lived in, where they were covered with silt and clay. Over time these accumulated to form sizeable deposits; over a longer time they became deeply buried. The Earth’s geothermal gradient is about 25 °C per km of depth (away from the margins of tectonic plates and away from hot spots); bury something deeply enough and it begins to cook. Heat up organic material to about 100 °C (equivalent to burial to 4 km), subject it to high pressures, and cook for a few tens of millions of years, and the rocks start to fill with natural gas. Petroleum forms at lower temperatures; heat the rock too high and the hydrocarbons are cooked away altogether. Also contained in the rock are huge quantities of brine; water from the ancestral sea in which the original organisms grew and died.
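The burial arithmetic is a one-liner. A minimal sketch, assuming an average surface temperature of about 15 °C (my assumption; the ~4 km figure in the text also folds in pressure and time):

```python
SURFACE_TEMP = 15.0   # °C -- assumed average surface temperature
GRADIENT = 25.0       # °C per km of depth, away from plate margins and hot spots

def burial_depth_km(target_temp_c):
    """Depth needed to reach a given temperature under a constant gradient."""
    return (target_temp_c - SURFACE_TEMP) / GRADIENT

print(burial_depth_km(100))   # ~3.4 km -- consistent with the ~4 km cited above
```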

So that’s where the gas comes from; the next part of the question is how the radioactivity gets into the gas. And this part is pretty interesting.

Uranium chemistry is about as complex as that of any natural element – one aspect is that uranium, while soluble in oxygen-saturated water, is insoluble in waters that lack oxygen. During the first few billion years of Earth’s history the atmosphere was largely anoxic and uranium was fairly immobile in the environment; after that time oxygen began to build up in the atmosphere and to dissolve into the seawater. At about this time uranium began to mobilize and move through the environment. And when it entered regions that contained the decaying remains of the early organisms it precipitated out of solution. With time the uranium decayed, forming radioactive progeny which, themselves, decayed – after over a dozen such steps the uranium finally turned into stable lead. But it’s the intermediate steps that are important, because they include radium and radon – over the eons, the natural gas deposit accumulated radioactivity, and if we fast-forward to the present we find that virtually every natural gas deposit on the planet (oil and coal as well) contains radioactivity. Recovering natural gas not only liberates the gaseous radon contained in the deposit, but radium and other radionuclides are also dissolved in the concentrated brine – they precipitate out of solution and contaminate the scale that lines natural gas pipelines, and they settle out as sludge in the holding tanks. And this is important to remember – every natural gas deposit contains this radioactivity, not only the gas recovered by fracking.

Radon in the Marcellus Shale

Getting gas out of a formation is not as easy as just drilling a hole and letting it flow – if the rock is porous then this will happen, but many rocks just aren’t all that porous, and shale is a particularly “tight” rock. But a huge percentage of natural gas formed in rocks that derived from the mud and silt that covered the ancient organisms – sediments that formed fairly impermeable shale. To get appreciable amounts of gas from these tight deposits we have to find a way to break them up – by forcing fluid in under high pressures and by forcing sand into the formation as well to prop open the cracks formed by the high-pressure fluids. This particular posting is not the place to discuss all of the issues of this controversial topic – all that I’ll tackle is the question of radon.

Among the concerns raised by drilling into shale for natural gas recovery is that radon will enter the natural gas. As I mentioned above, there’s radon in all natural gas, so the question isn’t so much whether or not there’s radon in the gas as whether there is more radon in gas that originates in shale formations than in other natural gas and, if so, whether or not this poses a health risk. In January 2012 a report authored by anti-nuclear activist Marvin Resnikoff suggested that using natural gas from the Marcellus Shale (a rock formation that extends through much of New York and Pennsylvania) would release enough radon to cause tens of thousands of deaths annually. Resnikoff’s conclusions were refuted by a July 2012 report written by Lynn Anspaugh, a radiation scientist who has served on a large number of highly respected national and international radiation advisory bodies (a complete list is included in his resume, which is appended to the report linked to above).

The crux of Resnikoff’s argument is his claim that natural gas from the Marcellus Shale is extraordinarily rich in radon, that this radon will still be in the gas when it reaches homes in New York City, and that this extra radiation exposure places New Yorkers at risk. Resnikoff calculated that there could be as many as 30,000 additional annual cancer deaths from this radiation exposure. But, having read Resnikoff’s report, I have to say that I don’t place much credence in his conclusions. Here’s why.

Resnikoff makes three crucial errors in his report:

  1. He failed to actually measure radon concentrations in the natural gas at any point from the wellhead to the customer’s home. Instead he relied on a series of calculations based on shaky information found in preliminary studies performed a number of years ago.
  2. He vastly overestimated the amount of radon in the Marcellus Shale natural gas in his report, compared to actual radon concentrations that have been measured.
  3. He overestimated the risk from exposure to low levels of radon, ignoring the advice of the EPA and of both national and international radiation advisory bodies.

Anspaugh points out that, in addition to these mistakes, Resnikoff’s calculations are based on a series of parameters for which he provided no basis – Resnikoff provides no reference for any of the values he uses, and neither does he account for the inevitable variability and uncertainty in those values. This is contrary to normal scientific methodology. And, as Anspaugh notes, Resnikoff also failed to make a single radon measurement that could have either supported or refuted his argument – he never measured the actual radon concentrations in either the natural gas supply or in the homes he was concerned about.

When radiation dose calculations are based on actual radon concentrations, it turns out that the added radiation dose is trivial – on the order of a few tens of microsieverts (a few millirem) annually. It’s only when these trivial doses are multiplied by millions of people and extended over a lifetime that they seem to become significant. But this logic is flawed – ten million people exposed to 10 µSv annually (we are typically exposed to about 3000-4000 µSv annually from natural radiation) are no more likely to develop cancer than would ten million people who each have a 1-gram rock thrown at them. True – the cumulative dose might be 100 Sv (or 10 tons of rock), which sounds like enough to cause harm. But what we’re interested in is the dose to the individual. Throw a small pebble at each of ten million people and you’ll have a bunch of irritated folks, but not a single crushing death in spite of the cumulative “dose.” Similarly, a dose of 10 µSv is a trivial dose of radiation no matter how many people receive it. According to the International Commission on Radiological Protection, “Collective effective dose is not intended as a tool for epidemiological risk assessment, and it is inappropriate to use it in risk projections. The aggregation of very low individual doses over extended time periods is inappropriate, and in particular, the calculation of the number of cancer deaths based on collective effective doses from trivial individual doses should be avoided.” Resnikoff either ignored or was unaware of this guidance.
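The pebble analogy is easy to make numerically explicit. A minimal sketch, using a linear no-threshold slope of about 5% fatal cancer risk per sievert (an assumption here, though it is the conventional value):

```python
PEOPLE = 10_000_000
INDIVIDUAL_DOSE_SV = 10e-6    # 10 microsieverts each

collective_dose = PEOPLE * INDIVIDUAL_DOSE_SV
print(collective_dose)        # 100 person-Sv in aggregate -- sounds alarming

# But the risk to any one person, even under the conservative linear
# no-threshold model (~5% fatal cancer risk per sievert), is negligible:
LNT_SLOPE = 0.05              # per sievert
print(INDIVIDUAL_DOSE_SV * LNT_SLOPE)   # 5e-07 -- one chance in two million
```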

There are plenty of concerns about the use of hydraulic fracturing to extract natural gas from shale formations, just as there are plenty of reasons why this technique was developed and is being used. But the risk of radiation exposure to the users of this natural gas is a specious argument that tends to obscure, rather than to illuminate, this question.


Once more into the breach

I’d been planning on waiting a little longer before returning to the topics of Fukushima and radiation health effects, but a particularly egregious New York Times op-ed piece deserves some attention. So once more into the breach.

Writing in the October 30 New York Times, pediatrician and anti-nuclear activist Helen Caldicott used the nuclear reactor accident in Fukushima as an opportunity to express her concerns about nuclear energy – a calling she has followed since the Three Mile Island reactor accident. Unfortunately, Caldicott included a number of errors in her editorial that are sufficiently serious as to invalidate her conclusions. I’d like to take an opportunity to take a look at these mistakes and to explain the science behind them.

In the first paragraph of her article, Caldicott states that “the mass of scientific and medical literature…amply demonstrates that ionizing radiation is a potent carcinogen and that no dose is low enough not to induce cancer.”

To the contrary, even the most conservative hypothesis (linear no-threshold, or LNT) holds that low doses of radiation pose very little threat of cancer. Using a slope factor of 5% added risk of cancer fatality per 1 Sv (100 rem) of exposure, the risk of developing cancer from 1 rem of radiation is about 0.05% (5 chances in 10,000). This is far lower than the risk of developing cancer from habitual smoking, from working with a number of solvents (e.g., benzene) or laboratory chemicals, and so forth. Epidemiologists have noted no increase in cancer rates among people living in areas with high levels of natural background radiation, or among the lowest-dose groups of atomic bomb survivors (in fact, people living in the states with the highest levels of natural radiation have lower cancer rates than those in the lowest-dose states). Not only that, but age-adjusted cancer rates have dropped steadily (with the exception of smoking-related cancers) over the last century, in spite of dramatic increases in medical radiation exposure. In the words of respected radiation biologist Antone Brooks, these observations show us that "if (low levels of) radiation cause cancer it's not a heavy hitter." The bottom line is that, even if the lowest doses of radiation can cause cancer (a proposition that has not yet been shown to be either correct or incorrect), radiation is a weak carcinogen – not the "potent carcinogen" Caldicott would have us believe.
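For readers who want to check the slope-factor arithmetic, here is a quick back-of-the-envelope sketch in Python. It simply encodes the LNT slope factor quoted above – a sanity check of the numbers, not a claim that low-dose risk has actually been demonstrated:

```python
# LNT slope-factor arithmetic from the paragraph above.
SLOPE_FACTOR_PER_SV = 0.05   # 5% added lifetime cancer-fatality risk per sievert

def lnt_added_risk(dose_rem: float) -> float:
    """Added lifetime cancer-fatality risk for a dose in rem (1 Sv = 100 rem)."""
    return SLOPE_FACTOR_PER_SV * (dose_rem / 100.0)

# 1 rem works out to 0.05% added risk - 5 chances in 10,000.
print(f"1 rem -> {lnt_added_risk(1):.4%} added lifetime risk")
```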

In the second paragraph of her article, Caldicott states that “Large areas of the world are becoming contaminated by long-lived nuclear elements secondary to catastrophic meltdowns: 40% of Europe from Chernobyl, and much of Japan.”

This statement is difficult to parse because it is so nebulous. If, by "contaminated," Caldicott means that radionuclides are present that would not otherwise be there, she is wrong about the extent – traces of artificial radionuclides can be found across virtually every square mile of Europe, Asia, and North America, not just the 40% she claims. But all this means is that we can detect trace levels of these nuclides in the soil – in the same way, we can find traces from the atmospheric nuclear weapons testing of the 1940s through the 1960s. For that matter, we can find lead contamination over virtually the entire world from the days of leaded gasoline, and lead contamination goes back much further – scientists have found traces of lead in Greenland glaciers dating to the Roman Empire. But nobody is getting lead poisoning from the ancient Romans' pollution, just as nobody is getting radiation sickness (or cancer) from the minute traces of Cs-137 and Sr-90 that can be found across the Northern Hemisphere. Caldicott can't really acknowledge that artificial nuclides have been detectable worldwide for nearly 70 years, because doing so would shatter her claim that radioactive contamination from Fukushima and Chernobyl is causing death and destruction in Europe and Japan.

In the third paragraph, Caldicott states that “A New York Academy of Science report from 2009 titled ‘Chernobyl’ estimates that nearly a million have already died from this catastrophe. In Japan, 10 million people reside in highly contaminated locations.”

Caldicott is correct that the NYAS report claimed over a million deaths from Chernobyl. However, the report was heavily criticized as scientifically implausible – the NYAS is a respected organization, but in this case its conclusions are at odds with what the World Health Organization found on the ground. Specifically, the WHO concluded that in the first 20 years fewer than 100 people could be shown to have died from radiation sickness and radiation-induced cancers, and that, even using the worst-case LNT model, fewer than 10,000 would eventually succumb to radiation-induced cancer as a result of the accident. That is not a trivial number – but it is less than 1% of the one million deaths the NYAS claims. And the actual number is likely to be far lower, as physician Michael Repacholi noted in an interview with the BBC. Even the WHO's International Agency for Research on Cancer acknowledges that "Tobacco smoking will cause several thousand times more cancer in the same population." Even if contamination from Chernobyl and Fukushima is sufficient to cause eventual health problems, we can do far more good for the public by devoting attention to smoking cessation (or, for that matter, to childhood vaccinations) than by spending hundreds of billions of dollars cleaning up contamination that doesn't seem to be causing any harm.

In the fourth paragraph of her piece, Caldicott notes that “Children are 10 to 20 times more radiosensitive than adults, and fetuses thousands of times more so; women are more sensitive than men.”

To the contrary – the National Academies published a sweeping 2006 report summarizing the state of the world's knowledge on the "Health Risks from Exposure to Low Levels of Ionizing Radiation," in which they conclude that children are two to three times as sensitive to radiation as adults – more sensitive than adults, but a far cry from Caldicott's claim.

The reproductive effects of radiation are also well-known – fetal radiation exposures of less than 5 rem are incapable of causing birth defects according to our best science, and the Centers for Disease Control flatly states that exposure to even higher radiation doses is not a cause for alarm under most circumstances. This conclusion, by the way, is based on studies of hundreds of thousands of women who were exposed to radiation from medical procedures as well as during the atomic bombings in Japan – it is based on a tremendous amount of hard evidence.

This claim of Caldicott’s, by the way, is particularly egregious and has the potential to do vast harm if it’s taken seriously. Consider – in the aftermath of the Chernobyl accident it is estimated that over 100,000 women had abortions unnecessarily because they received poor medical advice from physicians who, like Caldicott, simply didn’t understand the science behind fetal radiation exposure. There are estimates that as many as a quarter million such abortions took place in the Soviet Union, although these numbers can’t be confirmed.

But even in this country we see this sort of misinformation causing problems today. During my stint as a radiation safety officer I was asked to calculate nearly 100 fetal radiation dose estimates – primarily for pregnant women who had received x-rays following serious traffic accidents – and many of those women were seriously considering therapeutic abortions on the advice of their physicians. When I performed the dose calculations, not a single woman's baby had received enough radiation to cause problems. And it doesn't stop there – we also had parents who refused CT scans for their children, preferring exploratory surgery and its attendant risks to the perceived risks of x-ray procedures. The bottom line is that this sort of thinking – that children and developing babies are exquisitely sensitive to radiation – can push parents toward needless abortions and place children at risk. By espousing these views, Caldicott is transgressing the Hippocratic oath she took to "first do no harm," and she should be taken to task for doing so.

Finally, in the last paragraph of her tirade, Caldicott claims that “Radiation of the reproductive organs induces genetic mutations in the sperm and eggs, increasing the incidence of genetic diseases like diabetes, cystic fibrosis, hemochromatosis, and thousands of others over future generations. Recessive mutations take up to 20 generations to be expressed.”

All that I can say to this is that Caldicott decided to go out with a bang. The fact is that there is not a single case in the medical or scientific literature in which birth defects or genetic disease has been linked to pre-conception radiation exposure. This is not my conclusion – it's the conclusion of Dr. Robert Brent, who knows more about this topic than anyone else in the world. Eggs and sperm might be damaged, but Dr. Brent notes that there is a "biological filter" that prevents damaged cells from going on to form a baby. Another line of reasoning supports Brent's claim – areas with high levels of natural radiation show no increase in birth defects compared to areas with lower levels. Caldicott's claim that low levels of radiation exposure cause long-term genetic damage is simply not supported by the scientific or medical literature, or by any observations that have been made.

Caldicott's claim that radiation is also responsible for a host of genetic diseases is similarly dubious. The world's premier radiation science organizations (the International Commission on Radiological Protection, the United Nations Scientific Committee on the Effects of Atomic Radiation, and the National Council on Radiation Protection and Measurements) all agree that, if radiation contributes to multi-factorial disease, the effect is very weak indeed – possibly too weak to be distinguished from natural sources of these diseases. Specifically, UNSCEAR calculated that – if pre-conception radiation exposure can cause these problems – exposing each generation of the population to 1 rem of radiation might lead to an additional 100 cases of dominant genetic disease and 15 cases of recessive genetic disease per million births per generation (the ICRP calculated similar, but lower, rates). This is far lower than the background incidence of genetic disease in the population as a whole. UNSCEAR also determined that "multifactorial diseases are predicted to be far less responsive to induced mutations than Mendelian disease, so the expected increase in disease frequencies are very small" – a statement with which the ICRP agrees. In other words, Caldicott's claim runs contrary to the best work of the most respected scientific organizations that specialize in radiation health effects.
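To put the UNSCEAR figures in perspective, here is a rough sketch in Python. The induced-case numbers come from the paragraph above; the baseline incidence is a placeholder of mine, chosen only to be on the order of tens of thousands of cases per million births, so treat the final ratio as purely illustrative:

```python
# UNSCEAR's worst-case estimates for 1 rem of pre-conception exposure
# per generation, per million births (numbers from the text above).
dominant_per_million = 100
recessive_per_million = 15
induced = dominant_per_million + recessive_per_million

# ASSUMPTION: a placeholder baseline for Mendelian genetic disease, used
# only to show that the induced cases are a small fraction of background.
assumed_baseline_per_million = 25_000

print(f"Induced cases: {induced} per million births ({induced / 1e6:.4%} of births)")
print(f"Share of assumed baseline: {induced / assumed_baseline_per_million:.1%}")
```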

With respect to the length of time required for genetic effects – if any – to manifest themselves, I honestly don't know where Caldicott got the number of 20 generations. It's a number I haven't seen anywhere in the scientific literature, in any of the genetics classes I took in grad school, or in anything I ever calculated or saw calculated. As near as I can tell, she is either repeating something she heard somewhere or made the number up to impress the reader.

Conclusion

The bottom line is that Caldicott's editorial is grounded more in invective than in scientific or medical fact. The Fukushima accident was bad, but it pales in comparison to the natural disaster that set it off. The aftereffects of the accident are bad enough – thousands of families displaced, hundreds of thousands of Japanese evacuated from their homes, along with the stress, anxiety, and depression they have suffered. TEPCO and the Japanese government will have to spend billions of dollars tearing down the plant and billions more cleaning up the contaminated area – in many cases cleaning up places not because they pose a genuine risk to life and health but because contamination levels exceed an arbitrary threshold. Things are bad enough without Caldicott trying to score cheap points with claims that have no connection to scientific or medical reality, simply to advance her anti-nuclear agenda. Her article does nothing to advance the debate – it only exploits the tragedy in Japan to inflame the public's fears.


Defending the Earth

The 60-mile diameter Manicouagan impact feature in Canada

As astrophysicist Neil deGrasse Tyson has pointed out, we live in a cosmic shooting gallery. Less than a year ago a good-sized chunk of cosmic rock exploded over the Russian city of Chelyabinsk with a force of over 400 kilotons – roughly 30 times as powerful as the bomb that flattened Hiroshima. The blast was huge, blowing out windows and knocking people off their feet over hundreds of square miles – more than 1,500 people sought medical care for their injuries. And that was a fairly small rock, about the size of a school bus. There are much larger rocks out there with our name on them – like the 6-mile asteroid that dredged a hundred-mile crater (killing off the dinosaurs in the process), or the even larger ones that excavated craters over 160 miles in diameter in Canada.

In fact, there are at least four craters on Earth formed by impacts large enough to cause mass extinctions – and those are only the ones we know about. Given that well over half the Earth's surface is covered by water, it stands to reason that there have been roughly twice as many huge ocean impacts as land impacts. On top of that, we have to wonder how many craters have eroded away, been buried by sediments, or been destroyed by plate tectonics. Over the history of our planet it's possible that we've had our bell rung by at least a dozen major impacts – one every few hundred million years or so. Since complex multicellular life has only been around for 500-600 million years, most of these impacts would be invisible in the fossil record, but every one of them would have been catastrophic to life all over the planet – any of them would have been fatal to our civilization and would have pushed humanity to (maybe even past) the brink of extinction. And remember – it doesn't take a dinosaur-killing strike to end our civilization; something far smaller would suffice. Considering all of this, it might not be a bad idea to have some contingency plans.
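As a rough sanity check on those odds, here is a small Python sketch that treats major impacts as a Poisson process at the long-run rate implied above – a dozen impacts over Earth's roughly 4.5-billion-year history. Both inputs are order-of-magnitude guesses, not data:

```python
import math

# Order-of-magnitude impact odds from "about a dozen major impacts
# over Earth's history" - illustrative inputs, not measurements.
MAJOR_IMPACTS = 12
EARTH_AGE_YEARS = 4.5e9
RATE_PER_YEAR = MAJOR_IMPACTS / EARTH_AGE_YEARS   # ~2.7e-9 per year

def prob_at_least_one(years: float) -> float:
    """P(at least one major impact in a window), assuming a Poisson process."""
    return 1.0 - math.exp(-RATE_PER_YEAR * years)

print(f"Next century:           {prob_at_least_one(100):.1e}")  # ~3e-7
print(f"Next 100 million years: {prob_at_least_one(1e8):.0%}")  # ~23%
```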

Believe it or not, there's been a fair amount of work on this topic – watching the 1994 impact of Comet Shoemaker-Levy 9 leave Earth-sized bruises on the face of Jupiter convinced scientists that cosmic impacts still play an important role in today's Solar System. That led Congress to task NASA with locating all of the largest asteroids that have a chance of hitting Earth – to date, American programs have located over 2,400 near-Earth asteroids, many of them large enough to pose a serious threat to our civilization.

Locating threats is a good first step, but it would be nice to be able to do something other than passively watch an asteroid all the way to a collision – to be able to deflect it somehow. Over the years there have been a number of suggestions, including gravitational tractors (parking a massive spacecraft nearby and letting its gravity tug the asteroid off a collision course), using a giant mirror to heat one side of the asteroid and divert it, and even coating half the asteroid with reflective material so that the very slight pressure of reflected light pushes it out of our path. But the more dramatic methods – usually involving rocket motors or nuclear explosives – have largely been relegated to the realm of science fiction.
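To get a feel for why the gentle methods need decades of lead time, here is a rough order-of-magnitude sketch of a gravity tractor in Python. The spacecraft mass and standoff distance are assumptions of mine, not figures from any mission study:

```python
# Gravity-tractor arithmetic under assumed (not sourced) parameters.
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
SPACECRAFT_MASS_KG = 2e4   # ASSUMED: a 20-tonne spacecraft
STANDOFF_M = 200.0         # ASSUMED: hovering 200 m from the asteroid's center

# The acceleration imparted to the asteroid is G*m/d^2 - notably, it does
# not depend on the asteroid's mass.
accel = G * SPACECRAFT_MASS_KG / STANDOFF_M**2

years = 10
seconds = years * 365.25 * 24 * 3600
delta_v = accel * seconds

print(f"Tug acceleration: {accel:.1e} m/s^2")
print(f"Velocity change after {years} years: {delta_v * 1000:.0f} mm/s")  # ~10 mm/s
```

A velocity change of around a centimeter per second, applied a decade or more before a predicted collision, is often cited as enough to turn a hit into a miss – which is exactly why early detection matters.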

Part of the reason is that rockets and explosions are dramatic, high-impact events – not only are they hard to get into position, but they are also just as likely to break an asteroid into pieces as to push it off course. That might seem like an improvement – but in actuality, getting hit with three 2-mile-diameter rocks is about as bad as (maybe even worse than) being hit with a single 4-mile object. Unless we break the incoming object into pieces small enough to burn up while passing through the atmosphere, we might end up making things worse – the sketch after this paragraph shows why. Nevertheless, the concept of using nuclear weapons to help divert an incoming asteroid remains under consideration. In general, the further out we can predict a collision, the more time we have to avoid trouble – and the gentler the methods we can use. But if we don't see something until the last minute – a few years before collision – we might have to resort to more violent methods. This is where nuclear weapons might play a role, and according to a recent story in the Global Security Newswire, both Russian and American scientists are interested in using their skills to help develop weapons that might save our bacon.
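Here is the fragmentation sketch promised above, in Python. It assumes the fragments keep the parent body's density and speed, and that blast-damage area scales as energy to the two-thirds power (the usual cube-root scaling of blast radius) – both assumptions of mine, for illustration only:

```python
# Why breaking up an impactor may not help: compare one 4-mile body
# against three 2-mile fragments (same density and speed assumed).

def relative_energy(diameter_miles: float) -> float:
    """Kinetic energy relative to a 1-mile body (mass scales as diameter cubed)."""
    return diameter_miles ** 3

single = relative_energy(4.0)
fragments = 3 * relative_energy(2.0)
print(f"Fragments carry {fragments / single:.0%} of the single body's energy")  # 38%

# ASSUMED scaling: damaged area grows as energy**(2/3).
area_single = single ** (2 / 3)
area_fragments = 3 * relative_energy(2.0) ** (2 / 3)
print(f"...but blanket {area_fragments / area_single:.0%} of the damage area")  # 75%
```

On these assumptions the three fragments carry only about three-eighths of the parent's energy yet cover about three-quarters of the damage area – which is why fragmentation buys little unless the pieces end up small enough to burn up in the atmosphere.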

So here's the question – actually one of many: are nuclear weapons designers, and the governments that employ them, really interested in saving the planet, or are they just looking for a pretext to keep working on (and maybe testing) new and improved weapons? And a follow-on question: there's a very real risk of a catastrophic collision over the next hundred million years, but a very small risk in the next century; do we face a greater risk from a possible asteroid collision, or from developing and testing a new generation of nuclear explosives ostensibly aimed at averting one?

I don't have an answer, but society needs to decide. If we, as a society, decide that the risk of a civilization-ending asteroid strike is high enough that we need plans, backup plans, and an ultimate backup, then we will need not only to design but also to test new nuclear weapons that might someday save humanity – and we'll have to trust the governments and scientists who design and test these devices to use them only for that purpose. If we can't make that leap of faith, then perhaps we ought to beef up our efforts to locate and track everything that poses a risk, so that we never need to fall back on a last-ditch, last-minute effort to blow something out of our sky.

Personally, I think it makes sense to hedge our bets. Only a few nations have proven themselves capable of developing an asteroid-moving nuclear weapon, and all of them have shown themselves able to resist the temptation to use these weapons in tense situations. I'd like to think they will continue to show that level of restraint. And I have to say that, to me, there is a certain symmetry in the idea that the weapons we feared might destroy civilization and bring on a nuclear winter might one day be used to save the world.
