In closing

"For a successful technology, reality must take precedence over public relations, for nature cannot be fooled." – Nobel prize-winning physicist Richard Feynman, in an appendix to the report on the loss of the space shuttle Challenger.


The first post in this series was put up a little more than two years ago and I’ve written a hundred of them (a dozen more, counting Martin Hellman’s estimable contributions). And, for reasons both personal and professional, it’s time to draw this blog to a close. I have enjoyed writing it and I have enjoyed the thoughtful comments that so many of you have made – I hope that you’ve gotten as much out of it as I have. And, as the habit dies hard, I’d like to take one final opportunity to opine, if I may.

Although the topics covered have been primarily radiological and nuclear-related, I have at times delved into areas of geology, astronomy, the life sciences, and even into philosophy and ethics. But regardless of the topic I have tried to take the same approach to everything – a skeptical look at the science that underlies science-based claims and stories. Anybody can use invective, rely on "gut" feelings, cast aspersions, and so forth – but if a question rests on a foundation of science then it cannot be resolved without understanding that science. And any attempt to circumvent the science tends to be an attempt to circumvent the facts – to bolster an argument that might have little or no basis.

To me, skepticism is of paramount importance – but I need to make sure we're all on the same page with what is meant by skepticism. First, being skeptical does not mean simply rejecting every claim or statement that's made – this is simply being contrary, and contrarianism is actually fairly brainless. It doesn't take much to say "you're wrong" all the time, and it takes no thought at all to have this as your default response. Being skeptical also doesn't mean steadfastly opposing a particular point of view, regardless of any information that might support that point of view. This approach is denialism, and it also requires little thought or effort. Skepticism is a more difficult beast – it means questioning, probing, and ultimately deciding whether or not the weight of evidence supports the claim being made. And – very importantly – skepticism also means questioning claims that might support your preconceptions, lest you fall prey to confirmation bias. In fact, I remember looking at some plots of data with my master's advisor – he commented that "they look plausible but they're not what I'd expected; so they might just be right." Skepticism takes work, but if the stakes (intellectual, scientific, technical, societal, or otherwise) are high enough then it's effort that must be made.

Unfortunately, the reality of science is that what is true is often counter-intuitive, contrary to what we think we see, and different from what we would like to be the case. At one time in the past, for example, fossils were thought to be rocks that looked strangely like bones and shells, the Earth resided at the center of an infinite universe, time moved at the same rate for everybody everywhere, and mountains formed as the Earth slowly shrank due to cooling. That we now know differently is due to past scientists exercising their skepticism and their rationality, and choosing to look beyond what their gut feelings were telling them.

The fact is that the world and the universe run according to the laws of science – astronomers have found fairly convincing evidence that the laws of physics seem to be the same across the universe, while geologists and physicists have shown similar consistency over time. Not only that, but the scientific method has been developed, refined, and tested over centuries. To have all of the tools of science available to us and to simply disregard them in favor of an emotional gut feeling is something I just can't understand. Gut feelings, instinct, and intuition have their place in some areas – fields that are more person-oriented – but they have only limited utility in science-based arguments. Let's face it – whether we're talking about radiation dose limits, global warming, nuclear energy, vaccines, or any of the myriad questions with which we are confronted – if we ignore the science then we cannot arrive at a good answer except by sheer chance. To that end, I'd like to draw your attention to a fascinating website, a checklist, and an associated paper.

These links deal with forecasting – along the lines of weather forecasting, but extended to a number of areas in which people make predictions about what might happen next – and they have relevance to many areas of science. Predictions can take the form of models (such as climate models), calculations of cancer risk from radiation, forecasts of the stock market, or predictions of terrorist activities. People – even trained scientists – are often not very good at assessing these sorts of questions; this is why we have developed the scientific method and why the scientific process can take years or decades to play out. But even then, scientists are frequently too willing to rely on their scientific intuition, to make predictions based on their experience rather than on a scientific process, to overlook (or exclude) information that doesn't support their hypotheses, and to give excessive weight to studies that agree with them. The principles outlined in the website, checklist, and paper I've linked to can help all of us avoid the mistakes of thinking that can otherwise lead us astray.

The bottom line is that the universe runs according to science and it doesn't care what we would like to be true. All of our wishful thinking and outrage can't change the laws of physics; and issues of fairness – even ethics and morality – don't matter to the universe one whit. If we try to use these considerations to settle scientific questions – regardless of how important they might be in unscientific matters – we will be led astray.

I would like to invite you to continue exercising your own skepticism, especially any time you read (or hear) a story that seems either too good to be true, or too bad to be true. Be on the lookout for pathological science and for arguments that play to the emotions rather than to the rational and the scientific. Being a skeptic doesn’t mean being a contrarian – it means that you ask someone to prove their case to you rather than just accepting it at face value. It also means trying – as much as possible – to remove your feelings from the picture; once you think you’ve figured out what’s going on you can decide how it makes you feel but you can’t use your emotions to solve a scientific problem.

So, as a parting thought, I would urge you to take the time to think carefully about all of the media stories that are (or ought to be) science-based. If claims seem to be incredible – either too good or too dire – ask yourself if they make sense. Take an hour to go through the Standards and Practices for Forecasting (linked to earlier in this post) to see whether or not the argument(s) presented have any legitimate scientific justification, or if they are simply the opinions of a scientist, however dressed up they might be. Most importantly, as Ronald Reagan famously told Mikhail Gorbachev with regard to nuclear weapons limits, "trust but verify."

Again, I've enjoyed writing ScienceWonk for the last two years. I very much appreciate the Federation of American Scientists for giving me a home for this blog, and I especially appreciate all of you who have taken the time to read it, to comment, and hopefully to think about what I've written. Many thanks for your attention – and I hope you have gotten as much out of it as I have.


A foolish consistency

Consistency is good – there's a sense of security in knowing that some things will generally remain constant over time. We can always count on gravity, for example, to hold us firmly to the ground; politicians are typically pandering and self-serving; I can count on radioactivity to consistently decay away; and so forth. Of course, not all consistency is good – as Emerson noted, "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines." We can also count on the American public to consistently question whether or not evolution actually occurs; many of us know that our perfectionist boss will always insist on yet another round of reviews and edits before letting a document go out the door; we will always find people who are apparently proud of their lack of knowledge; and we can expect that a certain category of blogger will continue to see the end of the world on the near horizon. It is this latter category I'd like to talk about this time – particularly the batch that continues to insist that the reactor accident at the Fukushima Dai'ichi site is going to kill millions.

Before launching into this piece I’d like to point you to a wonderful counter-example of what I just said – a blog posting by oceanographer and University of Washington professor Kim Martini. I have been accused of being part of the pro-nuclear and/or pro-radiation lobby because of my long years of experience as a radiation safety professional – Dr. Martini told me that she became interested in this topic, researched it herself, and came to her conclusions independently of the nuclear energy and radiation safety professionals. In short, she is scientifically competent, intelligent, and has no reason to be biased either pro- or anti-nuclear.

The latest round of Fukushima silliness is the contention that Americans need to evacuate the West Coast because of an apparently imminent release from one or more of the affected reactors and/or the Reactor 4 spent fuel pool. There are also those who blame the Fukushima accident for massive starfish die-offs, for sick animals along the Alaskan coast, and more – all of which (according to the good Dr. Martini) are far from accurate. And anti-nuclear activist Helen Caldicott has gone as far as to state that the entire Northern Hemisphere might need to be evacuated if things get as bad as she fears and the Unit 4 spent fuel pool collapses. So let’s see what the facts are, what the science can tell us, and what the real story might be.

Can the melted reactors go critical?

There have been predictions that the ruined reactor cores will somehow achieve criticality, producing more fission products and spreading more contamination into the water. While this is not, strictly speaking, impossible, it is highly unlikely – sort of like saying that it is remotely possible that Bill Gates will leave me his fortune, but I'm still contributing to my 401(k) account. To achieve criticality (to a nuclear engineer or a reactor operator, "criticality" simply means that the reactor is operating at a constant power) requires reactor fuel that's enriched to the right percentage of U-235, a critical mass of the uranium (enough to sustain a chain reaction), and a configuration (the critical geometry) that will permit fission to occur. Also important in most reactors is a moderator – a substance such as water that will slow neutrons down to the point where they can be absorbed and cause the U-235 atoms to fission. Reactors such as the ones destroyed at Fukushima require all of these components to achieve criticality – take away any one of them and there will be no fission chain reaction.

The ruined reactor cores meet some of these requirements – since they’d been operating at the time of the accident we know that they had a critical mass of sufficiently enriched uranium present. Surrounded by water (either seawater or groundwater), they are likely also immersed in a moderator. But absent a critical geometry the cores cannot sustain a fission chain reaction. So the question is whether or not these cores can, by chance, end up in a critical geometry. And the answer to this is that it is highly improbable.

Consider, for example, the engineering and design that goes into making a nuclear reactor core. Granted, much of this design goes into making the reactors as efficient and as cost-effective to operate as possible, but the fact is that we can’t just slap some uranium together in any configuration and expect it to operate at all, let alone in a sustained fashion. In addition, reactors keep their fuel in an array of fuel rods that are immersed in water – the water helps slow the neutrons down as they travel from one fuel element to the next. A solid lump of low-enriched uranium has no moderator to slow down these neutrons; the only moderated neutrons are those that escape into the surrounding water and bounce back into the uranium; the lumps in a widely dispersed field of uranium will be too far apart to sustain a chain reaction. Only a relatively compact mass of uranium that is riddled with holes and channels is likely to achieve criticality – the likelihood that a melted core falling to the bottom of the reactor vessel (or the floor of the containment) would come together in a configuration that could sustain criticality is vanishingly low.

How much radioactivity is there?

First, let's start off with the amount of radioactivity that might be available to release into the ocean. Most of it comes from the uranium fission that was taking place in the cores until the reactors were shut down – the uranium itself is slightly radioactive, but each uranium atom that's split produces two radioactive atoms (fission fragments). The materials of the reactor itself become radioactive when they're bombarded with neutrons, but these metals are very corrosion-resistant and aren't likely to dissolve into the seawater. And then there are transuranic elements such as plutonium and americium, formed in the reactor core when the non-fissioning U-238 captures neutrons. Some of these transuranics have long half-lives, but a long half-life means that a nuclide is only weakly radioactive – it takes about 15 grams of Pu-239 to hold as much radioactivity as a single gram of radium-226 (about 1 Ci or 37 GBq in a gram of Ra-226), and one gram of Cs-137 has about as much radioactivity as over a kilogram of Pu-239. So the majority of radioactivity available to be released comes from the fission products, with activation and neutron capture products contributing in a more minor fashion.
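
For readers who want to check that arithmetic, here is a minimal Python sketch of the specific-activity comparison, using standard half-life and molar-mass values; the numbers are rounded and purely illustrative.

```python
import math

AVOGADRO = 6.022e23        # atoms per mole
SECONDS_PER_YEAR = 3.156e7
BQ_PER_CURIE = 3.7e10

# Half-lives (years) and molar masses (g/mol) -- standard reference values
nuclides = {
    "Pu-239": (24110, 239),
    "Ra-226": (1600, 226),
    "Cs-137": (30.17, 137),
}

def specific_activity_ci_per_gram(half_life_years, molar_mass):
    """Activity of one gram of a pure nuclide, in curies: A = lambda * N."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)  # per second
    atoms_per_gram = AVOGADRO / molar_mass
    return decay_constant * atoms_per_gram / BQ_PER_CURIE

for name, (t_half, mass) in nuclides.items():
    print(f"{name}: {specific_activity_ci_per_gram(t_half, mass):.3g} Ci/g")
# Roughly 0.06 Ci/g for Pu-239, 1 Ci/g for Ra-226, and 87 Ci/g for Cs-137 --
# about 16 g of Pu-239 to match a gram of radium and about 1.4 kg to match
# a gram of Cs-137, consistent with the comparison above.
```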

This part is basic physics and simply isn’t open to much interpretation – decades of careful measurements have shown us how many of which fission products are formed during sustained uranium fission. From there, the basic physics of radioactive decay can tell us what’s left after any period of decay. So if we assume the worst case – that somehow all of the fission products are going to leak into the ocean – the logical starting place is to figure out how much radioactivity is even present at this point in time.

In January 2012 the Department of Energy's Pacific Northwest National Laboratory (PNNL) used a sophisticated computer program to calculate the fission product inventory of the #1 and #3 reactors at the Fukushima Dai'ichi site – they calculated that each reactor held about 6.2 million curies (about 230 billion mega-becquerels) of radioactivity 100 days after shut-down. The amount of radioactivity present today can be calculated (albeit not easily, due to the number of radionuclides present) – it reflects what was there at shutdown minus what has decayed away in the nearly three years since. After 1000 days (nearly 3 years) the amount of radioactivity is about 1% of what was present at shutdown (give or take a little) and about a tenth of what was present after 100 days. Put all of this together, account for what was present in the spent fuel pools (the reactor in Unit 4 was empty but the spent fuel pool still contains decaying fuel rods), and it seems that the total amount of radioactivity present in all of the affected reactors and their spent fuel pools is in the vicinity of 20-30 million curies at this time.

By comparison, the National Academy of Sciences calculated in 1971 (in a report titled Radioactivity in the Marine Environment) that the Pacific Ocean holds over 200 billion curies of natural potassium (about 0.01% of all potassium is radioactive K-40), 19 billion curies of rubidium-87, 600 million curies of dissolved uranium, 80 million curies of carbon-14, and 10 million curies of tritium (both C-14 and H-3 are formed by cosmic ray interactions in the atmosphere).

How much radioactivity might be in the water?

A fair amount of radioactivity has already escaped from Units 1, 2, and 3 – many of the volatile and soluble radionuclides have been released to the environment. The radionuclides that remain are still in the fuel precisely because they are either not very mobile in the environment or locked inside the fuel itself. Thus, it's unlikely that a high fraction of this radioactivity will be released. But let's assume for the sake of argument that 30 million curies of radioactivity are released into the Pacific Ocean to make their way to the West Coast – how much radioactivity will be in the water?

The Pacific Ocean has a volume of about 7×10²³ ml, or about 7×10²⁰ liters, and the North Pacific has about half that volume (it's likely that not much water has crossed the equator in the last few years). If we ignore circulation from the Pacific into other oceans and across the equator the math is simple – 30 million curies dissolved into 3×10²⁰ liters comes out to about 10⁻¹³ curies per liter of water, or about 0.1 picocuries (pCi) per liter (1 curie is a million million pCi). Natural radioactivity (according to the National Academy of Sciences) from uranium and potassium in seawater is about 300 pCi/liter, so this is a small fraction of the natural radioactivity in the water. If we make a simplifying assumption that all of this dissolved radioactivity is Cs-137 (the worst case) then we can use dose conversion factors published by the US EPA in Federal Guidance Report #12 to calculate that spending an entire year immersed in this water would give you a radiation dose of much less than 1 mrem – a fraction of the dose you'd get from natural background radiation in a single day (natural radiation exposure from all sources – cosmic radiation, radon, internal radionuclides, and radioactivity in the rocks and soils – is slightly less than 1 mrem daily). This is as close as we can come to zero risk.
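
Here is a quick sketch of that worst-case dilution arithmetic; the final dose step needs the Federal Guidance Report #12 immersion coefficients, which I haven't reproduced here, but the concentration comparison is simple enough to check directly.

```python
# Back-of-the-envelope dilution estimate for the worst case described above.
RELEASED_CURIES = 30e6           # assume every remaining curie ends up in the sea
NORTH_PACIFIC_LITERS = 3e20      # roughly half of the Pacific's ~7e20 liters
PCI_PER_CURIE = 1e12

concentration = RELEASED_CURIES * PCI_PER_CURIE / NORTH_PACIFIC_LITERS  # pCi/L
natural_background = 300         # pCi/L from uranium and K-40 in seawater (NAS, 1971)

print(f"Worst-case Fukushima contribution: {concentration:.2f} pCi/L")
print(f"Fraction of natural seawater activity: {concentration / natural_background:.2%}")
# About 0.1 pCi/L, a few hundredths of a percent of the ocean's natural radioactivity.
```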

This is the worst case – assuming that all of the radioactivity in all of the reactors and spent fuel pools dissolves into the sea. Any realistic case is going to be far lower. The bottom line is that, barring an unrealistic scenario that would concentrate all of the radioactivity into a narrow stream, there simply is too little radioactivity and too much water for there to be a high dose to anyone in the US. Or to put it another way – we don’t have to evacuate California, Alaska, or Hawaii; and Caldicott’s suggestion to evacuate the entire Northern Hemisphere is without any credible scientific basis. And this also makes it very clear that – barring some bizarre oceanographic conditions – radioactivity from Fukushima is incapable of causing any impact at all on the sea life around Hawaii or Alaska let alone along California.

Closing thoughts

There's no doubt that enough radiation can be harmful, but the World Health Organization has concluded that Fukushima will not produce any widespread health effects in Japan (or anywhere else) – just as Chernobyl failed to do nearly three decades ago. And it seems that, as more time goes by without the massive environmental and health effects they've predicted, the doom-sayers become increasingly strident, as though shouting ever-more dire predictions at ever-higher volume will somehow compensate for the fact that those predictions have come to naught.

In spite of all of the rhetoric, the facts remain the same as they were in March 2011 when this whole saga began – the earthquake and tsunami have killed over 20,000 people, while radiation has killed none and (according to the World Health Organization) is likely to kill none in coming years. The science is consistent on this point, as is the judgment of the world's scientific community (those who specialize in radiation and its health effects). Sadly, the anti-nuclear movement also remains consistent in trying to use the tragedy of 2011 to stir up baseless fears. I'm not sure which of Emerson's categories they would fall into, but I have to acknowledge their consistency, even when the facts continue to oppose them.


The Mexican radiation accident (Part I)

Most news stories involving radiation are, to be blunt, overblown. Radiation can be dangerous, but the risk it actually poses is usually far lower than what the media stories would have us believe. So my first inclination when I hear about another story involving "deadly radiation" is to be skeptical. And then every now and again there's the exception – a story about radiation that's not overblown and an incident in which there is a very real risk; sometimes an incident in which lives are put at risk or even ended. Last week we had the latter sort of radiation story, and it's worth a little discussion.

First, a short recap. A cancer therapy clinic in Tijuana, Mexico was shipping a highly radioactive radiation therapy source to Mexico's radioactive waste disposal facility near the center of the nation – at the time of the theft the source consisted of over 2500 curies of cobalt-60. Auto theft is common in Mexico – the truck driver claims he was sleeping in the truck at the side of the road when armed thieves ordered him out and stole it, source and all. There is every indication that the thieves were unaware of the source itself – that they were after the truck. And recent history bears this out, since there have been a number of similar thefts (albeit with lower-activity sources) in recent years. Anyhow, the thieves seem to have removed the source from the back of the truck; it was found at the side of the road several miles from where the abandoned truck was located. From here things get a little speculative – a Mexican official feels it likely that at least a few of the thieves were exposed to fatal doses of radiation, and a half-dozen people came forward to be tested for radiation sickness (the tests came back negative). At last report the source was under guard by the Mexican military, with a perimeter about 500 meters (a little over a quarter mile) away. So with this as a backdrop, let's take a look at the science behind all of this.

Dose and dose rates

First, let’s think about the radiation dose rates and doses – the most important question in any radiation injury situation is how much dose a person received.

Radiation dose is a measure of the amount of energy deposited in a receptor – in this case, the receptor would be the thieves, but it could just as easily be a radiation detector. Cobalt-60 emits two high-energy gamma rays; one curie of Co-60 gives off enough energy to expose a person to a dose rate of 1.14 R/hr at a distance of a meter (about arm's length). So 2500 curies of activity will produce a dose rate of about 2850 R/hr a meter away. A radiation dose of 1000 rem is invariably fatal, so a person would receive a fatal dose of radiation in a little over 20 minutes. Without medical treatment a dose of 400 rem is fatal to half of those who receive it – a person would receive this dose in about eight minutes a meter away. And the roughly 100 rem it takes to cause radiation sickness would be received in only 2-3 minutes (although the symptoms might not manifest themselves for a few weeks). No two ways about it – this was a very dangerous source.
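
A short sketch of that arithmetic, treating the source as an unshielded point source and (as is conventional for gamma exposure) treating 1 R of exposure as roughly 1 rem of dose:

```python
# Dose-rate arithmetic for the stolen source (illustrative, unshielded point source).
GAMMA_CONSTANT = 1.14      # R/hr per curie at 1 meter for Co-60, the value used above
ACTIVITY_CI = 2500         # curies of Co-60 at the time of the theft

dose_rate = GAMMA_CONSTANT * ACTIVITY_CI       # ~2850 R/hr at arm's length
for dose_rem, label in [(1000, "invariably fatal"),
                        (400, "fatal to about half without treatment"),
                        (100, "enough for radiation sickness")]:
    minutes = dose_rem / dose_rate * 60
    print(f"{dose_rem:>4} rem ({label}): about {minutes:.0f} minutes at 1 meter")
```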

Radiation dose rate drops off with the inverse square of one's distance from a source, so doubling your distance reduces the dose rate by a factor of four (and tripling your distance, by a factor of nine). This means that distance is your friend – take a long step away and a source that can be fatal in 20 minutes at arm's length will take 80 minutes to have the same impact – still dangerous, but a little less immediately so. At a distance of 100 meters the dose rate will be almost 0.3 R/hr – about the same dose in one hour that most of us receive in an entire year from natural sources. The perimeter was set up at a distance of 500 meters – the dose rate from an unshielded source there would be about 12 mR/hr – at least 500 times normal environmental radiation levels, but well within the realm of safety. I have some radiation detectors that will accurately measure dose rates only slightly higher than natural background levels – to get to the point at which the stolen source would fail to show up on these more sensitive detectors I'd have to be close to ten miles away. This doesn't mean that the radiation is dangerous at these distances – just that it would be detectable.
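
And the inverse-square falloff for the same source, again for an unshielded source and ignoring air attenuation (which would knock the longest-distance numbers down considerably):

```python
# Inverse-square falloff for the same unshielded source; air attenuation is ignored,
# which overstates the dose rate at the longest distances.
def dose_rate_mr_per_hr(distance_m, rate_at_1m_r_per_hr=2850.0):
    return 1000 * rate_at_1m_r_per_hr / distance_m**2

for d in (10, 100, 500, 16000):   # 16,000 m is roughly ten miles
    print(f"{d:>6} m: {dose_rate_mr_per_hr(d):g} mR/hr")
# ~285 mR/hr at 100 m, ~11 mR/hr at the 500 m perimeter, and ~0.01 mR/hr
# (comparable to natural background) at ten miles.
```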

Why Co-60?

Of course, a good question to ask is why there was cobalt-60 on the truck in the first place. And this gets a little more involved than one might think, going back over a century.

It didn’t take long for people to realize that radiation can burn the skin – within the first decade after its discovery there was anecdotal evidence of its ability to cause harm, which was confirmed by experiments. And it didn’t take much of a leap of imagination to figure out that, if radiation can burn healthy skin then it can also be used to burn out unwanted tissue – such as cancers. So doctors began experimenting, settling quickly on radium as a cancer therapy. Radium, though, has its own problems, including the fact that it decays to radioactive progeny nuclides – with the advent of the nuclear age scientists found they could produce a highly radioactive nuclide of cobalt that emitted high-energy gammas that were ideal for reaching even those cancers buried deep within the body. Other nuclides were also discovered – Cs-137 and Ir-192 are among them – but cobalt does a great job.

For over a half-century these artificial radionuclides ruled the roost in radiation oncology, joined by iodine (I-131) for treating cancers of the thyroid. But radionuclides have their own problems, chief among them being that they can never be turned off (so they always pose a risk) and that they require a costly radioactive materials license. As technology improved, many of the more advanced nations began using linear accelerators to produce more finely tuned beams of radiation – today Co-60 is rarely used for cancer therapy in the US, Japan, or Western Europe. On the other hand, linear accelerators are expensive and they need a fairly high level of infrastructure to meet the precise power requirements of these touchy machines. So we still find cobalt irradiators in much of the developing world.

Mexico (among other nations) is in the process of swapping out its irradiators for linear accelerators, including at the Tijuana cancer clinic where this source originated. But even with a half-life of 5.27 years, simply letting the cobalt decay away isn't practical – getting down to innocuous levels takes many half-lives, a process that could span two generations or longer. So at some point these obsolete sources must be shipped for disposal – that was (and apparently still is) the fate in store for the Tijuana source.
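
As a rough illustration of that timescale, here is the simple decay arithmetic for a 2500-curie source, assuming pure exponential decay:

```python
import math

# How long until a 2500-curie Co-60 source decays to a given activity?
# Simple exponential decay: A(t) = A0 * 2**(-t / half_life).
HALF_LIFE_YEARS = 5.27
A0_CURIES = 2500

for target_ci in (100, 1, 0.001):
    years = HALF_LIFE_YEARS * math.log2(A0_CURIES / target_ci)
    print(f"down to {target_ci} Ci after about {years:.0f} years")
# Roughly 24 years to reach 100 Ci, about 60 years to reach 1 Ci, and over a
# century to reach millicurie levels -- hence "two generations or longer."
```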

But wait – there’s more!

There's more to this story than what I've gone into here, but space keeps me from getting into all the questions it raises. In particular, there have been a number of incidents over the last half-century or so in which radioactive sources such as this one have cost lives, contaminated consumer products, and contaminated scrap metal mills. Next week we'll talk about some of these incidents as well as the risk posed by these sources should they go accidentally or deliberately astray. At the same time we'll talk about radioactive materials security and what protective actions make sense.


Once more into the breach

I'd been planning on waiting a little longer before returning to the topics of Fukushima and radiation health effects, but a particularly egregious New York Times op-ed piece deserves some attention. So once more into the breach.

Writing in the October 30 New York Times, pediatrician and anti-nuclear activist Helen Caldicott used the nuclear reactor accident in Fukushima as an opportunity to express her concerns about nuclear energy – a calling she has followed since the Three Mile Island reactor accident. Unfortunately, Caldicott included a number of errors in her editorial that are sufficiently serious as to invalidate her conclusions. I’d like to take an opportunity to take a look at these mistakes and to explain the science behind them.

In the first paragraph of her article, Caldicott states that “the mass of scientific and medical literature…amply demonstrates that ionizing radiation is a potent carcinogen and that no dose is low enough not to induce cancer.”

To the contrary, even the most conservative hypothesis (linear no-threshold) holds that low doses of radiation pose very little threat of cancer. Using a slope factor of 5% added risk of cancer fatality per 1 Sv (100 rem) of exposure, the risk of developing cancer from 1 rem of radiation is about 0.05% (5 chances in 10,000). This risk is far lower than the risk of developing cancer as a habitual smoker, from working with a number of solvents (e.g. benzene) or laboratory chemicals, and so forth. Epidemiologists have noted no increase in cancer rates among people living in areas with high levels of natural background radiation, nor among the lowest-dose groups of atomic bomb survivors (in fact, people living in the states with the highest levels of natural radiation have lower cancer rates than those who live in the lowest-dose states). Not only that, but age-adjusted cancer rates have dropped steadily (with the exception of smoking-related cancers) over the last century, in spite of dramatic increases in medical radiation exposure. In the words of respected radiation biologist Antone Brooks, these observations show us that "if (low levels of) radiation cause cancer it's not a heavy hitter." The bottom line is that, even if the lowest doses of radiation can cause cancer (which has not been shown to be either correct or incorrect), radiation is a weak carcinogen – not the "potent carcinogen" that Caldicott would have us believe.
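
For what it's worth, the slope-factor arithmetic is trivial to lay out; this little sketch simply restates the numbers above under the LNT assumption.

```python
# The slope-factor arithmetic above, under the linear no-threshold assumption.
RISK_PER_SV = 0.05            # 5% added lifetime cancer fatality risk per sievert
dose_rem = 1.0
dose_sv = dose_rem / 100.0    # 100 rem = 1 Sv

added_risk = RISK_PER_SV * dose_sv
print(f"{dose_rem} rem -> added lifetime risk of {added_risk:.2%}, "
      f"or about {added_risk * 10000:.0f} chances in 10,000")
```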

In the second paragraph of her article, Caldicott states that “Large areas of the world are becoming contaminated by long-lived nuclear elements secondary to catastrophic meltdowns: 40% of Europe from Chernobyl, and much of Japan.”

This is a difficult statement to parse because it is so nebulous. If, by "contaminated," Caldicott means that radionuclides are present that would not otherwise be there, she is wrong about the extent – in fact, you can find traces of artificial radionuclides across virtually every square mile of Europe, Asia, and North America, not just the 40% she claims. But all that this means is that we can detect trace levels of these nuclides in the soil – doing the same, we can also find traces from the atmospheric nuclear weapons testing of the 1940s through the 1960s. And for that matter, we can find lead contamination over virtually the entire world from the days of leaded gasoline. Lead contamination goes much deeper as well – scientists have found traces of lead in Greenland glaciers that date back to the Roman Empire. But nobody is getting lead poisoning from the Ancient Romans' pollution, just as nobody is getting radiation sickness (or cancer) from the minute traces of Cs-137 and Sr-90 that can be found across the Northern Hemisphere. Caldicott can't really acknowledge that artificial nuclides have been detectable around the world for nearly 70 years, though, because doing so would shatter her claim that radioactive contamination from Fukushima and Chernobyl is causing death and destruction in Europe and Japan.

In the third paragraph, Caldicott states that “A New York Academy of Science report from 2009 titled ‘Chernobyl’ estimates that nearly a million have already died from this catastrophe. In Japan, 10 million people reside in highly contaminated locations.”

Caldicott is correct that the NYAS report claimed nearly a million deaths from Chernobyl. However, the report itself was highly criticized for being scientifically implausible – the NYAS is a respected organization, but in this case its conclusions are at odds with the reality noted on the ground by the World Health Organization. Specifically, the WHO concluded that in the first 20 years fewer than 100 people could be shown to have died from radiation sickness and radiation-induced cancers, and it further concluded that, even using the worst-case LNT model, fewer than 10,000 would eventually succumb to radiation-induced cancer as a result of this accident. This is not a trivial number – but it is less than 1% of the deaths the NYAS claims. And in fact the actual number is likely to be far lower, as physician Michael Repacholi noted in an interview with the BBC. In fact, even the WHO's International Agency for Research on Cancer acknowledges that "Tobacco smoking will cause several thousand times more cancer in the same population." Even if contamination from Chernobyl and Fukushima were sufficient to cause eventual health problems, we could do far more good for the public by devoting attention to smoking cessation (or, for that matter, to childhood vaccinations) than by spending hundreds of billions of dollars cleaning up contamination that doesn't seem to be causing any harm.

In the fourth paragraph of her piece, Caldicott notes that “Children are 10 to 20 times more radiosensitive than adults, and fetuses thousands of times more so; women are more sensitive than men.”

To the contrary – the National Academy of Sciences published a sweeping 2006 report that summarizes the state of the world's knowledge on the "Health Risks from Exposure to Low Levels of Ionizing Radiation," in which it concludes that children are roughly two to three times as sensitive to radiation as adults – more sensitive than adults, but a far cry from Caldicott's claim.

The reproductive effects of radiation are also well-known – fetal radiation exposures of less than 5 rem are incapable of causing birth defects according to our best science, and the Centers for Disease Control flatly states that exposure to even higher radiation doses is not a cause for alarm under most circumstances. This conclusion, by the way, is based on studies of hundreds of thousands of women who were exposed to radiation from medical procedures as well as during the atomic bombings in Japan – it is based on a tremendous amount of hard evidence.

This claim of Caldicott’s, by the way, is particularly egregious and has the potential to do vast harm if it’s taken seriously. Consider – in the aftermath of the Chernobyl accident it is estimated that over 100,000 women had abortions unnecessarily because they received poor medical advice from physicians who, like Caldicott, simply didn’t understand the science behind fetal radiation exposure. There are estimates that as many as a quarter million such abortions took place in the Soviet Union, although these numbers can’t be confirmed.

But even in this country we see this level of misinformation causing problems today – during my stint as a radiation safety officer I was asked to calculate nearly 100 fetal radiation dose estimates – primarily for pregnant women who received x-rays following serious traffic accidents – and many of the women were seriously considering therapeutic abortions on the advice of their physicians. When I performed the dose calculations there was not a single woman whose baby received enough radiation to cause problems. And it doesn't stop there – we also had parents who refused CT scans for their children, preferring exploratory surgery and its attendant risks to the perceived risks from x-ray procedures. The bottom line is that this sort of thinking – that children and developing babies are exquisitely sensitive to radiation – can cause parents to choose needless abortions and can place children at risk; by espousing these views, Caldicott is transgressing the Hippocratic oath she took to "first do no harm" and she should be taken to task for doing so.

Finally, in the last paragraph of her tirade, Caldicott claims that “Radiation of the reproductive organs induces genetic mutations in the sperm and eggs, increasing the incidence of genetic diseases like diabetes, cystic fibrosis, hemochromatosis, and thousands of others over future generations. Recessive mutations take up to 20 generations to be expressed.”

All that I can say to this is that Caldicott decided to go out with a bang. The fact is that there is not a single case in the medical or scientific literature in which birth defects or genetic disease has been linked to pre-conception radiation exposure. This is not my conclusion – it's the conclusion of Dr. Robert Brent, who knows more about this topic than anyone else in the world. Eggs and sperm might be damaged, but Dr. Brent notes that there is a "biological filter" that prevents cells that are damaged from going on to form a baby. Another line of reasoning supports Brent's claim – areas with high levels of natural radiation also have no increase in birth defects compared to areas with lower levels of natural radiation. Caldicott's claim that low levels of radiation exposure cause long-term genetic damage is simply not supported by the scientific or medical literature or by any observations that have been made.

Caldicott's claim that radiation is also responsible for a host of genetic diseases is similarly dubious. The world's premier radiation science organizations (the International Commission on Radiological Protection, the United Nations Scientific Committee on the Effects of Atomic Radiation, and the National Council on Radiation Protection and Measurements) all agree that, if radiation contributes to multi-factorial disease, the effect is very weak indeed – possibly too weak to be distinguished from natural sources of these diseases. Specifically, UNSCEAR calculated that – if pre-conception radiation exposure can cause these problems – exposing each generation of the population to 1 rem of radiation might lead to an additional 100 cases of dominant genetic disease per million births per generation and 15 cases of recessive genetic disease (the ICRP calculated similar, but lower, rates). This is far lower than the background incidence of genetic disease in the population as a whole. Oh – UNSCEAR also determined that "multifactorial diseases are predicted to be far less responsive to induced mutations than Mendelian disease, so the expected increase in disease frequencies are very small" – a statement with which the ICRP is in agreement. In other words, Caldicott's claim runs contrary to the best work of the most-respected scientific organizations that specialize in radiation health effects.

With respect to the length of time required for genetic effects – if any – to manifest themselves, I honestly don't know where Caldicott pulled the number of 20 generations from. It's a number I haven't seen anywhere in the scientific literature, never encountered in any of the genetics classes I took in grad school, and never calculated or saw calculated. As near as I can tell, she is either repeating something she heard somewhere or she made the number up to impress the reader.

Conclusion

The bottom line is that Caldicott's editorial is grounded more on invective than on scientific or medical fact. The Fukushima accident was bad, but it pales in comparison to the natural disaster that set it off. The aftereffects of the accident are bad enough – thousands of families displaced, hundreds of thousands of Japanese evacuated from their homes, and the stress, anxiety, and depression they have been suffering. TEPCO and the Japanese government will have to spend billions of dollars tearing down the plant and billions more cleaning up the contaminated area – in many cases, cleaning up places not because they pose a genuine risk to life and health but because contamination levels exceed an arbitrary level. Things are bad enough as they are, and Caldicott is trying to score cheap points by making claims that have no connection to scientific or medical reality, simply in order to advance her anti-nuclear agenda. Her article does nothing to advance the debate – it only serves to use the tragedy in Japan to inflame the public's fears.


The dose makes the poison

One of the most potent arguments against all things nuclear is the idea that even a vanishingly small amount of radiation exposure has the chance to cause cancer. Even if that risk is incredibly low there's still a risk, and if a huge number of people are exposed to even a small risk then there could be a significant number of deaths. Say, for example, that the entire population of the US were exposed to something that carried a risk of one in a million – more than 300 people could die nationally.

We can debate whether or not we could "see" these deaths using epidemiology (for example, with over 500,000 cancer deaths annually, even as many as 400 additional cancer deaths crammed into a single year would represent an increase of less than one tenth of one percent), but that's not the point of this posting – rather, the point is to discuss two fascinating papers that examine the origins of the hypothesis that any incremental amount of radiation exposure can increase our risk of developing cancer, and that this added risk increases linearly with the amount of exposure; what is known as the Linear No-Threshold (LNT) hypothesis. Specifically, the author of these papers, respected University of Massachusetts toxicologist Edward Calabrese, presents a compelling case that the acceptance of this hypothesis as the basis of global radiation regulations is the result of a deliberate campaign that ignored a great deal of scientific evidence to the contrary. But first let's back up a little bit to discuss what LNT is and how it's used before digging into this matter and what it might mean.

When ionizing radiation passes through a cell there's a chance that it will interact with the atoms in that cell – it can strip electrons from neutral atoms, creating an ion pair. Where once there was a happy, electrically neutral atom there are now two charged particles: the atom, with a positive charge, and the negatively charged electron ejected by the radiation. Once formed, the ion pair might recombine, in which case the story is over. But the ions can also interact with other atoms and molecules in the cell, forming free radicals that can then go on to interact with DNA in the cell's nucleus. Sometimes these interactions cause DNA damage.

Of course, damaging DNA is only the first step in a process that might lead to cancer, but it’s most likely that nothing will happen. It could be that the damage is repaired by one or more of our exceptionally capable DNA repair mechanisms, and it’s also possible that any unrepaired damage will be in a stretch of “junk” DNA or in a gene that’s inactive in the affected cell. This is described in greater detail in an earlier posting in this series – for the purpose of this one, it’s safe to skip to the end, which is that the overwhelming majority of DNA damage is either repaired or has no impact on the organism (damage to junk DNA or to an inactive gene can’t go on to cause cancer). It’s only the unrepaired (or mis-repaired) DNA damage – and only damage that’s in one of very few specific genes – that can progress to a cancer.

There's more to the whole matter than this. For example, our cells are always experiencing DNA damage at quite substantial rates – one estimate is that each cell is subject to several million DNA-damaging events per year – and the damage due to radiation is indistinguishable from that caused by other agents. So to calculate the risk from a particular dose of radiation we've first got to understand how much DNA damage that dose will cause, then determine how much of the damage goes unrepaired (or mis-repaired), compare this level of damage to the background damage that is always afflicting our cells, and finally figure out whether or not the damage will affect one of the few genes that can progress towards cancer. The important part of this is that DNA damage due to radiation doesn't occur in a vacuum – it adds to the damage that is already occurring. It takes a dose of about 100 rem to double the amount of damage that occurs in a year – a dose that will increase a person's lifetime cancer risk by about 5% according to the current thinking. This relationship is well-accepted at radiation doses in excess of about 10 rem (over a lifetime – 5 rem if the exposure takes place in a very short period of time); the question is whether or not it remains constant at any level of radiation exposure, no matter how slight. This is where we get to Calabrese's recent work.

To use a simple analogy, think of the DNA damage in our cells as a variant on the bathtub problems we all got to solve in middle school algebra – the accumulation of DNA damage from whatever source is the water filling the tub, and the repair of this DNA damage (or the damage that occurs in inert sections of DNA) is the drain. If the rate of removal is the same as the rate of accumulation then there's no net impact on the health of the organism. So the question is whether or not the normal rate of accumulation is enough to max out our DNA damage repair mechanisms or whether our cells have residual repair capacity – and, on top of that, whether any residual capacity, once it kicks in, repairs exactly as much damage as was inflicted, a little bit more, or a little bit less. To use the tub analogy, if you have the faucet turned on full and the water level in the tub is holding steady, will pouring an additional stream of water into the tub cause it to overflow? If the drain is just barely keeping up with the influx then the tub will start to fill and will eventually overflow; otherwise it can accept a little more water without making a mess. So here's the question – if we don't know in advance the capacity of the drain, and if the answer is potentially a matter of life and death, what should we assume – the worst case or the best? In the absence of any firm information it makes sense to assume the worst, and in the case of radiation risk this would be LNT. But when further knowledge is available it makes sense to adapt our hypothesis to make use of the new information.
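
To make the analogy concrete, here is a deliberately toy simulation; the rates are invented for illustration and have nothing to do with real cellular biology. The point is simply that whether an added trickle of damage accumulates depends entirely on how much spare capacity the repair "drain" has.

```python
def accumulated_backlog(background_rate, radiation_rate, repair_capacity, steps=1000):
    """Toy bathtub model: unrepaired damage left after `steps` ticks.

    All rates are in arbitrary 'lesions per tick' units -- purely illustrative,
    not real cell biology.
    """
    backlog = 0.0
    for _ in range(steps):
        backlog += background_rate + radiation_rate    # water flowing into the tub
        backlog = max(0.0, backlog - repair_capacity)  # the drain removes what it can
    return backlog

# Case 1: repair capacity exceeds background plus added damage -> nothing accumulates.
print(accumulated_backlog(background_rate=10, radiation_rate=1, repair_capacity=12))
# Case 2: the drain was already running flat out -> every added lesion piles up.
print(accumulated_backlog(background_rate=10, radiation_rate=1, repair_capacity=10))
```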

This is precisely what Calabrese says some of the earliest researchers in this field failed to do – in fact, there seems to be evidence that they willfully ignored evidence that could have led to significant revisions to the use of the LNT hypothesis. The question is what "willfully ignored" means here: did the scientists choose not to include data they felt were flawed, omit studies simply because the results contradicted their own, omit results to try to mislead the scientific community, or something else? In other words, did these scientists set out to deceive the scientific community (for whatever reason)?

At this point, with all of the early scientists dead, we can only guess at their intent or their motives. Calabrese lays out his case – quite convincingly – in two papers, summaries of which can be found online in the two pages linked to here. And for what it’s worth, while I’ve reached my own conclusions on this matter, I’m not sure whether or not I can approach the matter objectively, so rather than relate them here I think it’s better to simply refer you to Calabrese’s work so that you can draw your own conclusions from the information he lays out.

So what have we got? Well, for starters we have the issue of intellectual honesty. Did scientists overlook crucial research, or did they make a conscious decision to omit research that contradicted what they believed – or what they wanted – to be the truth? Did they make a mistake, did they deceive themselves, did they deceive others? Or were they right, but chose to leave out information they felt to be irrelevant rather than arguing their case? Regardless of which of these possibilities is correct – even if those who first came up with the LNT hypothesis were right – we have to ask ourselves whether any of them was completely intellectually honest. The only option that gets these authors off the hook is if they were simply unaware of the studies that contradicted the hypothesis they came up with. But even here they fall short, because it's a scientist's job to know about – and to discuss – contrary studies, if only to demonstrate why those studies are wrong. After reading Calabrese's papers I find myself wondering about the intellectual honesty of the early scientists who developed the LNT hypothesis.

The other question we have to ask ourselves is whether or not it matters. Sometimes it doesn't. Fibbing about the discovery of a new species of insect, for example, might not have much of an impact on our society. But the risk from low levels of radiation is different – it affects how we think about the risks from medical radiation, nuclear power, air travel, airport x-ray screening, radiological terrorism, and more. The use of radiation permeates our society, and the manner in which we control the risks from radiation is based on our perception of the risks it poses. Radiation protection is not cheap – if our radiation safety measures are based on a hypothesis that's overly conservative then we are wasting money on protective measures that don't gain us any added safety. It's important to know whether the hypothesis – LNT – is accurate, and it's just as important to know whether or not it stands on solid intellectual foundations.


Where does the plutonium come from?

Last week I wrote about how the shortage of Pu-238 might impact the exploration of the outer Solar System, but I didn't much get into where the plutonium comes from. After all, while there are trace amounts of natural plutonium, there certainly isn't nearly enough to fuel a space probe. So this week it seemed as though it might be worth going over where we get our plutonium, if only to understand why NASA (or DOE) needs tens of millions of dollars to produce it.

On the Periodic Table plutonium sits two places beyond uranium – uranium has an atomic number of 92 (that is, it has 92 protons) and plutonium is at 94. To make plutonium we somehow have to add two protons to a uranium atom. The way this happens is sort of cool – and there are different routes depending on the plutonium isotope that's being produced.

Making Pu-239, the nuclide used in nuclear weapons, is a fairly simple process. Natural uranium is over 99% U-238, which doesn't fission all that well. Put the U-238 (which makes up a minimum of 95% of the reactor fuel) into the middle of a reactor, which is seething with neutrons from uranium fission, and it will capture a neutron and turn into U-239. The U-239, in turn, decays by emitting a beta particle to become neptunium-239, which gives off another beta particle. Since each beta decay turns a neutron into a proton, these two beta decays suffice to turn a uranium atom into one of plutonium. Thus, a single U-238 atom absorbing a single neutron and being allowed to sit long enough to undergo two beta decays (a few weeks or so) will turn into a single atom of Pu-239. Making heavier plutonium nuclides is just as easy – when Pu-239 captures additional neutrons it turns into Pu-240, Pu-241, Pu-242, and more. Not only is it fairly easy, but it happens all the time in any operating nuclear reactor.

OK – so we can see how simple neutron capture and patience can give us plutonium nuclides heavier than U-238, but this really doesn’t help us to make the Pu-238 needed to power a spacecraft. Making the lighter nuclide is a little more roundabout.

Remember that, through neutron capture, a reactor produces Pu-241. It turns out that Pu-241 also decays by beta emission, creating Am-241 – the stuff that's used in smoke detectors (among other things). Am-241 is an alpha emitter and it decays to a lighter variety of neptunium (Np-237) which, when subjected to neutron irradiation, captures a neutron to become Np-238. One final transformation – a beta decay of the Np-238 – completes the production of Pu-238. This is the reason why Pu-238 is so expensive – making it requires two bouts of irradiation (the first long enough to produce the Pu-241), enough time for all of the radioactive decays to transform plutonium into americium and the americium into neptunium, and several steps of chemical processing to isolate the various elements of interest that are formed.
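
For readers who like to see the bookkeeping laid out, here is a compact summary of the two production routes described above; the half-lives are approximate textbook values that I've added for context.

```python
# Each step: (nuclide, how it arises, approximate half-life)
pu239_route = [
    ("U-238",  "starting material (the bulk of reactor fuel)",       "4.5 billion years"),
    ("U-239",  "U-238 captures a neutron",                           "about 23 minutes"),
    ("Np-239", "beta decay of U-239",                                "about 2.4 days"),
    ("Pu-239", "beta decay of Np-239",                               "about 24,000 years"),
]

pu238_route = [
    ("Pu-241", "successive neutron captures on Pu-239",              "about 14 years"),
    ("Am-241", "beta decay of Pu-241",                               "about 430 years"),
    ("Np-237", "alpha decay of Am-241",                              "about 2 million years"),
    ("Np-238", "Np-237 captures a neutron (second irradiation)",     "about 2 days"),
    ("Pu-238", "beta decay of Np-238",                               "about 88 years"),
]

for name, step, half_life in pu239_route + pu238_route:
    print(f"{name:7s} <- {step:50s} (half-life {half_life})")
```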

Although it sounds convoluted (well, I guess it is convoluted), making Pu-238 is fairly straightforward. The science and engineering are both well-known and well-established, and its production certainly breaks no new scientific or technical ground. But the politics…that's another matter altogether.

As I mentioned last week, the American Pu-238 production line shut down over two decades ago. Since then we’ve been buying it from the Russians, but they’ve got their own space program and have limited stocks to boot. So this option is not going to work for much longer, regardless of the future of US-Russian international relations.

A recent blog posting by Nuclear Watch suggested that the US might be able to meet its Pu-238 needs by dismantling nuclear weapons and by digging into its inventory of scrap Pu-238 – it notes that the Los Alamos National Laboratory (LANL) documents indicate that over 2000 RTGs’ worth of the nuclide can be recovered from nuclear weapons alone. But I’m not sure if I can accept this assertion, primarily because putting this nuclide into a nuclear weapon makes absolutely no sense. I can’t comment on the “scraps” of Pu-238 that LANL is said to have lying around, and unfortunately Nuclear Watch didn’t provide a link to the LANL documents they cited, making it difficult to check or to comment further. But if there is a Pu-238 stockpile at LANL it would certainly be nice to tap it for space exploration – not to mention the savings in disposal costs.

Yet another way to make Pu-238 is in a liquid fluoride thorium reactor (LFTR) – a reactor that uses naturally occurring thorium (Th-232) to breed U-233, which fissions quite nicely. Additional neutron captures can turn U-233 into Pu-238, which can be chemically separated from the fuel. There's a lot more to it than this, but I covered thorium reactors fairly thoroughly last year (the first of these posts is at this URL, and there are three others in the same series) and the topic is also covered on the Thorium Energy Alliance's website. There are a lot of nice things about thorium reactors in addition to their ability to produce Pu-238, and it's a technology that's been worked out and tested – but the US shows no sign of building any of them anytime soon. India and China might develop extensive thorium reactor programs – but what these nations might do a decade or two in the future won't do much for NASA in the next few years. The bottom line is that, however promising they might be for future needs, thorium reactors aren't likely to help us send more spacecraft to the outer Solar System anytime soon.

So here’s where we stand. The US stopped producing the Pu-238 needed to run our deep-space probes and we’ve pretty much used up our stocks of the material. In the intervening years we’ve been buying Russian Pu-238, but that won’t be available for much longer, leaving us high and dry. There may be scraps of the material – possibly even stockpiles – at various DOE facilities, but dismantling nuclear weapons is probably not going to do the job. Over the long run thorium-cycle reactors might be a great way to make it, but these reactors aren’t operating anywhere in the world today and there are no American plans to build any of them anytime soon. That would seem to leave us with only three options – re-start our Pu-238 production line, find another way to make (or obtain) the material, or confine ourselves to the inner Solar System. As I mentioned last week, I sincerely hope we don’t go the last route. So let’s see what we can come up with – and let’s hope we don’t leave the solution (and decisions) too long.

The post Where does the plutonium come from? appears on ScienceWonk, FAS’s blog for opinions from guest experts and leaders.

Houston – we need some plutonium

Pu-238 glowing with the heat of alpha decay

The outer Solar System is a dark and lonely place – solar energy drops off with the inverse square of the distance to the Sun, so a spaceship in orbit around Jupiter (about 5.2 times as far from the Sun as the Earth) receives only about 4% as much solar energy as one orbiting Earth. Solar panels do a great job of powering spacecraft out to about the distance of Mars, but anything sent to the outer reaches of the Solar System needs to find some other source of power. For most spacecraft this means plutonium – specifically the isotope Pu-238 – and according to some recent reports, we might be running out of this particular flavor of plutonium. Since we can't visit the outer Solar System on solar power and batteries have a limited lifespan, if we want to go past the asteroid belt we've got to go nuclear, with either radioisotope thermoelectric generators (RTGs) or reactors. And according to a NASA scientist (quoted in the story linked to above), if we don't take steps either to replenish our Pu-238 stocks or to develop an alternative, our deep space exploration might grind to a halt. But before getting into that, let's take a quick look at why Pu-238 is such a good power source.
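For those who like to check these numbers, here's a quick back-of-the-envelope sketch in Python of that inverse-square fall-off. The orbital distances are approximate mean values in astronomical units, and the fluxes are relative to what a spacecraft receives at Earth's distance.

```python
# Inverse-square fall-off of sunlight with distance from the Sun.
# Distances are approximate mean orbital radii in astronomical units (AU);
# the flux is expressed relative to the flux at Earth's distance (1 AU).

PLANETS_AU = {
    "Earth": 1.0,
    "Mars": 1.52,
    "Jupiter": 5.2,
    "Saturn": 9.6,
    "Neptune": 30.1,
}

for body, r_au in PLANETS_AU.items():
    relative_flux = 1.0 / r_au**2          # inverse-square law
    print(f"{body:8s} {relative_flux:6.1%} of the sunlight available at Earth")
```

Run it and Jupiter comes in at a little under 4%, Saturn at about 1%, and Neptune at roughly a tenth of a percent – which is why solar panels stop being practical somewhere past Mars.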

As with any other element, plutonium has a number of isotopes – Pu-239 is the one that fissions readily enough to be used in nuclear weapons, and the slightly heavier version (Pu-240) is fissionable as well. These heavier plutonium isotopes are both produced in nuclear reactors when U-238 captures a neutron or two – any operating reactor produces them and, for that matter, fissioning these plutonium isotopes provides a significant amount of the energy in any nuclear reactor. Pu-238 is also produced in reactors, but through a slightly more convoluted pathway. The bottom line is that usable quantities of plutonium – fissile or not – are produced in reactors.

What makes Pu-238 valuable is that it produces a boatload of energy as it decays – its half-life (just a tad less than 88 years) is long enough to last for decades, and each decay gives off a high-energy alpha particle (for those who are interested, the alpha energy is over 5.5 MeV).

So let's look at how this is turned into energy. Plutonium-238 has a half-life of 87.7 years and a decay constant (a measure of the fraction of Pu-238 atoms that will decay in a year) of about 0.0079. To get a bit geekish, if we can calculate the number of atoms in a kilogram of Pu-238 then we can multiply that number by the decay constant to figure out how many decays will occur in a given period of time. A kilogram of Pu-238 contains about 2.5×10^24 atoms – multiply this by the decay constant and we find that about 2×10^22 atoms decay every year; a year has about 3.1×10^7 seconds, so this works out to a decay rate of about 6.4×10^14 atoms every second. And since each decay carries with it about 5.5 million electron volts (MeV), 1 kg of Pu-238 gives off about 3.5×10^15 MeV every second. Doing some unit conversions gives an energy production of about 550 joules per second – one J/sec is 1 watt, so each kilogram of Pu-238 produces about 550 watts of thermal power. An RTG loaded with 5 kg of Pu-238 (like the one that's powering the Curiosity rover on Mars) will put out nearly 3 kW of thermal power. This is enough heat that a sufficiently large mass of Pu-238 will glow red-hot; captured, that heat can be transformed into electricity to power the spacecraft – with a 5% conversion efficiency from thermal to electrical energy, those 5 kg of Pu-238 will produce about 140 watts of electrical power. There are more efficient ways of turning heat into electricity, but they all have their limitations or are as-yet untried technologies.
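For anyone who wants to check the arithmetic, here's a minimal Python sketch of the same calculation, using the half-life, atomic mass, and alpha energy quoted above (the 5-kg fuel load and 5% conversion efficiency are the same illustrative assumptions).

```python
import math

# Specific power of Pu-238 from first principles, following the numbers above.
HALF_LIFE_YR   = 87.7            # Pu-238 half-life, years
ALPHA_MEV      = 5.5             # energy released per decay, MeV
ATOMIC_MASS    = 238.0           # g/mol
AVOGADRO       = 6.022e23        # atoms/mol
MEV_TO_JOULE   = 1.602e-13       # J per MeV
SECONDS_PER_YR = 3.156e7

decay_const_per_yr = math.log(2) / HALF_LIFE_YR          # ~0.0079 per year
atoms_per_kg = 1000.0 / ATOMIC_MASS * AVOGADRO           # ~2.5e24 atoms
decays_per_sec = atoms_per_kg * decay_const_per_yr / SECONDS_PER_YR

watts_thermal_per_kg = decays_per_sec * ALPHA_MEV * MEV_TO_JOULE
print(f"thermal output: {watts_thermal_per_kg:.0f} W per kg of Pu-238")

# A 5-kg fuel load at ~5% thermal-to-electric conversion efficiency:
print(f"electric output: {5 * watts_thermal_per_kg * 0.05:.0f} W")
```

The numbers land within a few watts of the 550-watt-per-kilogram figure above, and right around 140 watts of electricity from a 5-kg fuel load.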

This is where the Pu-238 half-life comes into play – it will take 87.7 years for half of the Pu-238 to decay (and for power production to drop by half), so output falls by only about 0.8% in a year. The Pu-238 half-life is short enough to make for a furious decay rate – enough to produce the power needed to run a spaceship – but long enough to last for the decades needed to reach Pluto (the destination of the New Horizons probe) or to linger in orbit around Jupiter and Saturn (a la Galileo and Cassini). Without RTGs powered by Pu-238 we can't explore much beyond the asteroid belt. This is why the possible exhaustion of our stocks of this nuclide so alarms Adams. According to Adams, NASA has already delayed or cancelled a number of planned missions to the outer Solar System, including a mission to study Europa, whose oceans are considered a prime candidate as an abode for life beyond Earth. The Department of Energy estimates that an annual outlay of $20 million or less would be enough to supply NASA's Pu-238 needs, but this amount has not been forthcoming.
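To see what that 87.7-year half-life means over the life of a mission, here's a short sketch that simply applies exponential decay to a nominal 2750-watt heat source (the 5-kg figure from above – an illustrative assumption, not the specification of any particular spacecraft).

```python
HALF_LIFE_YR = 87.7                  # Pu-238 half-life, years
WATTS_THERMAL_AT_LAUNCH = 2750.0     # nominal 5-kg Pu-238 heat source (assumed)

def thermal_watts(years_after_launch: float) -> float:
    """Thermal output of the heat source, decaying exponentially."""
    return WATTS_THERMAL_AT_LAUNCH * 0.5 ** (years_after_launch / HALF_LIFE_YR)

for years in (0, 1, 10, 20, 30):
    w = thermal_watts(years)
    print(f"year {years:2d}: {w:6.0f} W thermal "
          f"({w / WATTS_THERMAL_AT_LAUNCH:5.1%} of the launch value)")
```

After a year the output is down less than 1%; even after 30 years in flight it's still close to 80% of what it was at launch – which is exactly why Pu-238 sits in the sweet spot for deep-space missions.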

The space program has been controversial for a half-century. Some decried the spending on Apollo, in spite of the fact that it gave us humanity's first steps on another world. The Shuttle program also came under fire for a number of reasons, as has the International Space Station – and unmanned programs have been criticized as well. The common thread in most of this criticism is money – asking why in the world we should spend billions of dollars on something that provides no tangible benefit to those of us on Earth. Those making this argument are the same people who are reluctant to spend (or waste, as they'd put it) a few tens of millions of dollars annually to power the spacecraft that could help us learn more about our cosmic neighborhood.

The economic argument is hard to refute on economic grounds – there’s no denying that close-up photos of Saturn’s rings or Titan’s hydrocarbon seas haven’t fed a single hungry person here at home. And for that matter, even finding life on Mars (or Europa) will not feed the hungry here on Earth. But there has got to be more to life than simple economics – if not then there would be no need for art, for music, for sports, or for any of the other things we do when we’re not working, eating, sleeping, or attending to personal hygiene.

Discussing the relative merits of “pure” science is beyond the scope of this post (although I did discuss it in an earlier post in this blog). But I think it’s worth pointing out that the public showed a genuine interest in the exploits of the Voyager probe, the Galileo mission, and the Cassini craft – not to mention the missions to Mars, Venus, and elsewhere. I’d like to think that the deep space program is worth another few tens of millions of dollars a year for the entertainment value alone – especially given the vast sums that are spent on movies and TV shows that are watched by fewer people and that provide little in the way of enlightenment or uplifted spirits.

One other point that’s worth considering is that NASA’s outer Solar System missions are billion-plus dollar missions and the cost of plutonium is a small fraction of this amount. While not a major part of the nation’s economy, NASA programs employ a lot of people throughout the US to design and build the machines and the rockets that loft them into space, not to mention everyone who works to collect and analyze the data as it comes to Earth. That our deep-space capacity and those who keep it running might grind to a halt for lack of a few tens of millions of dollars of plutonium is a shame. The loss of everything else that goes along with our space program – the influx of new knowledge, the cool pictures, the sense of pride that we can send a working spacecraft so far and can keep it working so long, and the sense of wonder that comes from considering (even if only for a short time) our place in the universe – losing this for want of a little plutonium would be a crime.

The post Houston – we need some plutonium appears on ScienceWonk, FAS’s blog for opinions from guest experts and leaders.

The rocks of Yucca Mountain

As I noted in last week's posting, nuclear reactors produce high-level radioactive waste (HLW) during their normal operation. This waste is not voluminous, but it can be dangerous and it needs to be sequestered in an out-of-the-way location for several millennia. In the 70 years since the first nuclear reactor was built there have been a number of suggested solutions for HLW disposal – some have advocated letting it melt its way into the Antarctic ice cap, others pushed for sinking containers into the deep and stable sediments of the ocean's abyssal plains. Neither of these is now considered a viable option (not to mention that both are forbidden by international treaty), leaving the nuclear nations with few choices – the two being pursued by most nations are dry cask storage and burial in a deep geologic repository.

As radioactive elements decay to stability they give off radiation; when this radiation is absorbed by matter it deposits energy, raising the temperature of whatever is doing the absorbing. The energy given off by radioactive decay is called decay heat, and it can be significant – NASA makes use of it to help power spacecraft, since their plutonium-powered radioisotope thermoelectric generators (RTGs) produce enough energy via radioactive decay to keep a spacecraft running for decades. Spent reactor fuel produces even more decay heat than NASA's RTGs do, which is why freshly discharged spent fuel is kept immersed in water at first.

The fission products produced are predominantly short-lived radionuclides, and they quickly decay to longer-lived atoms. This means that the rate at which energy is given off – and the degree of heating – drops over time. After six months the spent reactor fuel is substantially cooler – both thermally and radiologically – than it was when first removed from the reactor. During these first months – actually for the first few years – enough heat is given off that the best coolant is water; with time the heat production drops low enough that water is no longer needed as a coolant. This is the point at which the two storage options – dry cask versus geologic burial – come into play.
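To give a feel for how steeply that heat output falls, here's a rough illustration using the classic Way–Wigner approximation for fission-product decay heat. This isn't drawn from the post itself, the three-year irradiation time is an assumption, and the formula is only an order-of-magnitude guide (it gets less reliable at cooling times beyond a year or so) – but it shows the trend.

```python
# Way-Wigner approximation: fission-product decay heat as a fraction of the
# reactor's operating power.  t is time after shutdown, t_op is the time the
# fuel spent in the operating core, both in seconds.  Rough estimate only.

SECONDS_PER_DAY = 86_400.0

def decay_heat_fraction(t_sec: float, t_op_sec: float) -> float:
    return 0.0622 * (t_sec ** -0.2 - (t_sec + t_op_sec) ** -0.2)

t_op = 3 * 365 * SECONDS_PER_DAY      # assume roughly three years in the core

for label, days in [("1 day", 1), ("1 week", 7), ("1 month", 30),
                    ("6 months", 182), ("1 year", 365), ("5 years", 5 * 365)]:
    frac = decay_heat_fraction(days * SECONDS_PER_DAY, t_op)
    print(f"{label:>9s} after shutdown: {frac:.3%} of operating power")
```

A day after shutdown the fuel is still putting out around half a percent of the reactor's operating power – for a big power reactor that's megawatts of heat – but after a few years the output has fallen by more than an order of magnitude, which is what makes the move from water pools to dry casks possible.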

In dry cask storage the spent fuel is placed into huge concrete and steel casks; the decay heat permeates through the cask and thence into the atmosphere – the casks may be warm, but the spent fuel inside remains cool enough to avoid damage. I visited a dry cask storage facility in Lithuania several years ago – the casks were at least 10 feet tall and 6 feet in diameter and weighed several tons. At the moment a number of nuclear power plants are authorized to use dry cask storage, and more are contemplating it as they run out of room in their spent fuel pools. The biggest fly in the ointment is that dry cask storage isn't a permanent solution to the HLW problem – it takes millennia for the longer-lived nuclides to decay away, while even the best casks will likely last only a century or so. In fact, some are already showing signs of physical deterioration and may last only a few decades. Simply put, dry cask storage buys time, but it is not a long-term solution for HLW disposal.

This is the reason that a number of nations are looking at deep geologic repositories for HLW disposal. Put the waste in a deep and stable rock formation, the thinking goes, and it will remain isolated from the environment for as long as we need. As of this writing, several nations (among them Belgium, France, Korea, Sweden, and Switzerland) are pursuing geologic repositories, and facilities in several other nations are either under construction or have applied for operating licenses.

When it comes to geologic repositories, not all rock is created equal. Rock is not always a solid, impermeable mass – rocks crack and break, some rocks are fractured and faulted, and some are naturally porous. Since over time water will dissolve just about anything, we will want to try to keep water away from our radioactive waste – that way it can’t corrode the canisters, can’t dissolve the spent fuel, and can’t carry the radioactivity into the environment. Ideally, the rock used to contain high-level radioactive waste for hundreds of millennia should be able to keep that waste isolated from the environment – it should be tough, impermeable, and dry.

Some rocks disqualify themselves fairly quickly. Limestone, for example, is soluble, which is why there are so many beautiful caves in limestone country – Mammoth Cave is one of the best-known. Sandstone is porous and conducts water nicely, which rules it out, and both shale and slate fracture quite easily. Most high-level waste repositories that are planned or under construction are in granite – a tough, impermeable igneous rock. But there's another approach that can be used – isolating the waste in a medium that's impermeable but that will flow around the waste containers, sealing them in for the ages. That's the approach being taken at the Waste Isolation Pilot Plant (WIPP), which is dug into a thick layer of salt; over the years the salt will deform and flow around the waste containers, locking in the waste for millions of years. Other repositories are planned for clay formations, which will also flow around the waste, or for other plastic sedimentary rocks. In all of these cases the idea is to put the waste into a layer of rock that will keep water away from it and that will keep the waste away from the environment for the requisite amount of time.

The rock at Yucca Mountain is neither granite nor a plastic material that will entomb the waste – Yucca Mountain is made primarily of rocks called tuff and ignimbrite, also known as pyroclastics. Both are the remnants of ancient volcanic eruptions – volcanic ash and debris that fell and solidified into rock. These rocks can be very porous – some tuffs have a porosity of over 50% (meaning that half of their volume consists of pore space) – but they are not necessarily permeable. In other words, they might be built like a sponge, with lots of pores and holes, but if the pores aren't well connected there aren't many pathways along which water can easily flow. Not only that, but over time other minerals have formed in much of the pore space, blocking what flow paths do exist.

What all of this means is that the rocks of which Yucca Mountain is composed should do a fairly good job of keeping water away from the waste that would be stored there, but the rock is not monolithic. In fact, virtually all rock everywhere is riven by cracks and faults, and these can act as conduits that lead water from the surface to the spaces below. And, indeed, the rocks that comprise Yucca Mountain are fractured and faulted. The question is whether these fractures will let enough water into the waste repository, quickly enough, to corrode the waste storage containers.

The answer to this seems to be "it depends." For example, water will flow more quickly along fractures and cracks, but the water will also deposit secondary minerals along this path, and these minerals will eventually help to seal off the cracks (this same process is behind the beautiful crystalline linings of geodes). Both of these things seem to have happened at Yucca Mountain in the past – geologists have found secondary minerals lining fractures in the rock, indicating the past flow of water through the fissures, but the mineralization has also served to clog these conduits. So the question is not so much "are there fractures?" as "how much water can flow along them?" In the case of Yucca Mountain we need to consider not only the permeability of these fissures, but also the amount of rain and groundwater available to percolate through them.
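For a sense of why the permeability question matters so much more than the mere existence of fractures, here's a hedged Darcy's-law sketch. The hydraulic conductivities are illustrative placeholders spanning typical published ranges for tuff – they are not measured Yucca Mountain values – and the unit hydraulic gradient simply represents gravity-driven vertical seepage.

```python
# Darcy's law in its simplest form: specific discharge q = K * i, where K is
# hydraulic conductivity (m/s) and i is the (dimensionless) hydraulic gradient.
# The K values below are illustrative assumptions, not site measurements.

SECONDS_PER_YEAR = 3.156e7
HYDRAULIC_GRADIENT = 1.0              # unit gradient: gravity-driven vertical seepage

illustrative_conductivity = {         # m/s
    "unfractured welded tuff": 1e-11,
    "open, fracture-dominated tuff": 1e-7,
}

for rock, K in illustrative_conductivity.items():
    flux_m_per_yr = K * HYDRAULIC_GRADIENT * SECONDS_PER_YEAR
    print(f"{rock:30s} ~{flux_m_per_yr:.3g} m of water per year per unit area")
```

With numbers like these, intact tuff passes a fraction of a millimeter of water per year while open fractures could pass meters – which is why the secondary mineralization that clogs those fractures matters so much.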

At present, Yucca Mountain sits in the middle of a fairly dry piece of real estate where rainfall amounts to only a handful of inches a year. But climate changes over time – the Sahara was a fairly lush grassland within the last few tens of thousands of years – so we can't guarantee that the current dry conditions will last until all of the radioactivity is gone. But that's a topic for the next posting, which will be on the hydrogeology of Yucca Mountain. For now, suffice it to say that the rock that makes up the mountain is not ideal, owing to its porosity and the occasional fracture and fault. However, secondary mineralization seems to have plugged the majority of the cracks and holes. Thus, while not as tough as granite and not as impermeable as salt or clay, the rocks of Yucca Mountain seem up to the task to which they might someday be put.

The post The rocks of Yucca Mountain appears on ScienceWonk, FAS’s blog for opinions from guest experts and leaders.