A foolish consistency

Consistency is good – there’s a sense of security in knowing that some things will generally remain constant over time. We can always count on gravity, for example, to hold us firmly to the ground; politicians can be counted on to pander and to serve themselves; radioactivity will consistently decay away; and so forth. Of course, not all consistency is good – as Emerson noted, “A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.” We can also count on the American public to consistently question whether or not evolution actually occurs; many of us know that our perfectionist boss will always insist on yet another round of reviews and edits before letting a document go out the door; we will always find people who are apparently proud of their lack of knowledge; and we can expect that a certain category of blogger will continue to see the end of the world on the near horizon. It is this last category I’d like to talk about this time – particularly the batch that continues to insist that the reactor accident at the Fukushima Dai’ichi site is going to kill millions.

Before launching into this piece I’d like to point you to a wonderful counter-example of what I just said – a blog posting by oceanographer and University of Washington professor Kim Martini. I have been accused of being part of the pro-nuclear and/or pro-radiation lobby because of my long years of experience as a radiation safety professional; Dr. Martini, by contrast, told me that she became interested in this topic, researched it herself, and came to her conclusions independently of the nuclear energy and radiation safety professionals. In short, she is scientifically competent, intelligent, and has no reason to be biased either pro- or anti-nuclear.

The latest round of Fukushima silliness is the contention that Americans need to evacuate the West Coast because of an apparently imminent release from one or more of the affected reactors and/or the Reactor 4 spent fuel pool. There are also those who blame the Fukushima accident for massive starfish die-offs, for sick animals along the Alaskan coast, and more – all of which (according to the good Dr. Martini) are far from accurate. And anti-nuclear activist Helen Caldicott has gone as far as to state that the entire Northern Hemisphere might need to be evacuated if things get as bad as she fears and the Unit 4 spent fuel pool collapses. So let’s see what the facts are, what the science can tell us, and what the real story might be.

Can the melted reactors go critical?

There have been predictions that the ruined reactor cores will somehow achieve criticality, producing more fission products and spreading more contamination into the water. While this is not, strictly speaking, impossible, it is highly unlikely – sort of like saying that it is remotely possible that Bill Gates will leave me his fortune, but I’m still contributing to my 401(k) account. Achieving criticality (to a nuclear engineer or a reactor operator, “criticality” simply means that the reactor is operating at a constant power) requires reactor fuel that’s enriched to the right percentage of U-235, a critical mass of the uranium (enough to sustain a chain reaction), and a configuration (the critical geometry) that will permit fission to occur. Also important in most reactors is a moderator – a substance such as water that will slow neutrons down to the point where they can be absorbed and cause the U-235 atoms to fission. Reactors such as the ones destroyed at Fukushima require all of these components to achieve criticality – take away any one of them and there will be no fission chain reaction.

The ruined reactor cores meet some of these requirements – since they’d been operating at the time of the accident we know that they had a critical mass of sufficiently enriched uranium present. Surrounded by water (either seawater or groundwater), they are likely also immersed in a moderator. But absent a critical geometry the cores cannot sustain a fission chain reaction. So the question is whether or not these cores can, by chance, end up in a critical geometry. And the answer to this is that it is highly improbable.

Consider, for example, the engineering and design that goes into making a nuclear reactor core. Granted, much of this design goes into making the reactors as efficient and as cost-effective to operate as possible, but the fact is that we can’t just slap some uranium together in any configuration and expect it to operate at all, let alone in a sustained fashion. In addition, reactors keep their fuel in an array of fuel rods that are immersed in water – the water helps slow the neutrons down as they travel from one fuel element to the next. A solid lump of low-enriched uranium has no moderator to slow down these neutrons; the only moderated neutrons are those that escape into the surrounding water and bounce back into the uranium; the lumps in a widely dispersed field of uranium will be too far apart to sustain a chain reaction. Only a relatively compact mass of uranium that is riddled with holes and channels is likely to achieve criticality – the likelihood that a melted core falling to the bottom of the reactor vessel (or the floor of the containment) would come together in a configuration that could sustain criticality is vanishingly low.

How much radioactivity is there?

First, let’s start off with the amount of radioactivity that might be available to release into the ocean. Most of it comes from the uranium fission that was taking place in the core until the reactors shut down – the uranium itself is only slightly radioactive, but each uranium atom that’s split produces two radioactive atoms (fission fragments). The materials of the reactor itself become radioactive when they’re bombarded with neutrons, but these metals are very corrosion-resistant and aren’t likely to dissolve into the seawater. And then there are transuranic elements such as plutonium and americium, formed in the reactor core when the non-fissioning U-238 captures neutrons. Some of these transuranics have long half-lives, but a long half-life means that a nuclide is only weakly radioactive – it takes about 15 grams of Pu-239 to hold as much radioactivity as a single gram of radium-226 (about 1 Ci, or 37 GBq, in a gram of Ra-226), and a single gram of Cs-137 has about as much radioactivity as over a kilogram of Pu-239. So the majority of radioactivity available to be released comes from the fission products, with activation and neutron capture products contributing in a more minor fashion.
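
To see where those comparisons come from, here is a minimal sketch of the specific-activity arithmetic in Python (the half-lives and atomic masses are standard nuclide-table values; the exact figures shift slightly with the values you plug in). The point is simply that activity per gram scales inversely with both half-life and atomic mass:

```python
import math

AVOGADRO = 6.022e23        # atoms per mole
BQ_PER_CI = 3.7e10         # becquerels per curie
SECONDS_PER_YEAR = 3.156e7

def specific_activity_ci_per_g(half_life_years, atomic_mass):
    """Specific activity (Ci per gram) from half-life and atomic mass."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    atoms_per_gram = AVOGADRO / atomic_mass
    return decay_constant * atoms_per_gram / BQ_PER_CI

ra226 = specific_activity_ci_per_g(1600, 226)    # ~1 Ci/g
pu239 = specific_activity_ci_per_g(24100, 239)   # ~0.06 Ci/g
cs137 = specific_activity_ci_per_g(30.2, 137)    # ~87 Ci/g

print(f"Ra-226: {ra226:.2f} Ci/g")
print(f"Pu-239: {pu239:.3f} Ci/g ({ra226 / pu239:.0f} g to match 1 g of Ra-226)")
print(f"Cs-137: {cs137:.0f} Ci/g (1 g matches {cs137 / pu239 / 1000:.1f} kg of Pu-239)")
```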

This part is basic physics and simply isn’t open to much interpretation – decades of careful measurements have shown us how many of which fission products are formed during sustained uranium fission. From there, the basic physics of radioactive decay can tell us what’s left after any period of decay. So if we assume the worst case – that somehow all of the fission products are going to leak into the ocean – the logical starting place is to figure out how much radioactivity is even present at this point in time.

In January 2012 the Department of Energy’s Pacific Northwest National Laboratory (PNNL) used a sophisticated computer program to calculate the fission product inventory of the #1 and #3 reactors at the Fukushima Dai’ichi site – they calculated that each reactor held about 6.2 million curies (about 230 billion mega-becquerels) of radioactivity 100 days after shutdown. The amount of radioactivity present today can be calculated as well (albeit not easily, due to the number of radionuclides present) – it is simply what was there at shutdown minus what has decayed away in the years since. After 1000 days (nearly three years) the amount of radioactivity is about 1% of what was present at shutdown (give or take a little) and about a tenth of what was present after 100 days. Put all of this together, account for what was present in the spent fuel pools (the reactor in Unit 4 was empty but the spent fuel pool still contains decaying fuel rods), and it seems that the total amount of radioactivity present in all of the affected reactors and their spent fuel pools is in the vicinity of 20-30 million curies at this time.
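
Doing this nuclide by nuclide means summing hundreds of exponentials, but there is a well-known shortcut for a mixed fission-product inventory: the Way–Wigner approximation, under which the gross activity falls off roughly as t^-1.2 after shutdown (strictly derived for decay power, but commonly used for gross activity as well). A rough sketch using PNNL’s 100-day figure as the anchor – this is a rule of thumb, so expect only ballpark agreement with the numbers above:

```python
def fission_product_activity(a_ref_ci, t_ref_days, t_days):
    """Mixed fission-product decay via the Way-Wigner t**-1.2 rule of thumb.

    This is an empirical shortcut, not a nuclide-by-nuclide calculation;
    it is only meant to be good to within a factor of ~2 between roughly
    10 seconds and a few years after shutdown.
    """
    return a_ref_ci * (t_days / t_ref_days) ** -1.2

a_100_days = 6.2e6   # Ci in one reactor at 100 days (the PNNL estimate)
for t in (100, 365, 1000):
    activity = fission_product_activity(a_100_days, 100, t)
    print(f"{t:>5} days after shutdown: ~{activity:.1e} Ci")
# ~3.9e5 Ci per core at 1000 days -- about a sixteenth of the 100-day
# inventory, the same ballpark as the ~tenth quoted above
```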

By comparison, the National Academy of Sciences calculated in 1971 (in a report titled Radioactivity in the Marine Environment) that the Pacific Ocean holds over 200 billion curies of natural potassium (about 0.01% of all potassium is the radioactive isotope K-40), 19 billion curies of rubidium-87, 600 million curies of dissolved uranium, 80 million curies of carbon-14, and 10 million curies of tritium (both C-14 and H-3 are formed by cosmic ray interactions in the atmosphere).

How much radioactivity might be in the water?

A fair amount of radioactivity has already escaped from Units 1, 2, and 3 – many of the volatile and soluble radionuclides have been released to the environment. The radionuclides that remain are still in the fuel precisely because they are either not very mobile in the environment or are physically and chemically locked into the fuel itself. Thus, it’s unlikely that a high fraction of this radioactivity will ever be released. But let’s assume for the sake of argument that 30 million curies of radioactivity are released into the Pacific Ocean to make their way to the West Coast – how much radioactivity will be in the water?

The Pacific Ocean has a volume of about 7×10^23 ml (7×10^20 liters), and the North Pacific holds about half that volume (it’s likely that not much water has crossed the equator in the last few years). If we ignore circulation from the Pacific into other oceans and across the equator the math is simple – 30 million curies dissolved into 3×10^20 liters comes out to about 10^-13 curies per liter of water, or about 0.1 picocuries (pCi) per liter (1 curie is a million million pCi). Natural radioactivity (according to the National Academy of Sciences) from uranium and potassium in seawater is about 300 pCi/liter, so this is a small fraction of the natural radioactivity in the water. If we make the simplifying assumption that all of this dissolved radioactivity is Cs-137 (the worst case) then we can use dose conversion factors published by the US EPA in Federal Guidance Report #12 to calculate that spending an entire year immersed in this water would give you a radiation dose of much less than 1 mrem – a fraction of the dose you’d get from natural background radiation in a single day (natural radiation exposure from all sources – cosmic radiation, radon, internal radionuclides, and radioactivity in the rocks and soils – is slightly less than 1 mrem daily). This is as close as we can come to zero risk.
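
Here is that dilution arithmetic as a quick sketch (I stop short of the FGR-12 immersion-dose step, since that requires the EPA’s tabulated dose coefficients; the concentration comparison is the part worth checking):

```python
release_ci = 30e6              # worst case: the entire inventory dissolves
north_pacific_liters = 3e20    # about half of the Pacific's ~7e20 liters

conc_pci_per_liter = release_ci / north_pacific_liters * 1e12   # 1 Ci = 1e12 pCi
natural_pci_per_liter = 300    # K-40, U, etc. in seawater (the NAS figure)

print(f"Worst-case Fukushima contribution: {conc_pci_per_liter:.1f} pCi/L")
print(f"As a fraction of natural seawater activity: "
      f"{conc_pci_per_liter / natural_pci_per_liter:.2%}")
# ~0.1 pCi/L -- about 0.03% of the ocean's own natural radioactivity
```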

This is the worst case – assuming that all of the radioactivity in all of the reactors and spent fuel pools dissolves into the sea. Any realistic case is going to be far lower. The bottom line is that, barring an unrealistic scenario that would concentrate all of the radioactivity into a narrow stream, there is simply too little radioactivity and too much water for anyone in the US to receive a high dose. To put it another way – we don’t have to evacuate California, Alaska, or Hawaii, and Caldicott’s suggestion to evacuate the entire Northern Hemisphere is without any credible scientific basis. And this also makes it very clear that – barring some bizarre oceanographic conditions – radioactivity from Fukushima is incapable of having any impact at all on the sea life around Hawaii or Alaska, let alone along the California coast.

Closing thoughts

There’s no doubt that enough radiation can be harmful, but the World Health Organization has concluded that Fukushima will not produce any widespread health effects in Japan (or anywhere else) – just as Chernobyl failed to do nearly three decades ago. And it seems that as more time goes by without the massive environmental and health effects they’ve predicted, the doom-sayers become increasingly strident, as though shouting ever-more-dire predictions at increasing volume will somehow compensate for the fact that their predictions have come to naught.

In spite of all of the rhetoric, the facts remain the same as they were in March 2011 when this whole saga began – the earthquake and tsunami killed over 20,000 people, while radiation has killed none and (according to the World Health Organization) is likely to kill none in coming years. The science is consistent on this point, as is the judgment of the world’s scientific community (those who specialize in radiation and its health effects). Sadly, the anti-nuclear movement also remains consistent in trying to use the tragedy of 2011 to stir up baseless fears. I’m not sure which of Emerson’s categories they would fall into, but I have to acknowledge their consistency, even when the facts continue to oppose them.


The Mexican radiation accident (Part II)

A highly respected colleague and friend of mine says he no longer refers to “lessons learned” but, rather, to “lessons recognized” because he has noticed that we don’t always learn our lessons. It’s not too early to recognize some lessons from the Mexican accident of the other week, but the fact that this accident happened at all suggests that we have failed to learn from past accidents. In this posting I’d like to go over some past radiation accidents (as opposed to nuclear accidents) and the lessons that we should have learned from them, as well as to devote a few paragraphs to the issue of radioactive materials security.

Goiania, Brazil, 1987

Goiania, Brazil is a big city that had over a million inhabitants in 1987. Most large cities make extensive use of radioactivity and radiation in medicine, and Goiania was no exception. But things were a little lax in the 1980s, and when a cancer therapy clinic closed in 1987 the radioactive therapy source was simply abandoned instead of being transferred to a disposal facility. Thus, when scrap metal scavengers broke into the clinic they were able to walk out with a radiation therapy unit, including a high-activity (almost 1500 curies) Cs-137 source. Not knowing what they had found, the scavengers opened the irradiator head and the source itself. Impressed with the pretty blue talcum powder-like filling, they took it home with them to show to family and friends. When all was said and done, four people had died of radiation sickness and over a hundred were exposed to enough radiation or contamination to require medical attention.

As in Mexico, the thieves in Goiania were unaware of what they had stolen and, also as in the Mexican theft, an underlying problem was a relative paucity of good security. We can also infer scanty regulatory controls in both cases – controls that permitted the Brazilian source to be abandoned and the Mexican source to be transported without being properly packaged or secured during shipping. Unlike the recent incident, the Goiania source was filled with easily dispersible Cs-137 as opposed to Co-60, which is typically found as a solid chunk of metal; this contributed to the widespread contamination in Goiania compared to the relatively “clean” Mexican incident. The health toll of the Mexican accident is not yet known, although it seems likely that whoever removed the sources from the irradiator head would have received enough radiation to cause severe radiation sickness or death.

New Delhi, India, 2010

In 2010 the University of Delhi became aware of a cobalt irradiator that had been in storage for over a quarter century. Cobalt-60 has a half-life of only 5.27 years; after five half-lives the radioactivity had decayed to only about 3% of the original activity. But 3% of a large number can still be significant – when the university decided to simply sell the entire irradiator off as scrap metal there were still about 20 curies of activity remaining, enough to be deadly under the right circumstances.
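
The decay arithmetic here is the standard exponential law; a minimal sketch (I have assumed roughly 26 years of storage, since the exact interval isn’t given):

```python
def co60_fraction_remaining(elapsed_years, half_life_years=5.27):
    """Simple exponential decay: A = A0 * 2**(-t / T_half)."""
    return 2 ** (-elapsed_years / half_life_years)

years_in_storage = 26                      # assumed; "over a quarter century"
fraction_left = co60_fraction_remaining(years_in_storage)
implied_original_ci = 20 / fraction_left   # working backwards from ~20 Ci left

print(f"Fraction remaining after {years_in_storage} years: {fraction_left:.1%}")
print(f"Implied original source strength: ~{implied_original_ci:.0f} Ci")
# ~3% remaining, implying a source of roughly 600 Ci when new
```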

In this case, over 100 pieces of radioactive metal were scattered through a number of scrap metal yards in the Delhi area, and some pieces ended up in the hands of eight workers at one of the yards. One worker received a dose of over 300 rem and died of radiation sickness; two other workers developed radiation sickness but eventually recovered. After being informed of the incident the Indian government scoured the scrap metal yards, recovering (they think) all of the radioactivity. Interestingly, though, a few years later some contaminated metal products made of Indian stainless steel showed up in the US (I wrote about this incident in two earlier postings to this blog). This suggests that either additional pieces remained at large or that another Indian Co-60 source was lost and never reported. Either way, this is another incident in which radioactive materials were disposed of improperly and without adequate checks (not to mention without proper radioactive materials security).

Lessons recognized

There are more examples. A source was lost in Mexico in 1984 that ended up melted with scrap metal – it was discovered when a load of contaminated metal was picked up in the US. In Bolivia an industrial radiographer was unable to retract a source into its shield and, instead of measuring radiation levels to confirm the location of the source, he simply bundled everything up and put it in the cargo area of a bus, exposing the passengers to (luckily) low doses of radiation. And other radiation incidents have happened on every continent over the last half-century and more. But there are some common threads woven through most of these incidents that are worth trying to tease out, to see if we can recognize the lessons.

One of these is that most of the lost sources were not properly secured. Had the Mexican source, for example, been properly guarded the truck might not have been stolen; had it been shipped in an appropriate container it could not have been opened by the thieves and there would have been no exposure. Similarly, the Goiania source was simply left behind in an abandoned building, making it easy pickings for the scrap metal scavengers. Proper attention to securing radioactive sources would have saved lives.

Another common theme is that many sources were being used by personnel who neglected to perform proper radiation surveys. This might not have made a difference in Mexico earlier this month, but a simple radiation survey would have shown the people at the University of Delhi that the cobalt in their irradiator had not yet decayed to stability – this would have saved at least one life and would have spared the remaining victims their radiation sickness. Radiation surveys would also have shown that sources had become unshielded in accidents that occurred in Iran, Bolivia, Turkey, and elsewhere. Part of the problem here is that many of those tasked with using or safeguarding these sources were not radiation safety professionals, who would have understood the risks posed by high-activity sources and would almost certainly have performed the surveys that would most likely have averted these tragedies.

The final commonality among the incidents noted here (and others that have taken place) is the relative paucity of effective regulatory oversight. While a great many nations adhere (on paper) to standards developed by the International Atomic Energy Agency, they may lack the ability or the trained personnel to enforce their regulations. In fact, I have visited some nations in which radioactive materials users had never seen a government inspection, and even some in which the users were unaware that their nations had radiation regulations at all (in one case, I visited an industrial radiographer who was using an aged copy of our own American regulations, unaware that his nation had adopted IAEA standards). In spite of my own disagreements with regulators from time to time, these accidents and my own experiences have convinced me that regulatory oversight is essential, if only to keep licensees on their toes. The lack of such oversight makes it all too easy for minor errors to turn into something potentially (or actually) life-threatening.

One of the things I found in the Navy is that most accidents are the result of multiple failures and that the process leading up to an accident can be interrupted at any of these steps. In the most recent accident, the use of a proper shipping container, proper security procedures, and appropriate regulatory oversight were all lacking – attend to any one of these factors appropriately and the accident would not have occurred. In a safe system a single failure should not put lives or health at risk. At this point it’s too late to help the people who were presumably exposed in Mexico, and too late to help the others exposed in India, Brazil, Iran, and so many other nations. But one can hope that other nations in which potentially dangerous radioactive sources are in use (virtually every nation on Earth) will not only recognize these lessons, but will learn from them as well. We have over a century of experience in working with radiation and we know how to do so safely – how to manage the risks so that nobody need be harmed. It would be a shame if others in coming years were to be harmed by something that is relatively easily controlled, simply because the lessons of past mistakes were recognized – but not learned.

Final note: Because of the holidays, there will be no new posting here until the second week of 2014 as I’ll be out of town with family. But stay tuned because there’s a lot more to discuss – claims that Fukushima’s spent fuel poses a threat to the West Coast, concerns that an India/Pakistan nuclear exchange could launch a nuclear winter, killing up to 2 billion people, and more. For those of you who feel as though two weeks off is more than you can handle, there are a number of my early postings that you might not yet have read – feel free to peruse and post your comments on those if you feel it appropriate. And to everyone, whatever end-of-the-year holiday you prefer, I hope it’s a happy one for you and for those you care about.


The Mexican radiation accident (Part I)

Most news stories involving radiation are, to be blunt, overblown. Radiation can be dangerous, but the risk it actually poses is usually far lower than what the media stories would have us believe. So my first inclination when I hear about another story involving “deadly radiation” is to be skeptical. And then every now and again there’s the exception – a story about radiation that’s not overblown and an incident in which there is a very real risk; sometimes an incident in which lives are put at risk or even ended. Last week we had the latter sort of radiation story, and it’s worth a little discussion.

First, a short recap. A cancer therapy clinic in Tijuana, Mexico was shipping a highly radioactive radiation therapy source to Mexico’s radioactive waste disposal facility near the center of the nation – at the time of the theft the source consisted of over 2500 curies of cobalt-60. Auto theft is common in Mexico – the truck driver claims he was sleeping in the truck at the side of the road when armed thieves ordered him out and stole it, source and all. There is every indication that the thieves were unaware of the source itself – that they were after the truck – and history bears this out, since there have been a number of similar thefts (albeit with lower-activity sources) in recent years. Anyhow, the thieves seem to have removed the source from the back of the truck; it was found at the side of the road several miles from where the abandoned truck was located. From here things get a little speculative – a Mexican official feels it likely that at least a few of the thieves were exposed to fatal doses of radiation, and a half-dozen people came forward to be tested for radiation sickness (the tests came back negative). As of this writing, the source is under guard by the Mexican military, with a perimeter about 500 meters (a little over a quarter mile) away. So with this as a backdrop, let’s take a look at the science behind all of this.

Dose and dose rates

First, let’s think about the radiation dose rates and doses – the most important question in any radiation injury situation is how much dose a person received.

Radiation dose is a measure of the amount of energy deposited in a receptor – in this case, the receptor would be the thieves, but it could just as easily be a radiation detector. Cobalt-60 emits two high-energy gamma rays; one curie of Co-60 will expose a person to a dose rate of about 1.14 R/hr at a distance of one meter (about arm’s length). So 2500 curies of activity will produce a dose rate of about 2850 R/hr a meter away. A radiation dose of 1000 rem is invariably fatal, so a person would receive a fatal dose of radiation in a little over 20 minutes. Without medical treatment a dose of 400 rem is fatal to half of those who receive it – a person would receive this dose in about eight minutes a meter away. And the roughly 100 rem it takes to cause radiation sickness would be accumulated in only 2-3 minutes (although the symptoms might not manifest themselves for a few weeks). No two ways about it – this was a very dangerous source.
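
Here are those numbers as a quick sketch, treating roentgens and rem as roughly interchangeable for gamma exposure (a common field approximation):

```python
GAMMA_RATE_CONSTANT = 1.14   # R/hr per Ci of Co-60 at 1 meter
activity_ci = 2500

dose_rate_r_per_hr = GAMMA_RATE_CONSTANT * activity_ci   # ~2850 R/hr at 1 m
print(f"Dose rate at 1 meter: {dose_rate_r_per_hr:.0f} R/hr")

# Time to accumulate the doses discussed above, at arm's length:
thresholds = [(100, "onset of radiation sickness"),
              (400, "fatal to ~half without treatment"),
              (1000, "invariably fatal")]
for dose_rem, meaning in thresholds:
    minutes = dose_rem / dose_rate_r_per_hr * 60
    print(f"{dose_rem:>5} rem ({meaning}): ~{minutes:.0f} minutes")
```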

Radiation dose rate drops off with the inverse square of one’s distance from a source, so doubling your distance reduces the dose rate by a factor of four (and tripling your distance, by a factor of nine). This means that distance is your friend – take a long step away and a source that can be fatal in 20 minutes at arm’s length will take 80 minutes to have the same impact – still dangerous, but a little less immediately so. At a distance of 100 meters the dose rate will be almost 0.3 R/hr – about the same dose in one hour that most of us receive in an entire year from natural sources. The perimeter was set up at a distance of 500 meters – the dose rate from an unshielded source there will be about 12 mR/hr – at least 500 times normal environmental radiation levels, but well within the realm of safety. I have some radiation detectors that will accurately measure radiation dose rates only slightly higher than natural background levels – to get to the point at which the stolen source would fail to show up on these more sensitive detectors I’d have to be close to ten miles away. This doesn’t mean that the radiation is dangerous at these distances – just that it would be detectable.
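
And the distance argument, continuing from the 2850 R/hr starting point above – a minimal sketch that ignores air attenuation and scatter, which is a reasonable approximation at these ranges:

```python
DOSE_RATE_AT_1M = 2850.0   # R/hr, from the calculation above

def dose_rate(distance_m):
    """Inverse-square falloff for an unshielded point source."""
    return DOSE_RATE_AT_1M / distance_m ** 2

for d in (1, 2, 100, 500):
    r_per_hr = dose_rate(d)
    if r_per_hr >= 1:
        print(f"{d:>4} m: {r_per_hr:,.0f} R/hr")
    else:
        print(f"{d:>4} m: {r_per_hr * 1000:.0f} mR/hr")
# 100 m -> ~285 mR/hr; 500 m -> ~11 mR/hr, matching the perimeter figure
```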

Why Co-60?

Of course, a good question to ask is why there was cobalt-60 on the truck in the first place. And this gets a little more involved than one might think, going back over a century.

It didn’t take long for people to realize that radiation can burn the skin – within the first decade after its discovery there was anecdotal evidence of its ability to cause harm, which was confirmed by experiments. And it didn’t take much of a leap of imagination to figure out that, if radiation can burn healthy skin then it can also be used to burn out unwanted tissue – such as cancers. So doctors began experimenting, settling quickly on radium as a cancer therapy. Radium, though, has its own problems, including the fact that it decays to radioactive progeny nuclides – with the advent of the nuclear age scientists found they could produce a highly radioactive nuclide of cobalt that emitted high-energy gammas that were ideal for reaching even those cancers buried deep within the body. Other nuclides were also discovered – Cs-137 and Ir-192 are among them – but cobalt does a great job.

For over a half-century these artificial radionuclides ruled the roost in radiation oncology, joined by iodine (I-131) for treating cancers of the thyroid. But radionuclides have their own problems, chief among them being that they can never be turned off (so they always pose a risk) and that they require a costly radioactive materials license. As technology improved, many of the more advanced nations began using linear accelerators to produce more finely tuned beams of radiation – today Co-60 is rarely used for cancer therapy in the US, Japan, or Western Europe. On the other hand, linear accelerators are expensive and they need a fairly high level of infrastructure to provide the precise, stable power these touchy machines require. So we still find cobalt irradiators in much of the developing world.

Mexico (among other nations) is in the process of swapping out its irradiators for linear accelerators, including at the Tijuana cancer clinic where this source originated. But with a half-life of 5.27 years it’s not practical to just let the cobalt decay to stability – a process that could take two generations or longer. So at some point these obsolete sources must be shipped for disposal – that was (and apparently still is) the fate in store for the Tijuana source.

But wait – there’s more!

There’s more to this story than what I’ve gone into here, but space keeps me from getting into all the questions it raises. In particular, there have been a number of incidents over the last half-century or so in which radioactive sources such as this one have cost lives, contaminated consumer products, and contaminated scrap metal mills. Next week we’ll talk about some of these incidents as well as the risk posed by these sources should they go accidentally or deliberately astray. At the same time we’ll talk about radioactive materials security and what protective actions make sense.


That Fracking Radon

Although there continues to be a great deal of comment-worthy material about Fukushima (including the latest idiotic suggestion that a collapse of the spent fuel storage in Unit 4 might call for the evacuation of California), I’d like to take a bit of a break from this apparently never-ending story. This is partly because I’d like to cover topics other than Fukushima (although the continuing scientifically ill-informed silliness does make it a fertile field), partly because I need more time to research some of the Fukushima stories, and partly because, with winter upon us, it seems a good time to look at where an increasing amount of our natural gas comes from and whether or not it brings any radiological hazards with it. In this case, the question is whether or not fracking (hydraulic fracturing) releases the torrents of radon that many claim it does. But first, a little background.

Where the gas and radon come from

Life first appeared on Earth about four billion years ago, and as soon as it appeared organisms began to die, drifting to the bottom of whatever sea they lived in, where they were covered with silt and clay. Over time these remains accumulated to form sizeable deposits; over a longer time they became deeply buried. The Earth’s geothermal gradient is about 25˚ C per km of depth (away from the margins of tectonic plates and away from hot spots); bury something deeply enough and it begins to cook. Heat organic material to about 100˚ C (equivalent to burial to about 4 km), subject it to high pressures, and cook for a few tens of millions of years, and the rocks start to fill with natural gas. Petroleum forms at lower temperatures; heat the rock too much and the hydrocarbons are cooked away altogether. Also contained in the rock are huge quantities of brine – water from the ancestral sea in which the original organisms grew and died.

So that’s where the gas comes from; the next part of the question is how the radioactivity gets into the gas. And this part is pretty interesting.

Uranium chemistry is about as complex as that of any natural element – one aspect is that uranium, while soluble in oxygen-saturated water, is insoluble in waters that lack oxygen. During the first few billion years of Earth’s history the atmosphere was largely anoxic and uranium was fairly immobile in the environment; after that time oxygen began to build up in the atmosphere and to dissolve into the seawater. At about this time uranium began to mobilize and move through the environment. And when it entered regions that contained the decaying remains of the early organisms it precipitated out of solution. With time the uranium decayed, forming radioactive progeny which, themselves, decayed – after over a dozen such steps the uranium finally turned into stable lead. But it’s the intermediate steps that are important, because they include radium and radon – over the eons, the natural gas deposits accumulated radioactivity, and if we fast-forward to the present we find that virtually every natural gas deposit on the planet (oil and coal as well) contains radioactivity. Recovering natural gas not only liberates the radon contained in the gas itself; radium and other radionuclides are also dissolved in the concentrated brine – they precipitate out of solution, contaminate the scale that lines natural gas pipelines, and settle out as sludge in the holding tanks. And this is important to remember – every natural gas deposit contains this radioactivity, not only the gas recovered by fracking.

Radon in the Marcellus Shale

Getting gas out of a formation is not as easy as just drilling a hole and letting it flow – if the rock is porous then this will happen, but many rocks just aren’t all that porous, and shale is a particularly “tight” rock. But a huge percentage of natural gas formed in rocks that derived from the mud and silt that covered the ancient organisms – sediments that formed fairly impermeable shale. To get appreciable amounts of gas from these tight deposits we have to find a way to break them up – by forcing fluid in under high pressures and by forcing sand into the formation as well to prop open the cracks formed by the high-pressure fluids. This particular posting is not the place to discuss all of the issues of this controversial topic – all that I’ll tackle is the question of radon.

Among the concerns raised by drilling into shale for natural gas recovery is that radon will enter the natural gas supply. As I mentioned above, there’s radon in all natural gas, so the question isn’t whether or not there’s radon in the gas, but whether there is more radon in gas that originates in shale formations than in other natural gas and, if so, whether this poses a health risk. In January 2012 a report authored by anti-nuclear activist Marvin Resnikoff suggested that using natural gas from the Marcellus Shale (a rock formation that extends through much of New York and Pennsylvania) would release enough radon to cause tens of thousands of deaths annually. Resnikoff’s conclusions were refuted by a July 2012 report written by Lynn Anspaugh, a respected radiation scientist who has served on a large number of highly respected national and international radiation advisory bodies (a complete list is included in his resume, which is appended to the report linked to above).

The crux of Resnikoff’s argument is his claim that natural gas from the Marcellus Shale is extraordinarily rich in radon, that this radon will still be in the gas when it reaches homes in New York City, and that this extra radiation exposure places New Yorkers at risk. Resnikoff calculated that there could be as many as 30,000 additional annual cancer deaths from this radiation exposure. But, having read Resnikoff’s report, I have to say that I don’t place much credence in his conclusions. Here’s why.

Resnikoff makes three crucial errors in his report:

  1. He failed to actually measure radon concentrations in the natural gas at any point from the wellhead to the customer’s home. Instead he relied on a series of calculations based on shaky information found in preliminary studies performed a number of years ago.
  2. He vastly overestimates the amount of radon in Marcellus Shale natural gas, compared to the radon concentrations that have actually been measured.
  3. He overestimates the risk from exposure to low levels of radon, ignoring the advice of the EPA and of both national and international radiation advisory bodies.

Anspaugh points out that, in addition to these mistakes, Resnikoff’s calculations are based on a series of parameters for which he provides no basis – Resnikoff provides no reference for any of the values he uses, and neither does he account for the inevitable variability and uncertainty in those values. This is contrary to normal scientific methodology. And, as Anspaugh notes, Resnikoff also failed to make a single radon measurement that could have either supported or refuted his argument – he never measured the actual radon concentrations in either the natural gas supply or in the homes he was concerned about.

When radiation dose calculations are based on actual radon concentrations it turns out that the added radiation dose is trivial – on the order of a few tens of microsieverts (a few millirem) annually. It’s only when these trivial doses are multiplied by millions of people and extended over a lifetime that they seem to become significant. But this logic is flawed – ten million people exposed to 10 µSv annually (we are typically exposed to about 3000-4000 µSv annually from natural radiation) are no more likely to develop cancer than ten million people who each have a 1-gram rock thrown at them. True – the cumulative dose might be 100 Sv (or 10 tons), which sounds like enough to cause harm. But what we’re interested in is the dose to the individual. Throw a small pebble at each of ten million people and you’ll have a bunch of irritated folks, but not a single crushing death, in spite of the cumulative “dose.” Similarly, a dose of 10 µSv is a trivial dose of radiation no matter how many people receive it. According to the International Commission on Radiological Protection, “Collective effective dose is not intended as a tool for epidemiological risk assessment, and it is inappropriate to use it in risk projections. The aggregation of very low individual doses over extended time periods is inappropriate, and in particular, the calculation of the number of cancer deaths based on collective effective doses from trivial individual doses should be avoided.” Resnikoff either ignored or was unaware of this guidance.
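
To make the flaw concrete, here is a minimal sketch of the two ways of running the numbers (the 5%-per-sievert slope is the conservative LNT factor; everything else is just multiplication):

```python
people = 10_000_000
dose_per_person_sv = 10e-6    # 10 microsieverts apiece
lnt_slope_per_sv = 0.05       # conservative LNT slope: 5% fatal cancers per Sv

individual_risk = dose_per_person_sv * lnt_slope_per_sv
collective_dose_person_sv = people * dose_per_person_sv

print(f"Risk to any one person: {individual_risk:.1e} (about 1 in 2,000,000)")
print(f"Collective dose: {collective_dose_person_sv:.0f} person-Sv")
print(f"'Predicted deaths' from aggregation: "
      f"{collective_dose_person_sv * lnt_slope_per_sv:.0f}")
# Aggregating trivial doses 'predicts' 5 deaths -- exactly the use of
# collective dose that the ICRP statement above says to avoid
```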

There are plenty of concerns about the use of hydraulic fracturing to extract natural gas from shale formations, just as there are plenty of reasons why this technique was developed and is being used. But the risk of radiation exposure to the users of this natural gas is a specious argument that tends to obscure, rather than to illuminate, this question.


Death by polonium?

Last year I posted a piece that, in addition to two other short bits, briefly discussed the possibility that Yasser Arafat might have been poisoned with polonium-210 (Po-210) in 2004, including the apparent finding of elevated levels of the nuclide in Arafat’s remains. At the time I was dubious that there would be enough Po-210 left after so many years to reliably detect, to quantify the amount that might have been administered, or even to differentiate between the Po-210 that is naturally a part of our biochemistry and what might have been administered as a poison. It looks as though I might have been wrong to dismiss the idea – here’s the latest news on the topic.

Earlier this week Al Jazeera America published a story on their website discussing the further testing that was done on Arafat’s belongings, his remains, and his gravesite. Al Jazeera included a link to the forensic report by the University Center of Legal Medicine in Lausanne and Geneva in Switzerland. To cut to the chase, the authors of the report felt that their “results moderately support the proposition that (Arafat’s) death was the consequence of poisoning with polonium-210.” The work appears to have been undertaken with care and to have no flaws that would invalidate the authors’ conclusions – what I’d like to do in this week’s posting is to discuss some of the tests that were run, what was found, and the science behind them.

Where Po-210 comes from and where it’s used

For starters, Po-210 is found in nature – as U-238 decays towards stable lead (Pb-206) it passes through a number of intermediate radionuclides. Radium (Ra-226) is one, radon (Rn-222) is another, and Po-210 is in the decay chain as well, as a decay product of Pb-210. So any piece of rock or dirt will have some Po-210 – but that’s not where commercial Po-210 comes from. There’s far too little Po-210 in nature to make it worthwhile to go to the trouble of separating it out; instead, a chunk of bismuth (Bi-209 to be precise) is placed in a nuclear reactor, where it captures a neutron to form Bi-210. With a half-life of only about five days, the Bi-210 decays by beta emission to form Po-210.

World-wide production of Po-210 is about 100 grams annually (about 16,600 TBq, or nearly 450,000 curies), almost all of it produced in Russia. It used to be used as a neutron source, as a source of heat to keep delicate electronic components warm, and as a power source for satellites (polonium is so intensely radioactive that a 1-gram source will heat itself up to over 500˚ C, producing up to 140 watts of power in a radioisotope thermoelectric generator, or RTG). Today, however, Po-210 is used only as a static eliminator for industries that manufacture thin films, strands, and powders.
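
Those production and heat figures follow directly from Po-210’s half-life; here is a minimal sketch of the arithmetic (the ~5.3 MeV alpha energy and ~138-day half-life are standard nuclide-table values):

```python
import math

AVOGADRO = 6.022e23
HALF_LIFE_S = 138.4 * 86400          # Po-210 half-life: ~138 days
ALPHA_ENERGY_J = 5.3e6 * 1.602e-19   # ~5.3 MeV alpha particle, in joules

atoms_per_gram = AVOGADRO / 210
bq_per_gram = math.log(2) / HALF_LIFE_S * atoms_per_gram
watts_per_gram = bq_per_gram * ALPHA_ENERGY_J

print(f"Specific activity: {bq_per_gram / 1e12:.0f} TBq/g "
      f"({bq_per_gram / 3.7e10:,.0f} Ci/g)")
print(f"Decay heat: {watts_per_gram:.0f} W/g")
print(f"100 g/year of production: {100 * bq_per_gram / 1e12:,.0f} TBq")
# ~166 TBq/g and ~140 W/g -- and note that 1 GBq (the dose estimated
# later in this post) works out to only ~6 micrograms of material
```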

There is also Po-210 naturally in our bodies – we all inhale traces of dust as well as radon (which decays to form Po-210), not to mention the bits of dust that drift into our food. Minute traces of Po-210 are also found in the fruits and vegetables that we eat – the bottom line is that everybody has some level of polonium in his or her body. Smokers have even more polonium than the rest of us because tobacco (and other broad-leaved plants) contains elevated amounts of the nuclide, so smokers’ excreta (urine and feces) will also have higher levels of polonium. All of this is important because Arafat’s body would be expected to have traces of Po-210 from natural sources and, if he smoked, to have even more. Thus, the question the Swiss scientists had to answer was not “Did we find Po-210 in Arafat’s body?” but, rather, “Did we find significantly more than can be accounted for from natural uptake (inhalation and ingestion), from radon that might have deposited Po-210 in the grave, or from smoking or other sources?” And, of course, “Is there enough Po-210 left to even measure?”

Measuring Po-210

Measuring Po-210 is fairly straightforward – radiochemistry is a well-established discipline, and radiochemists are very good at extracting polonium from samples and identifying it. They also do a great job with other nuclides, including the Pb-210 that can act as a parent to the Po-210. There’s a laboratory term – the lower limit of detection (LLD) – that refers to the lowest level of any nuclide that can be unambiguously identified in laboratory testing. Any detection that’s less than the LLD is problematic – it could be instrument or laboratory error, or even a statistical anomaly – while any detection higher than this level is probably an actual detection.
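
The LLD isn’t an arbitrary cutoff; it falls out of counting statistics. Here is a minimal sketch of the classic formulation (Currie’s), which most radiochemistry labs use in some form:

```python
import math

def currie_lld(background_counts):
    """Currie's classic detection limit in net counts, given a paired
    background measurement: L_D = 2.71 + 4.65 * sqrt(B).

    A net result above L_D has good (~95%) odds of being real; results
    below it are increasingly likely to be noise or statistical anomaly.
    """
    return 2.71 + 4.65 * math.sqrt(background_counts)

for bkg in (100, 1_000, 10_000):
    print(f"Background of {bkg:>6} counts -> LLD of ~{currie_lld(bkg):.0f} net counts")
```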

The Swiss scientists sampled Arafat’s belongings – clothing that he wore in his last weeks of life as well as a few other items – and they did find traces of Po-210 in items that came in contact with his body fluids. They compared these levels to similar items bought new in the store to see if these items (cotton clothing for example) normally had elevated levels of polonium. They concluded that the levels of polonium in Arafat’s personal effects were higher than normal – that they were apparently contaminated with polonium. Not only that – and more important – the levels they found were higher than the LLD.

In addition to measuring Arafat’s clothing, which could have been contaminated somehow, the scientists wanted to find out how much polonium was in Arafat’s body. To do this they had to exhume his body. But they also had to account for the polonium naturally present in the soil as a possible source of any polonium they might find; before even opening the grave they sampled for radon and calculated how much it might have added to the remains they were about to sample. Only with this information could they determine whether or not what they found was more than expected. And what they found was that there appeared to be higher-than-expected levels of polonium in Arafat’s body at the time he died. Smokers can have higher levels of polonium in their bodies, but not enough to account for the samples – and Arafat didn’t smoke at the end of his life anyway. The bottom line is that the examination of Arafat’s body and grave revealed more Po-210 than could be explained by natural radioactivity. Based on their analytical results the scientists estimated that Arafat might have been dosed with approximately 1 GBq (about 27 mCi) of Po-210 – comparable to the 1-3 GBq that Alexander Litvinenko, the former Russian spy who was poisoned with Po-210, is estimated to have ingested prior to his death in November 2006.

Medical symptoms

So laboratory results suggest that Arafat was dosed with a large amount of polonium – it’s reasonable to wonder if his symptoms were consistent with what would be expected from this level of polonium ingestion.

The answer isn’t really clear-cut. Arafat came down with acute nausea a few hours after eating a meal – nausea is one of the symptoms of acute radiation sickness. Over the next few weeks his platelet counts dropped steadily, which is also consistent with radiation sickness. In his last week or so, Arafat showed signs of liver and kidney damage as well as damage to the gastrointestinal tract; all of these are also consistent with what the medical literature reports for polonium poisoning. Finally, Arafat died a little more than a month after the proposed poisoning date – similar to the length of time Litvinenko survived after his poisoning.

On the other hand, other blood cell counts didn’t change in the manner expected of a person who died of radiation sickness. He also kept his hair, unlike Litvinenko – both observations are inconsistent with polonium poisoning. The bottom line is that the medical examinations and Arafat’s symptoms could point towards polonium poisoning, but there could be other explanations as well.

The report’s conclusions

The Swiss team considered a number of factors, some of which seem to support the possibility of foul play and some of which do not. They also considered a number of possible alternative explanations for the symptoms and lab results that seemed to support the hypothesis of polonium poisoning. After considering all of the laboratory information and medical symptoms they concluded that “the results moderately support the proposition that (Arafat’s) death was the consequence of poisoning with polonium-210.”

I will not weigh in on the group’s conclusions because they were the ones who performed the study and who evaluated the evidence. And I certainly won’t speculate on who might have planned and done the deed – if that’s what happened – I’m a scientist and, as such, it’s best that I avoid speculating about criminal, terrorist, or political motivations. All I am really competent to comment on is the science and the manner in which it was used.

What I can say is that there didn’t seem to be any gaping holes in their approach, in their work, or in their interpretation of the data. It doesn’t seem as though they missed anything or that they steered the results towards a pre-determined conclusion. As of now, I guess you’d say that the jury is still out – the Swiss report doesn’t conclude definitively that Arafat was poisoned, just that it seems more plausible than the alternative. Having said that, it’s possible that they made a mistake somewhere – there are two reports yet to come in (one Russian and one French) and we’ll have to see what they conclude.


Once more into the breach

I’d been planning on waiting a little longer before returning to the topics of Fukushima and radiation health effects, but a particularly egregious New York Times op-ed piece deserves some attention. So once more into the breach.

Writing in the October 30 New York Times, pediatrician and anti-nuclear activist Helen Caldicott used the nuclear reactor accident in Fukushima as an opportunity to express her concerns about nuclear energy – a calling she has followed since the Three Mile Island reactor accident. Unfortunately, Caldicott included a number of errors in her editorial that are sufficiently serious as to invalidate her conclusions. I’d like to take an opportunity to take a look at these mistakes and to explain the science behind them.

In the first paragraph of her article, Caldicott states that “the mass of scientific and medical literature…amply demonstrates that ionizing radiation is a potent carcinogen and that no dose is low enough not to induce cancer.”

To the contrary, even the most conservative hypothesis (linear no-threshold) holds that low doses of radiation pose very little threat of cancer. Using a slope factor of 5% added risk of cancer fatality per 1 Sv (100 rem) of exposure, the risk of developing cancer from 1 rem of radiation is about 0.05% (5 chances in 10,000). This risk is far lower than the risk of developing cancer from habitual smoking, from working with a number of solvents (e.g. benzene) or laboratory chemicals, and so forth. Epidemiologists have noted no increase in cancer rates among people living in areas with high levels of natural background radiation, or among the lowest-dose groups of atomic bomb survivors (in fact, people living in the states with the highest levels of natural radiation have lower cancer rates than those who live in the lowest-dose states). Not only that, but age-adjusted cancer rates have dropped steadily (with the exception of smoking-related cancers) over the last century, in spite of dramatic increases in medical radiation exposure. In the words of respected radiation biologist Antone Brooks, these observations show us that “if (low levels of) radiation cause cancer it’s not a heavy hitter.” The bottom line is that, even if the lowest doses of radiation can cause cancer (which has not yet been shown to be either correct or incorrect), radiation is a weak carcinogen – not the “potent carcinogen” that Caldicott would have us believe.
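
For concreteness, here is that risk arithmetic as a short sketch (the 5%-per-sievert slope is the conservative LNT value cited above; the ~20% baseline lifetime fatal-cancer risk is my assumed round figure for a US resident):

```python
LNT_SLOPE_PER_SV = 0.05      # 5% added fatal-cancer risk per sievert (LNT)
BASELINE_FATAL_RISK = 0.20   # assumed rough US lifetime fatal-cancer risk

for dose_rem in (0.3, 1, 10, 100):
    added = (dose_rem / 100) * LNT_SLOPE_PER_SV   # 100 rem = 1 Sv
    print(f"{dose_rem:>6} rem: +{added:.3%} added risk "
          f"(vs. a baseline of ~{BASELINE_FATAL_RISK:.0%})")
# 1 rem -> +0.05%, i.e. 5 extra chances in 10,000 on top of ~2,000 in 10,000
```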

In the second paragraph of her article, Caldicott states that “Large areas of the world are becoming contaminated by long-lived nuclear elements secondary to catastrophic meltdowns: 40% of Europe from Chernobyl, and much of Japan.”

This is a difficult statement to parse because it is so nebulous. If, by “contaminated,” Caldicott means that radionuclides are present that would not otherwise be there, she is understating the matter – you can find traces of artificial radionuclides across virtually every square mile of Europe, Asia, and North America, not just the 40% she claims. But all that this means is that we can detect trace levels of these nuclides in the soil – doing the same we can also find traces from the atmospheric nuclear weapons testing of the 1940s through the 1960s. And for that matter, we can find lead contamination over virtually the entire world as well, from the days of leaded gasoline. Lead contamination goes back much further, too – scientists have found traces of lead in Greenland glaciers that date back to the Roman Empire. But nobody is getting lead poisoning from the Ancient Romans’ pollution, just as nobody is getting radiation sickness (or cancer) from the minute traces of Cs-137 and Sr-90 that can be found across the Northern Hemisphere. But Caldicott can’t acknowledge that artificial nuclides have been detectable around the world for nearly 70 years, because this would shatter her claim that radioactive contamination from Fukushima and Chernobyl is causing death and destruction in Europe and Japan.

In the third paragraph, Caldicott states that “A New York Academy of Science report from 2009 titled ‘Chernobyl’ estimates that nearly a million have already died from this catastrophe. In Japan, 10 million people reside in highly contaminated locations.”

Caldicott is correct that the NYAS report claimed over a million deaths from Chernobyl. However, the report was itself highly criticized for being scientifically implausible – the NYAS is a respected organization, but in this case its conclusions are at odds with the reality noted on the ground by the World Health Organization. Specifically, the WHO concluded that in the first 20 years fewer than 100 people could be shown to have died from radiation sickness and radiation-induced cancers, and it further concluded that, even using the worst-case LNT model, fewer than 10,000 would eventually succumb to radiation-induced cancer as a result of this accident. This is not a trivial number – but it is less than 1% of the one million deaths the NYAS claims. And in fact the actual number is likely to be far lower, as physician Michael Repacholi noted in an interview with the BBC. Even the WHO’s International Agency for Research on Cancer acknowledges that “Tobacco smoking will cause several thousand times more cancer in the same population.” Even if contamination from Chernobyl and Fukushima were sufficient to cause eventual health problems, we could do far more good for the public by devoting attention to smoking cessation (or, for that matter, to childhood vaccinations) than by spending hundreds of billions of dollars cleaning up contamination that doesn’t seem to be causing any harm.

In the fourth paragraph of her piece, Caldicott notes that “Children are 10 to 20 times more radiosensitive than adults, and fetuses thousands of times more so; women are more sensitive than men.”

To the contrary – the National Academies of Science published a sweeping 2006 report that summarizes the state of the world’s knowledge on the “Health Risks from Exposure to Low Levels of Ionizing Radiation,” in which they conclude that children are between two and three times as sensitive to radiation as adults – more sensitive than adults, but a far cry from Caldicott’s claim.

The reproductive effects of radiation are also well-known – fetal radiation exposures of less than 5 rem are incapable of causing birth defects according to our best science, and the Centers for Disease Control flatly states that exposure to even higher radiation doses is not a cause for alarm under most circumstances. This conclusion, by the way, is based on studies of hundreds of thousands of women who were exposed to radiation from medical procedures as well as during the atomic bombings in Japan – it is based on a tremendous amount of hard evidence.

This claim of Caldicott’s, by the way, is particularly egregious and has the potential to do vast harm if it’s taken seriously. Consider – in the aftermath of the Chernobyl accident it is estimated that over 100,000 women had abortions unnecessarily because they received poor medical advice from physicians who, like Caldicott, simply didn’t understand the science behind fetal radiation exposure. There are estimates that as many as a quarter million such abortions took place in the Soviet Union, although these numbers can’t be confirmed.

But even in this country we see this level of misinformation causing problems today – during my stint as a radiation safety officer I was asked to calculate nearly 100 fetal radiation dose estimates, primarily for pregnant women who received x-rays following serious traffic accidents, and many of the women were seriously considering therapeutic abortions on the advice of their physicians. When I performed the dose calculations, there was not a single woman whose baby had received enough radiation to cause problems. And it doesn’t stop there – we also had parents who refused CT scans for their children, preferring exploratory surgery and its attendant risks to the perceived risks from x-ray procedures. The bottom line is that this sort of thinking – that children and developing babies are exquisitely sensitive to radiation – can cause parents to choose needless abortions and can place children at risk; by espousing these views, Caldicott is transgressing the Hippocratic oath she took to “first do no harm,” and she should be taken to task for doing so.

Finally, in the last paragraph of her tirade, Caldicott claims that “Radiation of the reproductive organs induces genetic mutations in the sperm and eggs, increasing the incidence of genetic diseases like diabetes, cystic fibrosis, hemochromatosis, and thousands of others over future generations. Recessive mutations take up to 20 generations to be expressed.”

All that I can say to this is that Caldicott decided to go out with a bang. The fact is that there is not a single case in the medical or scientific literature in which birth defects or genetic disease is linked to pre-conception radiation exposure. This is not my conclusion – it’s the conclusion of Dr. Robert Brent, who knows more about this topic than anyone else in the world. Eggs and sperm might be damaged, but Dr. Brent notes that there is a “biological filter” that prevents damaged cells from going on to form a baby. Another line of reasoning supports Brent’s claim – areas with high levels of natural radiation show no increase in birth defects compared to areas with lower levels of natural radiation. Caldicott’s claim that low levels of radiation exposure cause long-term genetic damage is simply not supported by the scientific or medical literature or by any observations that have been made.

Caldicott’s claim that radiation is also responsible for a host of genetic diseases is similarly dubious. The world’s premier radiation science organizations (the International Commission on Radiological Protection, the United Nations Scientific Committee on the Effects of Atomic Radiation, and the National Council on Radiation Protection and Measurements) all agree that, if radiation contributes to multi-factorial disease, then the effect is very weak indeed – possibly too weak to be distinguished from natural sources of these diseases. Specifically, UNSCEAR calculated that – if pre-conception radiation exposure can cause these problems – exposing each generation of the population to 1 rem of radiation might lead to an additional 100 cases of dominant genetic disease and 15 cases of recessive genetic disease per million births per generation (the ICRP calculated similar, but lower, rates). This is far lower than the background incidence of genetic disease in the population as a whole. Oh – and UNSCEAR also determined that “multifactorial diseases are predicted to be far less responsive to induced mutations than Mendelian disease, so the expected increase in disease frequencies are very small” – a statement with which the ICRP is in agreement. In other words, Caldicott’s claim runs contrary to the best work of the most respected scientific organizations that specialize in radiation health effects.
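
To put those numbers in perspective, here’s the back-of-the-envelope comparison. The background incidence figure below is my own assumed order of magnitude for Mendelian disease (roughly 1% of births) – an assumption for illustration, not a number from UNSCEAR.

```python
# Back-of-the-envelope comparison of UNSCEAR's per-rem genetic risk estimates
# with background disease rates. The background incidence is an assumed
# order-of-magnitude figure for illustration, not a number from the post.

dominant_per_million_per_rem = 100   # UNSCEAR estimate, per generation
recessive_per_million_per_rem = 15   # UNSCEAR estimate, per generation
background_per_million = 10_000      # assumed background Mendelian incidence

excess = dominant_per_million_per_rem + recessive_per_million_per_rem
print(f"Excess cases per million births per rem: {excess}")
print(f"Relative increase over background: {excess / background_per_million:.2%}")
# Roughly a 1% relative increase per rem of per-generation exposure --
# small compared with normal variation in background rates.
```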

With respect to the length of time required for genetic effects – if any – to manifest themselves, I honestly don’t know where Caldicott pulled the number of 20 generations from. It’s a number I haven’t seen anywhere in the scientific literature, that never came up in any of the genetics classes I took in grad school, and that I have never calculated or seen calculated. As near as I can tell, she is either repeating something she heard somewhere or she made the number up to impress the reader.

Conclusion

The bottom line is that Caldicott’s editorial is grounded more in invective than in scientific or medical fact. The Fukushima accident was bad, but it pales in comparison to the natural disaster that set it off. The aftereffects of the accident are bad enough – thousands of families displaced, hundreds of thousands of Japanese evacuated from their homes, along with the stress, anxiety, and depression they have been suffering. TEPCO and the Japanese government will have to spend billions of dollars tearing down the plant and billions more cleaning up the contaminated area – in many cases, cleaning up places not because they pose a genuine risk to life and health but because contamination levels exceed an arbitrary threshold. Things are bad enough, and Caldicott is trying to score cheap points by making claims that have no connection to scientific or medical reality, simply in order to advance her anti-nuclear agenda. Her article does nothing to advance the debate – it only serves to use the tragedy in Japan to inflame the public’s fears.

The dose makes the poison

One of the most potent arguments against all things nuclear is the idea that even a vanishingly small amount of radiation exposure has the chance to cause cancer. Even if that risk is incredibly low there’s still a risk, and if a huge number of people are exposed to even a small risk then there could be a significant number of deaths. Say, for example, that the entire population of the US were exposed to something that carried a risk of one in a million – over 300 people could die nationally.

We can debate whether or not we could “see” these deaths using epidemiology (with over 500,000 cancer deaths annually in the US, even as many as 400 additional cancer deaths crammed into a single year would represent an increase of less than one tenth of one percent), but that’s not the point of this posting. Rather, the point is to discuss two fascinating papers on the origins of the hypothesis that any incremental amount of radiation exposure increases our risk of developing cancer, and that this added risk increases linearly with the amount of exposure – what is known as the Linear No-Threshold (LNT) hypothesis. Specifically, the author of these papers, respected University of Massachusetts toxicologist Edward Calabrese, presents a compelling case that the acceptance of this hypothesis as the basis of global radiation regulations is the result of a deliberate campaign that ignored a great deal of scientific evidence to the contrary. But first let’s back up a little bit to discuss what LNT is and how it’s used before digging into this matter and what it might mean.
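
Just to put numbers on the “can we see it” question: the year-to-year statistical noise in a count of 500,000 deaths is far larger than a 400-death excess. A quick back-of-the-envelope, treating annual cancer deaths as a Poisson count (a standard simplification, not a formal power analysis):

```python
# Rough statistical check on whether ~400 extra cancer deaths could be
# "seen" against ~500,000 annual US cancer deaths (numbers from the text).
import math

baseline = 500_000   # annual US cancer deaths
excess = 400         # hypothetical radiation-induced deaths in one year

sigma = math.sqrt(baseline)          # Poisson year-to-year fluctuation
print(f"Relative increase: {excess / baseline:.2%}")         # ~0.08%
print(f"Random fluctuation (1 sigma): ~{sigma:.0f} deaths")  # ~707
print(f"Signal-to-noise: {excess / sigma:.2f}")              # < 1: lost in the noise
```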

When ionizing radiation passes through a cell there’s a chance that it will interact with the atoms in that cell – it can strip electrons from neutral atoms, creating an ion pair. Where once there was a happy, electrically neutral atom there are now two ions: one with a positive charge (the atom) and the negatively charged electron ejected by the radiation. Once formed, the ions might recombine, in which case the story is over. But the ions can also interact with other atoms and molecules in the cell, forming free radicals that can then go on to interact with DNA in the cell’s nucleus. Sometimes these interactions cause DNA damage.

Of course, damaging DNA is only the first step in a process that might lead to cancer, but it’s most likely that nothing will happen. It could be that the damage is repaired by one or more of our exceptionally capable DNA repair mechanisms, and it’s also possible that any unrepaired damage will be in a stretch of “junk” DNA or in a gene that’s inactive in the affected cell. This is described in greater detail in an earlier posting in this series – for the purpose of this one, it’s safe to skip to the end, which is that the overwhelming majority of DNA damage is either repaired or has no impact on the organism (damage to junk DNA or to an inactive gene can’t go on to cause cancer). It’s only the unrepaired (or mis-repaired) DNA damage – and only damage that’s in one of very few specific genes – that can progress to a cancer.

There’s more to the whole matter than this. For example, our cells are always experiencing DNA damage at quite substantial rates – one estimate is that each cell is subject to several million DNA-damaging events per year – and the damage due to radiation is indistinguishable from that caused by other agents. So to decide how damaging a particular dose of radiation might be – to calculate the risk from that dose – we’ve got to first understand how much DNA damage the dose will cause, then determine how much of that damage goes unrepaired (or mis-repaired), compare this level of damage to the background damage that is always afflicting our cells, and finally figure out whether or not that damage will affect one of the few genes that can progress towards cancer. The important point is that DNA damage due to radiation doesn’t occur in a vacuum – it adds to the damage that is already occurring. It takes a dose of about 100 rem to double the amount of damage that occurs in a year – a dose that will increase a person’s lifetime cancer risk by about 5% according to the current thinking. This relationship is well accepted at radiation doses in excess of about 10 rem (over a lifetime – 5 rem if the exposure takes place in a very short period of time); the question is whether or not it remains constant at any level of radiation exposure, no matter how slight. This is where we get to Calabrese’s recent work.
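
For what it’s worth, the arithmetic of the LNT hypothesis itself is almost trivially simple – the controversy is over whether the straight line is the right shape at low doses, not over the math. A minimal sketch using the 5%-per-100-rem figure quoted above:

```python
# Minimal sketch of the LNT arithmetic: risk scales linearly with dose at
# roughly 5% added lifetime cancer risk per 100 rem. This is the hypothesis
# under discussion, not settled fact at low doses.

RISK_PER_REM = 0.05 / 100   # 5% per 100 rem

def lnt_added_lifetime_risk(dose_rem: float) -> float:
    """Added lifetime cancer risk under the LNT hypothesis."""
    return dose_rem * RISK_PER_REM

for dose in (0.01, 0.1, 1, 10, 100):   # rem
    print(f"{dose:7.2f} rem -> added lifetime risk {lnt_added_lifetime_risk(dose):.6%}")
```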

To use a simple analogy, think of the DNA damage in our cells as a variant on the bathtub problems we all got to solve in middle school algebra – the accumulation of DNA damage from whatever source is the water filling the tub, and the repair of this DNA damage (or the damage that occurs in inert sections of DNA) is the drain. If the rate of removal is the same as the rate of accumulation then there’s no net impact on the health of the organism. So the question is whether the normal rate of accumulation is enough to max out our DNA damage repair mechanisms, or whether our cells have residual repair capacity – and, if that residual capacity kicks in, whether it repairs exactly as much damage as was inflicted, a little bit more, or a little bit less. To use the tub analogy: if you have the faucet turned on full and the water level in the tub is holding steady, will pouring an additional stream of water into the tub cause it to overflow? If the drain is just barely keeping up with the influx then the tub will start to fill and will eventually overflow; otherwise it can accept a little more water without making a mess. So here’s the question – if we don’t know in advance the capacity of the drain, and if the answer is potentially a matter of life and death, what should we assume – the worst case or the best? Obviously, in the absence of any firm information it makes sense to assume the worst, and in the case of radiation risk this would be LNT. But when further knowledge becomes available it makes sense to adapt our hypothesis to make use of the new information.
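
The analogy is easy to turn into a toy model – the sketch below is mine, not anything from Calabrese’s papers, and the rates are arbitrary units. The repair “drain” removes up to a fixed amount of damage per step: if the influx stays below that capacity the residual damage is zero (a threshold response), and once it exceeds capacity, damage grows with the excess. LNT is the special case in which there is no spare capacity at all.

```python
# Toy bathtub model: damage flows in each step; repair removes up to a
# fixed capacity per step. All rates are arbitrary illustrative units.

def net_damage(influx: float, repair_capacity: float, steps: int = 100) -> float:
    damage = 0.0
    for _ in range(steps):
        damage += influx                         # new damage this step
        damage -= min(damage, repair_capacity)   # repair up to capacity
    return damage

for influx in (0.5, 0.9, 1.0, 1.1, 1.5):   # background + radiation
    print(f"influx {influx:.1f} vs capacity 1.0 -> "
          f"residual damage {net_damage(influx, 1.0):.1f}")
# Below capacity the residual is zero; above it, damage grows with the excess.
```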

This is precisely what Calabrese says some of the earliest researchers in this field failed to do – in fact, there seems to be evidence that they willfully ignored evidence that could have led to significant revisions to the use of the LNT hypothesis. The question is what “willfully ignored” means here: did the scientists choose not to include data that they felt were flawed, did they omit studies simply because the results contradicted their own, did they omit results to try to mislead the scientific community, or was it something else? In other words, did these scientists set out to deceive the scientific community (for whatever reason)?

At this point, with all of the early scientists dead, we can only guess at their intent or their motives. Calabrese lays out his case – quite convincingly – in two papers, summaries of which can be found online in the two pages linked to here. And for what it’s worth, while I’ve reached my own conclusions on this matter, I’m not sure whether or not I can approach the matter objectively, so rather than relate them here I think it’s better to simply refer you to Calabrese’s work so that you can draw your own conclusions from the information he lays out.

So what have we got? Well, for starters we have the issue of intellectual honesty. Did scientists overlook crucial research, or did they make a conscious decision to omit research that contradicted what they believed – or what they wanted – to be the truth? Did they make a mistake, did they deceive themselves, did they deceive others? Or were they right, but instead of arguing their case they chose to leave out information they felt to be irrelevant? Regardless of which of these possibilities is correct – even if those who first came up with the LNT hypothesis were right – we have to ask ourselves whether any of them was completely intellectually honest. The only option that gets these authors off the hook is if they were simply unaware of studies that contradicted the hypothesis they came up with. But even here they fall short, because it’s the scientists’ job to know about – and to discuss – these contrary studies, if only to demonstrate why the contrary studies are wrong. After reading Calabrese’s papers I find myself wondering about the intellectual honesty of the early scientists who developed the LNT hypothesis.

The other question we have to ask ourselves is whether or not it matters. Sometimes it doesn’t. Fibbing about the discovery of a new species of insect, for example, might not have much of an impact on our society. But the risk from low levels of radiation is different – it affects how we think about the risks from medical radiation, nuclear power, air travel, airport x-ray screening, radiological terrorism, and more. The use of radiation permeates our society, and the manner in which we control the risks from radiation is based on our perception of the risks it poses. Radiation protection is not cheap – if our radiation safety measures are based on a hypothesis that’s overly conservative then we are wasting money on protective measures that don’t gain us any added safety. It’s important to know if the hypothesis – LNT – is accurate, and it’s just as important to know whether or not it stands on solid intellectual foundations.

Where does the plutonium come from?

Last week I wrote about how the shortage of Pu-238 might impact the exploration of the outer Solar System, but I didn’t much get into where the plutonium comes from. After all, while there are trace amounts of natural plutonium, there certainly isn’t nearly enough to fuel a space probe. So this week it seemed as though it might be worth going over where we get our plutonium, if only to understand why NASA (or DOE) needs tens of millions of dollars to produce it.

On the Periodic Table plutonium sits two places beyond uranium – uranium has an atomic number of 92 (that is, it has 92 protons) while plutonium is at 94. To make plutonium, then, we somehow have to add two protons to a uranium atom. The way this happens is sort of cool – and there are different routes depending on the plutonium isotope being produced.

Making Pu-239, the nuclide used in nuclear weapons, is a fairly simple process. Natural uranium is over 99% U-238, which doesn’t fission all that well. Put the U-238 (which makes up at least 95% of the reactor fuel) into the middle of a reactor, which is seething with neutrons from uranium fission, and it will capture a neutron and turn into U-239. The U-239, in turn, decays by emitting a beta particle to become neptunium-239, which gives off another beta particle. Since each beta decay turns a neutron into a proton, these two beta decays suffice to turn a uranium atom into one of plutonium. Thus a single U-238 atom, absorbing a single neutron and being allowed to sit long enough to undergo two beta decays (a few weeks or so), will turn into a single atom of Pu-239. Making heavier plutonium nuclides is just as easy – when Pu-239 captures additional neutrons it turns into Pu-240, Pu-241, Pu-242, and more. Not only is it fairly easy, it happens all the time in any operating nuclear reactor.
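
As a sanity check on that “few weeks or so” figure, here’s a quick calculation using the published half-lives – U-239 decays with a half-life of about 23 minutes and Np-239 with one of about 2.36 days, so once irradiation stops the Np-239 step dominates and a plain exponential suffices:

```python
# Checking the "few weeks or so" figure: the chain
# U-239 (~23 min) -> Np-239 (~2.36 d) -> Pu-239 is dominated by the Np-239
# step once the short-lived U-239 is gone, so simple decay is good enough.

NP239_HALF_LIFE_DAYS = 2.356

def fraction_converted_to_pu239(days: float) -> float:
    """Fraction of the Np-239 present at shutdown that has decayed to Pu-239."""
    return 1 - 0.5 ** (days / NP239_HALF_LIFE_DAYS)

for days in (1, 7, 14, 21, 30):
    print(f"{days:3d} days: {fraction_converted_to_pu239(days):.1%} converted to Pu-239")
# By two to three weeks essentially all of the Np-239 has become Pu-239.
```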

OK – so we can see how simple neutron capture and patience can give us plutonium nuclides heavier than U-238, but this really doesn’t help us to make the Pu-238 needed to power a spacecraft. Making the lighter nuclide is a little more roundabout.

Remember that, through neutron capture, a reactor produces Pu-241. It turns out that Pu-241 also decays by beta emission, creating Am-241 – the stuff that’s used in smoke detectors (among other things). Am-241 is an alpha emitter and it decays to a lighter variety of neptunium (Np-237) which, when subjected to neutron irradiation, captures a neutron to become Np-238. One final transformation – a last beta decay – is the last step to producing Pu-238. This is the reason why Pu-238 is so expensive – making it requires two bouts of irradiation (the first long enough to produce the Pu-241), enough time for all of the radioactive decays to transform plutonium into americium and the americium into neptunium, and several steps of chemical processing to isolate the various elements of interest that are formed.
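
The pacing of this route is set by that first beta decay: Pu-241 has a half-life of about 14.3 years, so the Am-241 feedstock builds up over years rather than weeks. A quick sketch (Am-241’s own roughly 432-year half-life is slow enough to ignore here):

```python
# Why Pu-238 production takes patience: the Pu-241 -> Am-241 step runs on a
# ~14.3-year half-life, so the americium accumulates over years, not weeks.

PU241_HALF_LIFE_YEARS = 14.3

def fraction_decayed_to_am241(years: float) -> float:
    """Fraction of initial Pu-241 that has beta-decayed to Am-241."""
    return 1 - 0.5 ** (years / PU241_HALF_LIFE_YEARS)

for years in (1, 5, 10, 20):
    print(f"{years:2d} years: {fraction_decayed_to_am241(years):.1%} "
          "of the Pu-241 has become Am-241")
```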

Although it sounds convoluted (well, I guess it is convoluted), making Pu-238 is fairly straightforward. The science and engineering are both well known and well established, and its production certainly breaks no new scientific or technical ground. But the politics…that’s another matter altogether.

As I mentioned last week, the American Pu-238 production line shut down over two decades ago. Since then we’ve been buying it from the Russians, but they’ve got their own space program and have limited stocks to boot. So this option is not going to work for much longer, regardless of the future of US-Russian international relations.

A recent blog posting by Nuclear Watch suggested that the US might be able to meet its Pu-238 needs by dismantling nuclear weapons and by digging into its inventory of scrap Pu-238 – it notes that the Los Alamos National Laboratory (LANL) documents indicate that over 2000 RTGs’ worth of the nuclide can be recovered from nuclear weapons alone. But I’m not sure if I can accept this assertion, primarily because putting this nuclide into a nuclear weapon makes absolutely no sense. I can’t comment on the “scraps” of Pu-238 that LANL is said to have lying around, and unfortunately Nuclear Watch didn’t provide a link to the LANL documents they cited, making it difficult to check or to comment further. But if there is a Pu-238 stockpile at LANL it would certainly be nice to tap it for space exploration – not to mention the savings in disposal costs.

Yet another way to make Pu-238 is in a liquid fluoride thorium reactor (LFTR) – a reactor that uses naturally occurring thorium (Th-232) to breed U-233, which fissions quite nicely. Additional neutron captures can turn U-233 into Pu-238, which can be chemically separated from the fuel. There’s a lot more to the topic than this, but I covered the topic of thorium reactors fairly thoroughly last year (the first of these posts is at this URL, and there are three others in the same series) and it’s also covered on the Thorium Energy Alliance’s website. There are a lot of nice things about thorium reactors in addition to their being able to produce Pu-238, and it’s a technology that’s been worked out and tested – but the US shows no sign of building any of them anytime soon. India and China might develop extensive thorium reactor systems – but what these nations might do a decade or two in the future won’t do much for NASA in the next few years. The bottom line is that, however promising they might be for future needs, thorium reactors aren’t likely to help us send more spacecraft to the outer Solar System anytime soon.

So here’s where we stand. The US stopped producing the Pu-238 needed to run our deep-space probes and we’ve pretty much used up our stocks of the material. In the intervening years we’ve been buying Russian Pu-238, but that won’t be available for much longer, leaving us high and dry. There may be scraps of the material – possibly even stockpiles – at various DOE facilities, but dismantling nuclear weapons is probably not going to do the job. Over the long run thorium-cycle reactors might be a great way to make it, but these reactors aren’t operating anywhere in the world today and there are no American plans to build any of them anytime soon. That would seem to leave us with only three options – re-start our Pu-238 production line, find another way to make (or obtain) the material, or confine ourselves to the inner Solar System. As I mentioned last week, I sincerely hope we don’t go the last route. So let’s see what we can come up with – and let’s hope we don’t leave the solution (and decisions) too long.

Yucca Mountain: Questions and Concerns

Yucca Mountain raises a lot of controversy – let’s face it; if it didn’t then a 4-part series of blog postings would hardly be necessary. Part of the reason for the controversy is that there are a number of worries about the impact of spent fuel disposal on the environment and on the health of people living and working in the area and along the transportation routes. So to close this series out I’d like to tackle some of these concerns to see which hold water and which might be over-stated. One good place to find a number of these concerns is a website put together by the State of Nevada in 1998.

Radiation levels from the spent fuel will be dangerously high for millennia

This is true, but not really relevant to the question of safe disposal at Yucca Mountain because no person will ever come in contact with the spent fuel. As I mentioned in the last posting, the spent fuel will be locked away inside of heavy-duty casks that are designed to shield the radiation, reducing it to less than 10 mR/hr at a distance of 2 meters from the cask. As long as the fuel remains inside the casks the fact that the fuel itself is intensely radioactive doesn’t make a difference – nobody can be harmed by radiation to which they’re not exposed. And with regards to the spent fuel remaining inside the casks – remember that the casks are rugged; they’re designed to survive hits from speeding locomotives, and once they’re in place in the ground they’ll not even face that risk. Finally, while the fuel remains radioactive for millennia, the radiation levels fall off fairly quickly over time – after several decades (far less than the design lifetime for the waste site or for the casks) radiation dose rates are down to a fraction of the original levels.
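
To put rough numbers on that fall-off, suppose – purely for illustration – that after the first few decades the external dose rate is dominated by Cs-137, which has a 30.1-year half-life:

```python
# Rough illustration of how cask dose rates fall off once the short-lived
# fission products are gone. The assumption that Cs-137 dominates the
# external dose is a simplification for illustration.

CS137_HALF_LIFE_YEARS = 30.1
initial_dose_mr_per_hr = 10.0   # the 2-meter design limit quoted above

for decades in range(0, 11, 2):
    years = decades * 10
    dose = initial_dose_mr_per_hr * 0.5 ** (years / CS137_HALF_LIFE_YEARS)
    print(f"after {years:3d} years: ~{dose:5.2f} mR/hr")
# After 60 years the Cs-137-driven dose rate is down to about a quarter of
# its starting value; after a century, to roughly a tenth.
```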

Spent fuel contains intensely toxic plutonium

Also true, but again not as dire as it sounds. Yes – spent fuel contains plutonium, because plutonium is created when uranium-238 atoms capture neutrons during nuclear fission. And yes – plutonium is a very toxic heavy metal. But plutonium is hardly the most toxic element known to man – a toxicologist I used to work with could name a dozen substances that are more toxic (including shellfish toxins and fungal toxins). In fact, plutonium was administered to humans to help puzzle out how it acts and moves within the body, and those to whom it was administered remained alive and well (and yes, many of these tests would be considered unethical today and they have generated a ton of controversy – but that doesn’t change the fact that the testing did not harm those who were tested).

And let’s think for a moment about what has to happen for the plutonium in the fuel to reach a person who might be harmed by it. Groundwater would have to percolate down through the hundreds of feet of rock to reach the spent fuel containers. Then it would have to penetrate through the casks by corroding the metal and soaking through the concrete layers. Once inside the casks it would have to dissolve the fuel elements – including the highly insoluble plutonium – and would then have to escape again. Finally, it would have to carry the dissolved plutonium through another several hundred feet of rock to the water table and, once there, would have to carry it however many miles to the nearest human with a well sunk into the aquifer. Possible? Yes. Plausible – especially in time measured in millennia? Not really.
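
Just to make that reasoning concrete: if we treat those steps as roughly independent and assign each one a probability, the chance of the whole chain occurring is the product of the individual chances. The numbers below are made up purely for illustration – they come from no safety analysis – but they show how quickly a chain of individually unlikely events becomes vanishingly improbable.

```python
# Purely illustrative: hypothetical per-millennium probabilities for each
# step in the release chain described above. These are made-up numbers,
# not estimates from any safety analysis.
from math import prod

steps = {
    "water reaches the casks":         0.1,
    "water penetrates a cask":         0.05,
    "fuel and plutonium dissolve":     0.01,
    "solution escapes the cask":       0.05,
    "plutonium reaches the aquifer":   0.01,
    "a well draws contaminated water": 0.1,
}

p_chain = prod(steps.values())
print(f"Combined probability: {p_chain:.1e}")   # ~2.5e-9 with these toy numbers
```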

Geologic events – earthquakes or volcanic eruptions – can cause the casks to fail, speeding the release of radioactivity to the environment.

Let’s take the easy one first. Yucca Mountain is made of volcanic rocks and there have been volcanic eruptions in the American Southwest within the last several thousand years. So it is plausible to think that there might be more such eruptions in the next few tens of thousands of years. But there are two primary types of eruptions – those with lava and those without. The style of eruption in the American Southwest has historically been ashfall rather than lava – such an eruption would only serve to entomb the spent fuel even more deeply, and the ash itself is too cool to melt the spent fuel casks. Being immersed in lava is more likely to damage the casks, but the lava itself isn’t likely to flow as far as the Las Vegas suburbs. To expose people to elevated levels of radiation the lava would have to immerse the casks long enough to melt them and then continue flowing, carrying the fission products with it – and continue far enough to reach people. There have been lava flows that covered hundreds of miles, but none in the last several million years. So while it is plausible to think that volcanic eruptions might release radioactivity to the environment, the debris or lava are more likely to bury the waste even deeper than they are to release it.

Earthquakes are a little more problematic – the concern here is that an earthquake could open up new fractures, speeding the flow of water from the surface to the casks and from the casks to the water table. Another concern is that an earthquake could rupture the casks themselves and release radioactivity. Both of these are plausible – we know that earthquakes fracture rock and can disrupt groundwater flows, and they can certainly do so at Yucca Mountain; anything violent enough to fracture rock can certainly fracture a spent fuel cask as well. So it is plausible to think that an earthquake could cause radionuclides to be released from the spent fuel casks. But we also have to think about the odds that an earthquake will open a fracture that passes through the very rock – the exact part of the rock – that the casks are sitting in. It’s certainly possible, but the odds are against it.

Plutonium might leak out of the canisters and accumulate in a critical mass in the environment and explode

This is one of my favorites. Not only do we have to get the plutonium out of the casks (water leaking into the waste repository, penetrating into the casks, dissolving plutonium, making its way into the environment), but then enough of the plutonium has to precipitate out of solution in the same place – and under the correct conditions – to form a critical mass. And it’s important to understand that a critical mass is not something that will explode but simply something that will sustain a fission chain reaction under the right circumstances. Going through the steps to even get plutonium into the environment is challenging enough and not likely to happen. Precipitating the plutonium out of solution in a critical mass adds to the unlikelihood. And putting together something that could blow up is well-nigh impossible.

Putting all of the spent fuel – which contains plutonium – in one spot makes a tempting target for terrorists and is a proliferation risk

Putting all of the spent fuel in one place certainly increases the amount of plutonium in this one location. On the other hand, we also have to wonder if it’s better to have only one location at risk or the 50+ that exist today. We can make a good argument that it’s easier to guard and make impregnable a single location than to try to secure every reactor facility in the nation.

With regards to non-proliferation, anyone trying to make a nuclear weapon would first have to get to the spent fuel casks and would then have to either steal some very large and heavy casks or would have to open them up at the waste site and remove the fuel from them – actions that would be hampered by high radiation levels anytime in the next several decades. And did I mention getting the spent fuel offsite and out of the country? Then the putative terrorists (or infiltrators from a prospective nuclear power) would have to remove the fuel, chop it up and dissolve it in acid, and chemically process it to remove the plutonium. The bottom line is that no terrorist group has the resources to pull this off, and neither do most nations. So…possible? Well – maybe, from the standpoint that winning the lottery by buying a single ticket is possible. Plausible – nope. This is another one that just doesn’t pan out.

——————

I could go on and on – there have been tons of arguments raised about why spent reactor fuel shouldn’t be disposed of at Yucca Mountain. Some of these arguments – the one about plutonium leaking out, forming a critical mass, and going boom comes to mind – are either deliberately specious or a sort of worst-case wishful thinking; they will certainly not happen in the real world. Others – the possibility of an earthquake rupturing the spent fuel casks is one – are plausible, but the odds are very much against their happening. But here’s what it comes down to – we will be able to come up with lengthy lists of arguments both for and against putting spent reactor fuel just about anywhere. At some point we have to say one of three things:

  • We’re happy with the current situation and are going to stick with it forever,
  • We’re going to suck it up and put the waste in a location where our best science tells us it will be safe from any reasonable set of circumstances, or
  • We’re going to give up nuclear power and find some other way to generate the 20% of our electricity that now comes from nuclear plants.

The bottom line is that the entire nation benefits from the use of nuclear energy – again, 20% of our power is nuclear. There are ample places where the waste from these reactors can be stored with minimum risk to the environment or to people. We may never find a single location that we can certify as being “best” and the nit-pickers among us will always be able to find arguments – however irrational, specious, or ill-informed – that seem to militate against any particular site. But at some point the nation will need to find a place that, while perhaps not perfect, is good enough to meet our needs because it meets all reasonable criteria for waste disposal in the real world.

At some point, whether the nation is going to continue using nuclear energy or not, we are going to have to find a spot to put the spent reactor fuel that has accumulated and that is currently being stored across the nation. It makes sense that this location be someplace that is dry and under-populated, that is convenient to major transportation routes, and that is geologically and hydrogeologically suitable for isolating the waste while it is dangerous. These sites exist, and Yucca Mountain is one of them. I would suggest that the technical problems of long-term radioactive waste disposal are relatively minor – the natural nuclear reactor at Oklo has shown us that even wet and fractured rock can retain radioactive waste for eons – and that it is the political problems that have thus far proven insurmountable. But let’s not deceive ourselves – the seemingly scientific objections to Yucca Mountain are pretexts for the underlying political objections. It is politics, not science or engineering, that’s holding up our waste disposal solution. And until we can resolve these political problems we will continue to store our waste in a host of vulnerable locations scattered around the US.

Yucca Mountain – Packaging and Storing Radioactive Waste

So – thus far we’ve gone over a little of the history of the Yucca Mountain project and how both geology and hydrogeology can affect waste disposal. What I thought could be interesting today would be to talk a little about how the spent reactor fuel is packaged – both for transport and for disposal – because this is a third factor that has a profound impact on how well the waste can be isolated from the environment. Then, for the last installment in this series (next week) I’ll try to examine some of the claims both for and against the site to see how well they hold up.

To recap a little bit – fissioning a uranium atom splits it into two radioactive fission products, which accumulate as the reactor operates – adding more radioactivity as time goes on. At the same time, though, the fuel “burns up” its uranium – after a few years the concentration of fissionable atoms drops to the point where it’s time to swap out the spent fuel rods for new ones. The spent rods are intensely radioactive, so they’re normally stashed in spent fuel pools until they can cool off a bit – and in this case “cooling off” means thermally as well as radiologically, since the energy given off by the decaying fission products causes the spent fuel to heat up. But after a long enough time the fuel will cool off to the point at which it can be removed from the water and placed into huge casks that sit in storage yards at the reactor sites – this is called dry cask storage.

At some point – if Yucca Mountain or some other high-level waste repository opens up – either the dry casks will be used for transport or the spent fuel will be transferred to transport casks that will be loaded onto rail cars or trucks and relocated to their final resting place. It’s these casks that will also be the penultimate barrier between the radioactivity within and the environment, so they warrant a description.

First of all, the things are huge. I saw some in Lithuania about a decade ago and they looked to be at least 10 feet tall and 5 feet in diameter. And since the physics of uranium fission is the same around the world (reactor design varies somewhat from place to place, but not enough to make a huge difference for commercial reactors), the characteristics of spent reactor fuel are reasonably similar everywhere, as are the characteristics of the casks. In other words, the spent fuel casks in the US are huge as well.

In addition to providing protection to the spent fuel they are also designed to reduce radiation dose rates to an acceptable level – low enough to pose no risk to those sharing the road with the casks if they are transported by truck. But there’s a lot more to safely shipping waste than keeping rad levels down – the spent fuel casks must also be able to protect the waste while it’s in transit to the final disposal site, not to mention protecting it during its long millennia in storage. We’ll tackle these one at a time.

Spent fuel casks have to meet some stringent requirements to ensure that they don’t release highly radioactive fission products while they’re in transit to the final disposal site. Casks must be able to pass these tests without suffering a failure:

  • A 9-meter (30-foot) drop onto an unyielding surface
  • A 1-meter (40-inch) drop onto a 6-inch-diameter steel rod (the puncture test)
  • 30 minutes fully engulfed in an 800 degree C (1475 degree F) fire
  • 8 hours of immersion beneath 0.9 meters (3 feet) of water
  • 1 hour of immersion beneath 200 meters (655 feet) of water

These requirements are more than theoretical – in the 1970s Sandia National Laboratories tested some spent fuel containers with full-scale crashes to confirm that what looked good on paper and in the laboratory would work in real life. The most dramatic test was running a locomotive into a flatbed truck carrying a cask – the locomotive was pretty much destroyed; the cask, though damaged, survived and would not have leaked radioactivity into the environment. There’s a nice video on YouTube showing the locomotive test and others – these videos alone ought to allay any doubts about the ability of these casks to protect spent fuel while it’s en route to the disposal site.

Physical ruggedness is nice, but there’s more to keeping radioactive waste safe than protecting it from collisions – once delivered to the site the casks have to help keep the waste isolated from the environment for up to a million years and that takes a lot more than strength. Rust and corrosion will attack the strongest container – all they need are the right conditions and enough time to work. Not only that, but metals behave differently (and chemical reactions proceed more quickly) at higher temperatures – such as those produced by the decay of fission products. So the thermal effects also have to be factored in when designing the things.

So here’s the bad news about long-term disposal of spent reactor fuel – and the containers meant to hold it. Nobody knows how a container is going to hold up over even 100,000 years, let alone a million years (the time span required by EPA). We can do our best to design something with the lowest possible corrosion rate and we can do our best to design in a high level of structural strength – but no matter how we try to artificially age these materials in the lab we can only guess at their long-term performance. Let’s face it – all of human history is only about 5000 years and the Pyramids are younger than that. We can assert the longevity of our designed structures all we want, but we have no direct experience with anything so long-lived. Of course we can put other barriers in place as well – and likely will – but anything artificial suffers the same drawback, that all of human history is far shorter than the period of time for which we’re hoping to isolate the waste.
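
To see why the million-year requirement is so punishing, consider a simple linear extrapolation of corrosion loss. The corrosion rate and wall thickness below are hypothetical round numbers, not values from any actual cask design:

```python
# Even a corrosion rate that is negligible on human timescales eats through
# a container eventually. The rate and wall thickness are hypothetical
# round numbers for illustration, not values from any cask design.

corrosion_rate_um_per_year = 0.1    # hypothetical: 0.1 micron/year
wall_thickness_cm = 25.0            # hypothetical container wall

for years in (100, 10_000, 100_000, 1_000_000):
    loss_cm = corrosion_rate_um_per_year * years * 1e-4   # um -> cm
    status = "intact" if loss_cm < wall_thickness_cm else "breached"
    print(f"{years:>9,} years: {loss_cm:7.2f} cm lost ({status})")
# At 0.1 um/yr the wall loses 10 cm in a million years -- survivable here,
# but a rate just 3x higher would breach it, and no lab test can pin down
# corrosion rates that precisely over such spans.
```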

On the other hand, the engineered packages aren’t the only barrier between the radioactive waste and the environment – and we actually do have one data point about the ability of rock to hold radioactive waste for prolonged periods of time. What we have is the remnants of a natural nuclear reactor that achieved criticality in what is now the nation of Gabon (in Western Africa) about two billion years ago. The details of how the reactor (called the Oklo reactor) formed and operated are fascinating, but there’s not enough room in this posting to go into them. For the purposes of this post, suffice it to say that in two billion years, virtually all of the fission products have remained in place. This is in spite of the reactor zone being located in fractured and porous sandstone that was below the water table more often than not – in fact, if the reactor zone were not completely saturated with water the reactor could never have operated. So – remembering the last two posts – porous and water-saturated rock are not well-suited for waste disposal. But in spite of this, the fission products have remained in place for two billion years. This bodes well for the ability of Yucca Mountain (or whatever location ends up with the spent fuel repository) to safely isolate the waste until it decays to stability.

So here’s the bottom line with regards to the waste containers. First, they certainly seem capable of safely storing spent reactor fuel for the length of time that it’s stored at the reactor plants, and multiple tests have shown they can protect the waste while it’s en route to wherever it will be disposed of. But no matter how well we design the containers – no matter how convincing our computer models and calculations might be – there’s no guarantee that they’ll last the million years that is the current standard for the waste site. That doesn’t mean, though, that Yucca Mountain is incapable of storing radioactive waste safely for that length of time – the natural nuclear reactor at Oklo shows that even radioactive waste stored in porous, water-saturated sandstone can remain in place for eons. This bodes well for the Yucca Mountain site’s ability to retain our radioactive waste for a paltry million years or so.
