The dose makes the poison

October 8, 2013

One of the most potent arguments against all things nuclear is the idea that even a vanishingly small amount of radiation exposure has a chance of causing cancer. Even if that risk is incredibly low there’s still a risk, and if a huge number of people are exposed to even a small risk then there could be a significant number of deaths. Say, for example, that the entire population of the US were exposed to something that carried a risk of one in a million – more than 300 people could die nationally.

We can debate whether or not we could “see” these deaths using epidemiology (with over 500,000 cancer deaths annually in the US, even as many as 400 additional cancer deaths crammed into a single year would represent an increase of less than one tenth of one percent). But that’s not the point of this posting. Rather, the point is to discuss two fascinating papers on the origins of the hypothesis that any incremental amount of radiation exposure can increase our risk of developing cancer, and that this added risk increases linearly with the amount of exposure – what is known as the Linear No-Threshold (LNT) hypothesis. Specifically, the author of these papers, respected University of Massachusetts toxicologist Edward Calabrese, presents a compelling case that the acceptance of this hypothesis as the basis of global radiation regulations is the result of a deliberate campaign that ignored a great deal of scientific evidence to the contrary. But first let’s back up a little bit to discuss what LNT is and how it’s used before digging into this matter and what it might mean.
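The back-of-the-envelope arithmetic above can be checked in a few lines of Python. This is a sketch – the population and cancer-death figures are rough 2013-era values, not precise statistics:

```python
# Rough population-risk arithmetic, as a sketch. All inputs are
# approximate 2013-era figures for illustration only.

us_population = 316_000_000     # approximate 2013 US population
individual_risk = 1e-6          # hypothetical one-in-a-million risk
annual_cancer_deaths = 500_000  # approximate annual US cancer deaths

# Expected additional deaths if everyone is exposed to the small risk
expected_deaths = us_population * individual_risk

# How large that is relative to the existing cancer-death background
relative_increase = expected_deaths / annual_cancer_deaths

print(f"Expected additional deaths: {expected_deaths:.0f}")
print(f"Relative increase: {relative_increase:.4%}")
```

The second figure is what makes the epidemiology so hard: a few hundred extra deaths amount to well under a tenth of a percent of the existing background, far below what population statistics can resolve.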

When ionizing radiation passes through a cell there’s a chance that it will interact with the atoms in that cell – it can strip electrons from neutral atoms, creating an ion pair. Where once there was a happy, electrically neutral atom there are now two charged particles: the positively charged atom and the negative electron ejected by the radiation. Once formed, the ions might recombine, in which case the story is over. But the ions can also interact with other atoms and molecules in the cell, forming free radicals that can then go on to interact with DNA in the cell’s nucleus. Sometimes these interactions cause DNA damage.

Of course, damaging DNA is only the first step in a process that might lead to cancer, but it’s most likely that nothing will happen. It could be that the damage is repaired by one or more of our exceptionally capable DNA repair mechanisms, and it’s also possible that any unrepaired damage will be in a stretch of “junk” DNA or in a gene that’s inactive in the affected cell. This is described in greater detail in an earlier posting in this series – for the purpose of this one, it’s safe to skip to the end, which is that the overwhelming majority of DNA damage is either repaired or has no impact on the organism (damage to junk DNA or to an inactive gene can’t go on to cause cancer). It’s only the unrepaired (or mis-repaired) DNA damage – and only damage that’s in one of very few specific genes – that can progress to a cancer.

There’s more to the whole matter than this. For example, our cells are always experiencing DNA damage at quite substantial rates – one estimate is that each cell is subject to several million DNA-damaging events per year – and the damage due to radiation is indistinguishable from that caused by other agents. So to calculate the risk from a particular dose of radiation we’ve got to first understand how much DNA damage that dose will cause, then determine how much of the damage goes unrepaired (or mis-repaired), compare this level of damage to the background damage that is always afflicting our cells, and finally figure out whether or not the damage will affect one of the few genes that can progress towards cancer. The important part of this is that DNA damage due to radiation doesn’t occur in a vacuum – it adds to the damage that is already occurring. It takes a dose of about 100 rem to double the amount of damage that occurs in a year – a dose that will increase a person’s lifetime cancer risk by about 5% according to the current thinking. This relationship is well-accepted at radiation doses in excess of about 10 rem over a lifetime (5 rem if the exposure takes place in a very short period of time); the question is whether or not it remains constant at any level of radiation exposure, no matter how slight. This is where we get to Calabrese’s recent work.
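The dose-response relationship described above – roughly 5% of added lifetime cancer risk per 100 rem, extrapolated linearly all the way down to zero – can be written as a one-line model. This is a sketch; the function name and constant are illustrative, not drawn from any regulatory standard:

```python
# A minimal sketch of the linear no-threshold (LNT) dose-response model:
# excess lifetime cancer risk scales linearly with dose, with no
# threshold below which the risk is zero.

RISK_PER_REM = 0.05 / 100  # ~5% excess lifetime risk per 100 rem

def lnt_excess_risk(dose_rem: float) -> float:
    """Excess lifetime cancer risk under the LNT hypothesis."""
    return dose_rem * RISK_PER_REM

# The well-supported high-dose regime
print(lnt_excess_risk(100))
# The contested low-dose extrapolation: even 0.1 rem carries
# a nonzero computed risk under LNT
print(lnt_excess_risk(0.1))
```

The whole controversy lives in that low-dose call: the linear form is well-supported above roughly 10 rem, but below that the straight line is an assumption rather than a measurement.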

To use a simple analogy, think of the DNA damage in our cells as a variant on the bathtub problems we all got to solve in middle school algebra – the accumulation of DNA damage from whatever source is the water filling the tub, and the repair of this DNA damage (or the damage that lands in inert sections of DNA) is the drain. If the rate of removal is the same as the rate of accumulation then there’s no net impact on the health of the organism. So the question is whether the normal rate of accumulation is enough to max out our DNA damage repair mechanisms, or whether our cells have residual repair capacity – and, beyond that, whether any residual capacity, once called into action, repairs as much damage as was inflicted, a little more, or a little less. In terms of the tub: if the faucet is turned on full and the water level is holding steady, will pouring an additional stream of water into the tub cause it to overflow? If the drain is just barely keeping up with the influx then the tub will start to fill and will eventually overflow; otherwise it can accept a little more water without making a mess. So here’s the question – if we don’t know in advance the capacity of the drain, and if the answer is potentially a matter of life and death, what should we assume – the worst case or the best? Obviously, in the absence of any firm information, it makes sense to assume the worst, and in the case of radiation risk that worst case is LNT. But when further knowledge becomes available it makes sense to adapt our hypothesis to make use of the new information.
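The bathtub analogy can be rendered as a toy simulation. All of the rates below are invented for illustration – this is not a biological model – but it shows the key qualitative point: whether a small extra dose matters depends entirely on whether the "drain" has spare capacity:

```python
# A toy rendering of the bathtub analogy. All numbers are invented
# for illustration; this is not a model of real DNA repair.

def unrepaired_backlog(background_rate: float, extra_rate: float,
                       repair_capacity: float, steps: int) -> float:
    """Net unrepaired damage after `steps` time steps.

    Damage flows in at background_rate + extra_rate per step (the faucets);
    repair removes at most repair_capacity per step (the drain).
    """
    backlog = 0.0
    for _ in range(steps):
        backlog += background_rate + extra_rate        # water in
        backlog = max(0.0, backlog - repair_capacity)  # drain, never below empty
    return backlog

# Drain has spare capacity: the small extra dose adds nothing. -> 0.0
print(unrepaired_backlog(background_rate=10, extra_rate=1,
                         repair_capacity=12, steps=100))
# Drain already at its limit: the same extra dose overflows. -> 100.0
print(unrepaired_backlog(background_rate=10, extra_rate=1,
                         repair_capacity=10, steps=100))
```

The first case behaves like a threshold response (small doses are harmless), the second like LNT (every increment adds to the backlog) – which is exactly the empirical question about repair capacity that the paragraph above poses.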

This is precisely what Calabrese says some of the earliest researchers in this field failed to do – in fact, there seems to be evidence that they willfully ignored evidence that could have led to significant revisions to the use of the LNT hypothesis. The question is what “willfully ignored” means here: did the scientists choose not to include data they felt were flawed, did they omit studies simply because the results contradicted their own, did they omit results to try to mislead the scientific community, or was it something else? In other words, did these scientists set out to deceive the scientific community (for whatever reason)?

At this point, with all of the early scientists dead, we can only guess at their intent or their motives. Calabrese lays out his case – quite convincingly – in two papers, summaries of which can be found online in the two pages linked to here. And for what it’s worth, while I’ve reached my own conclusions on this matter, I’m not sure whether or not I can approach the matter objectively, so rather than relate them here I think it’s better to simply refer you to Calabrese’s work so that you can draw your own conclusions from the information he lays out.

So what have we got? Well, for starters we have the issue of intellectual honesty. Did scientists overlook crucial research, or did they make a conscious decision to omit scientific research that contradicted what they believed – or what they wanted – to be the truth? Did they make a mistake, did they deceive themselves, did they deceive others? Or were they right, but instead of arguing their case chose to leave out information they felt to be irrelevant? Regardless of which of these possibilities is correct – even if those who first came up with the LNT hypothesis were correct – we have to ask ourselves whether any of them was completely intellectually honest. The only option that gets these authors off the hook is if they were simply unaware of studies that contradicted the hypothesis they came up with. But even here they fall short, because it’s the scientists’ job to know about – and to discuss – these contrary studies, if only to demonstrate why the contrary studies are wrong. After reading Calabrese’s papers I find myself wondering about the intellectual honesty of the early scientists who developed the LNT hypothesis.

The other question we have to ask ourselves is whether or not it matters. Sometimes it doesn’t. Fibbing about the discovery of a new species of insect, for example, might not have much of an impact on our society. But the risk from low levels of radiation is different – it affects how we think about the risks from medical radiation, from nuclear power, from air travel, from airport x-ray screening, from radiological terrorism, and more. The use of radiation permeates our society, and the manner in which we control its risks is based on our perception of the risks it poses. Radiation protection is not cheap – if our radiation safety measures are based on a hypothesis that’s overly conservative then we are wasting money on protective measures that don’t gain us any added safety. It’s important to know whether the hypothesis – LNT – is accurate, and it’s just as important to know whether or not it stands on solid intellectual foundations.

The post The dose makes the poison appears on ScienceWonk, FAS’s blog for opinions from guest experts and leaders.

Categories: Basic Science, Calabrese, Health, LNT, Public Safety, Radiation, radiation dose response, Risk, ScienceWonk Blog