How dangerous is a little radiation?

“There is always danger for those who are afraid."
--George Bernard Shaw--

I strongly believe that the Linear No Threshold (LNT) radiation risk model is as right as it needs to be for the vast majority of folk (nuclear power scientists, medical physicists and public health officials all included). Basically, unless you're a radiobiologist it is almost certainly fine for you. Alas, both anti- and pro-nuclear folk play fast and loose with the facts when it comes to the LNT model, pressing it into service in ways it was never intended for or attacking it for not covering effects that aren't relevant to it.

Let's start with the basics. The interaction of radiation with matter on an atomic scale is stochastic (random) in nature. In even one living cell the biological processes these stochastic interactions can kick-start are numerous and immensely complex. In multiple cells this is further complicated by inter-cell signalling. Even more complex effects can be triggered at the level of tissues, organs or organisms. This complexity, coupled with the inherent variability of living organisms, means it is currently impossible to predict the precise biological result of a low dose of radiation even on a simple organism from the "ground up".

However, epidemiology (the study of the distribution and determinants of health-related events) and retrospective analysis come to rescue us from our scientific ignorance. In the past, various groups of people have been accidentally or deliberately (atomic bomb survivors and radiotherapy patients respectively) exposed to radiation in various amounts. If we compare these exposed groups to similar unexposed "control" groups, it is reasonable to claim the radiation is responsible for any difference in health effects. There are hundreds of these studies and this post is long enough without going through them all, so I will summarize their outcomes in four points.

  1. Radiation increases risk of occurrence of cancer. 
  2. The dose response is linear, ie twice the dose is twice the risk. 
  3. No effect can be observed below 100 mSv. 
  4. Above a few Sv the linearity breaks down as deterministic effects become the main factor and the death rate becomes very high. 

The simplest model to fit these results is a straight line (the linear being the L in LNT). As no dose must mean no effect, we constrain this line to pass through zero (the no threshold being the NT in LNT). The result is something like that shown below.

A plot of made-up data demonstrating how the LNT is formed
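
As a toy illustration (my own sketch, not from any standards body), the whole LNT model boils down to one line of code. The slope here is an illustrative assumption; 0.1 per Sv simply matches the 1-in-1000-at-10-mSv figure quoted later in this post:

```python
def lnt_excess_risk(dose_sv, slope_per_sv=0.1):
    """Excess cancer risk under a linear no-threshold model.

    slope_per_sv is an assumed, illustrative coefficient, not a
    measured one: risk is simply proportional to dose, with zero
    dose giving exactly zero excess risk (the "no threshold" part).
    """
    return slope_per_sv * dose_sv

# No dose, no effect; twice the dose, twice the risk.
no_dose = lnt_excess_risk(0.0)          # exactly 0
ten_msv = lnt_excess_risk(0.01)         # roughly 1 in 1000 at 10 mSv
```

The model has no memory and no shape: dose rate, fractionation and tissue type are all ignored, which is exactly the simplification discussed next.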
Now there are many limits on this, not least that the temporal and spatial distribution of the radiation is ignored. Clearly a massive dose to your foot spread over 30 years is somewhat less dangerous than a similar exposure delivered as an intense short blast to your lung. Points 3 and 4 refer to the extreme limits of the model. Point 4 is somewhat obvious and rarely contested: for example, 10 Sv may have a 95% chance of killing you, but 20 Sv won't have a 190% chance.*

Point 3 is the contentious one, as people argue over whether an effect is still there, hidden in the statistics, or simply does not exist. Many groups have proposed and demonstrated (at least in cells) various effects which call into question the validity of the LNT model at very low doses. Off the top of my head, and by no means a comprehensive list of such effects:

  1. Bystander (non-irradiated cells dying solely because they happen to be near irradiated cells) 
  2. Hormesis (cells becoming more resistant to radiation if primed with previous radiation) 
  3. Low-dose hypersensitivity (increased cellular apoptosis, ie cell death, seen at low doses) 
  4. DNA repair (the natural ability of cells to fix sufficiently sparse DNA damage) 

All these could alter the dose response away from the simple shape predicted by the LNT. However, none of these effects has been conclusively proven to affect the cancer induction risk of humans exposed to low doses of radiation. To show why, and to give an example of how complex all this is to unravel, consider the effect of DNA repair.

Low levels of damage to DNA caused by radiation are generally repaired by cellular mechanisms. That is a good thing and protects you from cancer, right? It does, but repair is not perfect** (if it were, cancer occurrence rates would not increase as we age, yet they do). So the very slight increase in repair activity caused by low radiation doses could mean a very slight increase in the rate of repair mistakes, any of which could ultimately lead to cancer.

What about the bystander effect? This seems like a bad thing: more cells dying than are actually exposed. Well, it is bad if you consider cellular radiation damage, but it is not necessarily bad from a cancer induction risk viewpoint. The bystander effect may actually have evolved to signal to cells that they are at risk of DNA damage and so had better die rather than risk DNA misrepair and becoming cancerous.

I am simplifying a massive field here and both those points can be argued either way a lot more forcefully. However, the take-home point is that the radiobiology really is not simple. Sure, you can shout about your cell-line-based results that show low dose damage is perfectly fixed in a few hours. Unfortunately this does not prove anything. A small percentage of perfectly healthy unirradiated cells will become cancerous given enough time, due to the accumulated damage from imperfect repair of DNA damage. You would have to grow hundreds of generations of cells to have even a chance of seeing this. Even if you did undertake such an experiment, the confounding factors associated with cell culturing would make translating the results to human risk almost impossible.

Radiobiology experiments can hint that you might want to look into something, but it is only epidemiology that can definitively prove or dismiss a worry. Unfortunately epidemiology has a severe inherent weakness. If you look at the previous image you will notice that the error bars become much bigger than the plotted values as the dose drops. As any effect gets smaller you would ideally have the error bars shrink accordingly. However, the error bars in epidemiology are set by the error (or uncertainty) associated with the statistical sampling, which is given roughly by the square root of the number of individuals in the analysed population. For a population of 100 the sampling uncertainty comes out at 10 people.

Say you selected 100 people at random from the UK population and counted the number with blue eyes. If you counted 50 people, you could then estimate the whole UK population's blue eye rate to be 50/100 or 50%. Your sampling uncertainty of 10/100 or 10% would then need to be accounted for, giving you a final estimated result of 50+/-10%. If you repeated this experiment, this time randomly selecting 10,000 people, you might count 5,340 or 53.4% blue-eyed folk. The sampling uncertainty is 100/10,000 or 1% this time, so your final result would be 53.4+/-1%. A million people might work out at 53.73+/-0.1%, and so on. Of course in any real measurement other factors and more subtle statistical effects may make the error/uncertainty different, but this method provides a good rule of thumb.
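
That rule of thumb is easy to mechanise. A minimal sketch (the function name is my own invention, and it uses the post's rough sqrt(N) rule rather than a full binomial treatment):

```python
import math

def rate_with_uncertainty(count, sample_size):
    """Estimate a population rate from a sample, with the rough
    sqrt(N)/N sampling uncertainty described above.
    Both values are returned as percentages."""
    rate = 100.0 * count / sample_size
    uncertainty = 100.0 * math.sqrt(sample_size) / sample_size
    return rate, uncertainty

print(rate_with_uncertainty(50, 100))        # (50.0, 10.0)  ->  50 +/- 10%
print(rate_with_uncertainty(5340, 10_000))   # (53.4, 1.0)   ->  53.4 +/- 1%
```

Note how the absolute uncertainty falls only as the square root of the sample size: a hundred times more people buys you just ten times the precision.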

Hence the smaller the effect you want to measure, the bigger your analysed group needs to be. To distinguish an effect that occurs in 1 in 1,000 people requires a measurement with statistical uncertainty below this, which requires a sample population of at least a million. Thus to definitively state the dose response down to a level of 1 induced cancer per 1,000 (the LNT predicts this at 10 mSv) we would need to track 2 million individuals (half controls, half exposed) for their whole lives. Even then this is a lower limit, and in practice you generally need larger groups***. You also have to be careful if you want randomization to take care of the much more influential lifestyle factors. For example, Iran has higher background radiation than Sweden. However, even if we tracked every individual in both countries, a simple cancer induction rate comparison would be swamped by socio-economic effects, which would likely have a much greater effect and uncertainty than any effect due to background radiation.
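
The arithmetic behind that cohort-size claim can be written down directly (a hypothetical helper of my own, again using the sqrt(N) rule of thumb):

```python
def min_cohort_size(one_in):
    """Cohort size at which the sqrt(N)/N sampling uncertainty shrinks
    to match a 1-in-`one_in` effect: sqrt(N)/N = 1/one_in, so N = one_in**2."""
    return one_in ** 2

# A 1-in-1000 effect needs a million people in the exposed arm,
# so two million once you add an equally sized control group.
print(min_cohort_size(1000))   # 1000000
```

The quadratic scaling is the killer: an effect ten times smaller needs a cohort a hundred times larger, which is why low dose epidemiology runs out of road so quickly.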

Some may counter that an animal experiment could address this problem. It could, and the strictly controlled habitat could certainly reduce some of the lifestyle complications. However, a lab animal is not a good model of a human with regard to the risk of long-term cancer induction (many cancers are induced over time periods exceeding animal life spans). Given this induction risk is the only mechanism that can cause death at low dose rates, that's somewhat terminal to this experiment. Even if you could somehow correct for this (perhaps by using a different surrogate measurement of damage), tracking two million mice for a year at current lab prices would cost at a minimum something like £100 million. Never mind the fact that no ethics group worth a damn would authorize such an extreme experiment. In a nutshell, an animal-experiment-based test of the LNT is too difficult, too costly, too slow and not interesting enough to perform.

So is this currently unfalsifiable LNT model inherently unscientific? In a way yes, but all other low dose models encounter exactly the same problem. You simply cannot select which is valid based on the results available. Consider the difference between the LNT, LT (linear with threshold), hypersensitivity and hormesis based low dose risk response models shown below.

Demonstration that the statistics are insufficient to resolve differing models

You can see by looking at the error bars that for any measurement you would need a minimum of 100,000 participants to have any chance of testing for a difference. Even then, that is only because I made the models vary so drastically; more subtly different models might need a sample size of a million or even 10 million. In practice, even statistically combining all the currently available measurements, we do not have enough data to discriminate between low dose response models. It is hard to envisage any measurement or situation (barring global thermonuclear war) that would change this.
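
To make that concrete, here is a rough sketch (every rate below is invented purely for illustration) of why a 10,000-person arm cannot separate the LNT from a threshold model:

```python
import math

BASELINE = 0.33      # assumed "natural" lifetime cancer rate (~33 in 100)
SLOPE = 0.1          # assumed linear slope per Sv, illustration only
THRESHOLD_SV = 0.1   # assumed threshold for the LT model, illustration only

def lnt_rate(dose_sv):
    """Cancer rate under a linear no-threshold model."""
    return BASELINE + SLOPE * dose_sv

def lt_rate(dose_sv):
    """Cancer rate under a linear-with-threshold model."""
    return BASELINE + SLOPE * max(0.0, dose_sv - THRESHOLD_SV)

n = 10_000    # people in the exposed arm
dose = 0.05   # 50 mSv, below the assumed threshold

expected_difference = (lnt_rate(dose) - lt_rate(dose)) * n  # ~50 people
sampling_uncertainty = math.sqrt(n)                         # 100 people

# The two models predict counts about 50 apart, but the sqrt(N)
# sampling noise is 100 people: the curves are indistinguishable
# at this cohort size.
```

With these made-up numbers the predicted gap between the models is only half the sampling noise, so no single study of this size could tell them apart.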

As such we are left with various highly reasonable physics- and biology-based low dose response models with absolutely no way of verifying which is right or wrong. Occam's razor would certainly lean us towards the LNT, but the only accurate statement is that below 100 mSv we simply don't know the shape of the dose response. However, whilst the shape of the dose curve is unknown, it is not unrestrained. We may not be able to confirm the shape of the dose response, but we know any effect is small, as otherwise it would be detectable.

This is the crucial point I think people need to remember. By the time the exposure has got low enough that the validity of the LNT is questionable (<100 mSv), the predicted cancer induction risk is absolutely tiny (<1 in 100). Compare this added risk to the "natural" cancer induction rate (~33 in 100) and you see immediately it is negligible and totally swamped by other lifestyle factors. To be frank, around two-thirds of a million people die from non-radiation-related cancer every month. Worrying about cancer induction from low levels of radiation while other factors like:

  1. Smoking tobacco 
  2. Having a bad meat based diet 
  3. Not doing enough exercise 
  4. Being obese

are simply ignored is not only illogical, it is deeply harmful. The time and money spent on removing or protecting against negligible radiation could be better spent on any of the above factors or on buying improved cancer treatment equipment. When and IF the background risk of cancer death is reduced to the level at which the low dose LNT predictions are relevant, then maybe we should look at them. We are nowhere near that situation right now.

I am a proponent of nuclear power (for environmental reasons) and I am "pro" using the LNT. If that strikes you as odd in any way, reread what I wrote. Being pro LNT should translate as: I understand the LNT's validity and limits, and I fully accept that its predictions are pointless at low doses. If you are vehemently anti LNT and want to argue about the usefulness of the LNT, my simple response is "show me any epidemiological evidence that shows your proposed (more complex) model is better."

* The numbers of healthy individuals exposed to these high doses since the advent of modern medicine are, happily, so low that you can debate the exact figures. However, the dose needed to kill 50% of people is generally accepted to be about 4.5 Sv.
** To be fair, it is pretty damn close to perfect. If it wasn't, we would all get cancer quite quickly, as 2-3% of all metabolised oxygen releases DNA-damaging free radicals.
*** If you do the full calculation you find you need to track 5 million, but the point remains the same.