ACHRE Report

Introduction


The Atomic Century

Before the Atomic Age: "Shadow Pictures," Radioisotopes, and the Beginnings of Human Radiation Experimentation

The Manhattan Project: A New and Secret World of Human Experimentation

The Atomic Energy Commission and Postwar Biomedical Radiation Research

The Transformation in Government-Sponsored Research

The Aftermath of Hiroshima and Nagasaki: The Emergence of the Cold War Radiation Research Bureaucracy

New Ethical Questions for Medical Researchers

Conclusion

The Basics of Radiation Science

What Is Ionizing Radiation?

What Is Radioactivity?

What Are Atomic Number and Atomic Weight?

Radioisotopes: What Are They and How Are They Made?

How Does Radiation Affect Humans?

How Do We Measure the Biological Effects of External Radiation?

How Do We Measure the Biological Effects of Internal Emitters?

How Do Scientists Determine the Long-Term Risks from Radiation?

How Do Scientists Determine the Long-Term Risks from Radiation?

Where did the risk estimates in this report come from?

Throughout this report, the reader will find numerous statements estimating the risks of cancer and other outcomes to individuals exposed to various types of radiation. These estimates were obtained from various scientific advisory committees that have considered these questions in depth.[107] Their estimates in turn are based on syntheses of the scientific data on observed effects in humans and animals.

How are risk estimates expressed?

Epidemiologists usually express the risk of disease in terms of the number of new cases (incidence rate) or deaths (mortality rate) in a population in some period of time. For example, an incidence rate might be 100 new cases per 100,000 people per year; a mortality rate might be 15 deaths per 100,000 people per year. These rates vary widely by age, conditions of exposure, and various other factors. To summarize this complex set of rates, government regulatory bodies often consider the lifetime risk of a particular outcome like cancer. When relating a disease, such as cancer, to one of its several causes, a more useful concept is the excess lifetime risk expected from one particular pattern of exposure, such as continuous exposure to 1 rad per year.
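To make these definitions concrete, the short Python sketch below uses made-up counts (not figures from this report) to compute an incidence rate per 100,000 person-years and a deliberately crude excess lifetime risk; it ignores age structure, latency, and competing causes of death, all of which real risk models account for.

```python
# Illustrative sketch only: the populations, counts, and rates below are
# hypothetical and are not taken from the report.

def rate_per_100k(cases, person_years):
    """Incidence (or mortality) rate per 100,000 person-years."""
    return cases / person_years * 100_000

# Hypothetical exposed and unexposed groups, each followed for 100,000 person-years.
exposed_rate = rate_per_100k(cases=120, person_years=100_000)     # 120 per 100,000 per year
background_rate = rate_per_100k(cases=100, person_years=100_000)  # 100 per 100,000 per year

# A crude excess lifetime risk: the excess annual rate accumulated over a
# nominal 70 years at risk, ignoring aging, latency, and competing causes.
excess_annual_rate = (exposed_rate - background_rate) / 100_000
crude_excess_lifetime_risk = excess_annual_rate * 70

print(f"excess annual rate: {excess_annual_rate:.5f} per person per year")
print(f"crude excess lifetime risk: {crude_excess_lifetime_risk:.3f} (about 1.4 percent)")
```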

It is well established that cancer rates begin to rise above the normal background rate only some time after exposure; this delay, known as the latent period, varies with the type of cancer and other factors such as age. Even after the latent period has passed and radiation effects begin to appear, not all effects are due to radiation. The excess rate may still vary by age, latency, or other factors, but for many cancers it tends to be roughly proportional to the rate in the general population. This is known as the constant relative risk model, and the ratio of rates at any given age between exposed and unexposed groups is called the relative risk. Many advisory committees have based their risk estimates on models for the relative risk as a function of dose and perhaps other factors. Other committees, however, have based their estimates on the difference in rates between exposed and unexposed groups, a quantity known as the absolute risk. This quantity also varies with dose and other factors, but when this variation is appropriately accounted for, either approach can be used to estimate lifetime risk.
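The following sketch, again with hypothetical background rates and a hypothetical relative risk, illustrates the distinction: under a constant relative risk model the ratio of rates is the same at every age, while the absolute excess (the difference in rates) grows with the rising background rate.

```python
# Hypothetical background cancer rates (per 100,000 per year) by age group,
# and a constant relative risk of 1.4 in the exposed group at every age.
background_rates = {"50-59": 300.0, "60-69": 700.0, "70-79": 1400.0}
relative_risk = 1.4

for age_group, background in background_rates.items():
    exposed = background * relative_risk       # rate in the exposed group
    absolute_excess = exposed - background     # rate difference ("absolute risk")
    print(f"ages {age_group}: relative risk = {relative_risk:.1f}, "
          f"absolute excess = {absolute_excess:.0f} per 100,000 per year")

# The ratio is fixed by assumption, but the rate difference grows with age
# because it is proportional to the rising background rate.
```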

What are the types of data on which such estimates are based?

Human data are one important source, discussed below. Two other important sources of scientific data are experiments on animals and on cell cultures. Because both types of research are done in laboratories, scientists can carefully control the conditions and many of the variables. For the same reason, the experiment can be repeated to confirm the results. Such research has contributed in important ways to our understanding of basic radiobiological principles. It also has provided quantitative estimates of such parameters as the relative effectiveness of different types of radiation and the effects of dose and dose rate. In some circumstances, where human data are limited or nonexistent, such laboratory studies may provide the only basis on which risks can be estimated.

Why are human data preferable to data on animals or tissue cultures for most purposes?

Most scientists prefer to base risk estimates for humans on human data wherever possible. This is because in order to apply animal or tissue culture data to humans, scientists must extrapolate from one species to another or from simple cellular systems to the complexities of human physiology. This requires adjusting the data for differences among species in life span, body size, metabolic rates, and other characteristics. Without actual human data, such extrapolation cannot rule out unknown factors that may also be at work. It is not surprising that there is no clear consensus on how to extrapolate risk estimates from one species to another. This problem is not unique to radiation effects; there are countless examples of chemicals having very different effects in different species, and humans can differ quite significantly from animals in their reaction to toxic agents.

How have human data been obtained?

There are serious ethical issues with conducting experiments on humans, as discussed elsewhere in this report. Most of the human data used to estimate risks, however, not just risks from radiation, come from epidemiologic studies of populations that have already been exposed in various ways. For radiation effects, the most important human data come from studies of the Japanese atomic bomb survivors carried out by the Radiation Effects Research Foundation (formerly the Atomic Bomb Casualty Commission) in Hiroshima. Other valuable sources of data include various groups of medically exposed patients (such as radiotherapy patients) and occupationally exposed workers (such as the uranium miners, discussed in chapter 12).[108]

Why is it necessary to compare exposed populations with unexposed populations?

Unlike a disease caused by an identifiable bacterium, a cancer caused by radiation carries no "signature" yet found in the cancerous tissue that would link it definitively to prior radiation exposure. Radiogenic cancers are identical to cancers occurring in the general population in properties such as appearance under a microscope, growth rate, and potential to metastasize. Finding cancers in an exposed population is not enough to prove they are due to radiation; the same number of cancers might have occurred due to the natural frequency of the disease. The challenge is to separate out the effects of radiation from what would otherwise have occurred. A major step in this direction is the follow-up (or cohort) study, in which an exposed group is followed over time to observe its disease rates, which are then compared with the rates for the general population or an unexposed control group.[109]
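As a rough illustration of how such a comparison works, the sketch below tallies the number of cases "expected" in an exposed cohort from general-population rates and compares it with the number actually observed; every number in it is hypothetical.

```python
# All numbers are hypothetical. Person-years of follow-up in the exposed
# cohort, by age group, and general-population rates (per 100,000 per year).
person_years = {"40-49": 20_000, "50-59": 15_000, "60-69": 10_000}
population_rates = {"40-49": 50.0, "50-59": 150.0, "60-69": 400.0}

observed_cases = 90  # cases actually seen in the exposed cohort (hypothetical)

# Cases "expected" if the cohort had experienced general-population rates.
expected_cases = sum(
    person_years[age] * population_rates[age] / 100_000 for age in person_years
)

print(f"expected from population rates: {expected_cases:.1f}")                   # 72.5
print(f"ratio of observed to expected:  {observed_cases / expected_cases:.2f}")  # about 1.24
```

An excess of observed over expected cases suggests, but by itself does not prove, an effect of the exposure; the rest of this section describes why.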

Why is the analysis of epidemiologic data so complicated?

Simply collecting data on disease rates in exposed and control populations is not enough; indeed, casual analysis may lead to serious errors in understanding. Sophisticated data-collection techniques and mathematical models are needed to develop useful risk estimates for several reasons:

  1. Random variation due to sample size.

  2. Multiple variables.

  3. Limited time span of most studies.

  4. Problems of extrapolation.

In addition, individual studies may also be biased in their design or implementation.

What is random variation?

The observed proportion of subjects developing disease in any randomly selected subgroup (sample) of individuals with similar exposures is subject to the vagaries of random variation.

A simple-minded example of this is the classic puzzle of determining, in a drawer of 100 socks, how many are white and how many are black, by pulling out one sock at a time. Obviously, if we pull out all the socks, we know for certain. In most areas of study, though, "pulling out all the socks" is far too expensive and time-consuming. But if we pull only 10, with what degree of confidence can we predict the color of the others? If we pull 20, we will have more confidence. In other words, the larger the sample, the greater our confidence. Using statistical techniques, our degree of confidence can be calculated from the size of the entire population (in this case 100 socks) and the size of the actual sample. The result is popularly called the margin of error.

The most common examples of this in everyday life are the public opinion polls continually quoted in the news media. As can be seen in the simple example of the drawer of socks, the highest degree of confidence can be achieved simply by pulling all the socks out of the drawer. For public opinion polls, this would be far too expensive; instead, a small sample is selected at random from the population. Nowadays it is common to report not only the actual results, but also the sample size and the margin of error. The margin of error depends not only on the sample size, but also on how high a degree of confidence we desire. The degree of confidence is the probability that our sample has provided a true picture of the entire population. For example, the margin of error will be smaller for 80 percent degree of confidence than for 95 percent. Even where a study covers an entire exposed population, such as the atomic bomb survivors, the issue of random variation remains when we wish to generalize the findings to other populations.
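For readers who want to see the arithmetic, the sketch below applies the standard normal-approximation formula for the margin of error of a sampled proportion, with a finite-population correction to reflect the fixed drawer of 100 socks; the 1.96 and 1.28 multipliers correspond to roughly 95 percent and 80 percent confidence.

```python
import math

def margin_of_error(p_hat, n, population_size, z):
    """Approximate margin of error for a proportion p_hat observed in a
    sample of n drawn from a finite population."""
    standard_error = math.sqrt(p_hat * (1 - p_hat) / n)
    finite_correction = math.sqrt((population_size - n) / (population_size - 1))
    return z * standard_error * finite_correction

# Drawer of 100 socks: pull 10, then 20, and suppose half the pulled socks are white.
for n in (10, 20):
    moe_95 = margin_of_error(p_hat=0.5, n=n, population_size=100, z=1.96)
    moe_80 = margin_of_error(p_hat=0.5, n=n, population_size=100, z=1.28)
    print(f"sample of {n}: about ±{moe_95:.0%} at 95% confidence, "
          f"±{moe_80:.0%} at 80% confidence")

# The margin shrinks as the sample grows, and it is smaller at 80 percent
# confidence than at 95 percent, as described above.
```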

What are multiple variables?

The effects of radiation will depend upon, or vary with, the dose of radiation received. However, these effects also may vary with other factors--other variables--that are not dependent upon the radiation dose itself. Examples of such variables are age, gender, latency (time since exposure), and smoking. Data on these other variables must be collected as well as data on the basic elements of radiation dose and disease. The challenge is then to distinguish between disease rates due to radiation and those due to other factors. For example, if the population studied were all heavy smokers, this might explain in part a higher rate of lung cancer. Much of the science of epidemiology is devoted to choosing which factors to collect data on and then developing the multivariate mathematical models needed to separate out the effect of each variable. Because radiation effects vary considerably across subgroups and over time or age, direct estimates of risk for particular subgroups would be very unstable; mathematical models must be used instead. These models allow all the data to be used to develop risk estimates that, while based on numbers of cases large enough to be stable, are still applicable to particular subgroups.
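Stratification is one simple way such a separation can be made. In the hypothetical sketch below, the exposed group happens to contain many more smokers than the unexposed group, so a crude comparison exaggerates the apparent radiation effect; comparing within smoking strata recovers the smaller, consistent relative risk.

```python
# Hypothetical lung cancer rates (per 100,000 per year) by exposure and smoking.
rates = {
    ("unexposed", "nonsmoker"): 20.0,
    ("unexposed", "smoker"): 200.0,
    ("exposed", "nonsmoker"): 30.0,
    ("exposed", "smoker"): 300.0,
}

# Crude comparison: suppose 80% of the exposed group but only 30% of the
# unexposed group are smokers, and mix the rates accordingly.
crude_exposed = 0.8 * rates[("exposed", "smoker")] + 0.2 * rates[("exposed", "nonsmoker")]
crude_unexposed = 0.3 * rates[("unexposed", "smoker")] + 0.7 * rates[("unexposed", "nonsmoker")]
print(f"crude relative risk (confounded by smoking): {crude_exposed / crude_unexposed:.1f}")

# Stratified comparison: within each smoking stratum the relative risk is 1.5,
# so much of the crude excess was attributable to smoking, not radiation.
for smoking in ("nonsmoker", "smoker"):
    rr = rates[("exposed", smoking)] / rates[("unexposed", smoking)]
    print(f"relative risk among {smoking}s: {rr:.1f}")
```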

A more subtle problem is misspecification of the model finally chosen to calculate risks. The model may weigh selected factors in a manner that best fits the data from a statistical viewpoint. This model, while fitting the data, may not actually be a "correct" view of nature; another model that does not fit the data quite as well may actually better describe the as-yet-unknown underlying mechanisms.

Why does a limited time span reduce the value of a study?

The most pronounced effects of large exposures to radiation manifest themselves quickly in symptoms loosely termed radiation sickness.

However, another concern is understanding the effects of much lower levels of radiation. Unlike the more acute effects of large exposures, these may not appear for some time. Some cancers, for example, do not appear until many years after the initial exposure. These latent effects may continue to appear in a population throughout its members' entire lifetimes. Calculating the lifetime risk of an exposure requires following the entire sample until all its members have died. So far, no exposed population has been followed to the end of its members' lives, although the study of the radium dial painters who painted before 1930 essentially has been completed and its follow-up has been closed out.[110]

Why does extrapolation among human populations pose problems?

As discussed earlier, extrapolating results from one species to another is problematic due to differences in how species respond to radiation.

Even though humans are all members of the same species, there are similar problems when extrapolating results from one group of humans to another group. Within the human species, different groups can have different rates of disease. For example, stomach cancer is much more common and breast cancer much rarer among Japanese than among U.S. residents.

How then should estimates of the radiation-induced excess of cancer among the atomic bomb survivors be applied to the U.S. population? Assumptions are needed to "transport" risk estimates from one human population to another human population that may have very different "normal" risks.
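The sketch below, with invented rates chosen only to echo the stomach cancer example, shows why the choice of assumption matters: carrying over the relative risk and carrying over the absolute excess give very different projections when baseline rates differ.

```python
# Hypothetical rates chosen only to echo the stomach cancer example above.
japanese_baseline = 80.0  # stomach cancer rate per 100,000 per year (hypothetical)
us_baseline = 10.0        # much lower baseline rate in the U.S. (hypothetical)

# Suppose a study of an exposed Japanese cohort found a relative risk of 1.5,
# that is, an absolute excess of 40 cases per 100,000 per year.
relative_risk = 1.5
absolute_excess = (relative_risk - 1) * japanese_baseline

# Transporting the relative risk assumes the same proportional increase;
# transporting the absolute excess assumes the same added number of cases.
us_excess_relative = (relative_risk - 1) * us_baseline
us_excess_absolute = absolute_excess

print(f"projected U.S. excess, relative-risk transport: {us_excess_relative:.0f} per 100,000 per year")
print(f"projected U.S. excess, absolute-risk transport: {us_excess_absolute:.0f} per 100,000 per year")
```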

Why does extrapolation from high to low doses pose problems?

Acquiring high-quality human data on low-dose exposure is difficult. Past studies indicate that the effects of low doses are small enough to be lost in the "noise" of random variation. In other words, the random variation due to sample size may be greater than the effects of the radiation. Thus, to estimate the risks of low doses, it is necessary to extrapolate from the effects of high doses down to the lower range of interest. As with extrapolation among species or among human populations, assumptions must be made.

The basic assumption concerns the dose effect. Is the effect of a dose linear? This would mean that half the dose would produce half the effect; one-tenth of the dose would produce one-tenth of the effect, and so forth. Nature is not always so reasonable, however. There are many instances in nature of nonlinear relationships. A nonlinear dose effect, for example, could mean that half the dose would produce 75 percent of the effect. Or, going in the other direction, a nonlinear dose effect could mean that half the dose would produce only 10 percent of the effect. Reliable data are too sparse to settle the issue empirically. Much of the ongoing controversy over low-dose effects concerns which dose-effect relationship to assume. Most radiation advisory committees assume that radiation risks are linear in dose at low levels, although the dose response may involve nonlinear terms at higher doses.
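The sketch below, using arbitrary coefficients rather than values endorsed by any advisory committee, shows how a purely linear model and a linear-quadratic model that agree at a high dose can diverge sharply when extrapolated down to low doses.

```python
# Arbitrary coefficients, chosen only so the two models agree at 10 rem.

def linear(dose):
    """Excess risk proportional to dose."""
    return 0.08 * dose

def linear_quadratic(dose):
    """Excess risk with linear and quadratic terms in dose."""
    return 0.04 * dose + 0.004 * dose**2

high_dose, low_dose = 10.0, 1.0  # rem

for model in (linear, linear_quadratic):
    fraction = model(low_dose) / model(high_dose)
    print(f"{model.__name__}: risk at {low_dose:.0f} rem is {fraction:.1%} "
          f"of the risk at {high_dose:.0f} rem")

# The linear model gives one-tenth of the effect at one-tenth of the dose;
# the linear-quadratic model gives roughly half that, so the two diverge
# most where the data are weakest -- at low doses.
```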

Another assumption concerns the effect of dose rate. It is generally agreed that the effect of high-dose x rays is reduced if the radiation is received over a period of time instead of all at once. (This reduction in acute effects, due to the cell's ability to repair itself in between exposures, is one of the reasons that modern protocols for radiotherapy use several fractionated doses.) The degree to which this also happens at low doses is less clear. There are few human data on the effect of dose rate on cancer induction. Most estimates of the effect come from animal or cell culture experiments. There is also evidence of quite different dose-rate effects for alpha radiation and neutrons.

How can a specific study be biased?

When applied to an epidemiologic study, the term bias does not refer to the personal beliefs of the investigators, but to aspects of the study design and implementation. There are several possible sources of bias in any study.

What is called a confounding bias may result if factors other than radiation have affected disease rates. Such a factor, as mentioned earlier, might be a rate of smoking higher than that of the general population.

A selection bias may result if the sample was not truly a random selection from the population under study. For example, the results of a study that includes only employed subjects might not be applicable to the general population, since employed people as a group are healthier than the entire population.

An information bias may result from unreliability in a source of basic data. For example, basing the amount of exposure on the memory of the subjects may bias the study, since sick people may recall differently than healthy people. Dose, in particular, can be difficult to determine when studies are conducted on populations exposed prior to the study, since there usually was no accurate measurement at the time of exposure. Sometimes when dose measurements were taken, as in the case of the atomic veterans, the data are not adequate by today's standards.[111]

Finally, any study is subject to the random variation discussed earlier, which depends on how large the sample is. This is more important for low-dose than for high-dose studies, since the low-dose effects themselves are small enough to be lost amid random variations if the sample is too small.

To summarize, multiple studies may produce somewhat different results because there is an actual difference in the response between populations or because studies contain spurious results due to their own inadequacies. In addition, it must be recognized that the entire body of scientific literature is itself subject to a form of bias known as publication bias, meaning an overreporting of findings of excess risk. This is because studies that demonstrate an excess risk may be more likely to be published than those that do not.

In view of all these uncertainties, what risk estimates did the Committee choose?

Despite all these uncertainties, it must be pointed out that more is known about the effects of ionizing radiation than about any other carcinogen.

The BEIR V Committee of the National Academy of Sciences estimated in 1990 that the lifetime risk from a single exposure to 10 rem of whole-body external radiation was about 8 excess cancers (of any type) per 1,000 people. (This number is actually an average over all possible ages at which an individual might be exposed, weighted by population and age distribution.) For continuous exposure to 0.1 rem per year throughout a lifetime, the corresponding estimate was 5.6 excess cancers (that is, over and above the rate expected in a similar, but nonexposed population) per 1,000 people. It is widely agreed that for x rays and gamma rays, this latter figure should be reduced by some factor to allow for a cell's ability to repair DNA, but there is considerable uncertainty as to what figure to use; a figure of about 2 or 3 is often suggested.[112]

The estimates of lifetime risk from the BEIR V report have a range of uncertainty due to random variation of about 1.4-fold. The additional uncertainties, due to the factors discussed earlier, are likely to be larger than the random variation.
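For orientation, the arithmetic implied by these figures can be written out explicitly. The sketch below simply divides the quoted continuous-exposure estimate by the suggested reduction factors of 2 and 3 and, purely for illustration, applies the 1.4-fold factor in both directions to the single-exposure estimate.

```python
# The only inputs are the figures quoted in the text above; the arithmetic
# is shown purely for orientation.
single_exposure_risk = 8.0      # excess cancers per 1,000 after a single 10-rem exposure
continuous_exposure_risk = 5.6  # excess cancers per 1,000 for 0.1 rem per year over a lifetime

# Reduction of the continuous-exposure figure by a repair/dose-rate factor
# of about 2 or 3, as suggested in the text.
for factor in (2, 3):
    print(f"reduced by a factor of {factor}: "
          f"{continuous_exposure_risk / factor:.1f} excess cancers per 1,000")

# The roughly 1.4-fold uncertainty from random variation, applied for
# illustration to the single-exposure estimate of 8 per 1,000.
low, high = single_exposure_risk / 1.4, single_exposure_risk * 1.4
print(f"range from random variation alone: about {low:.1f} to {high:.1f} per 1,000")
```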

In comparison, for most chemical carcinogens, the uncertainties are often a factor of 10 or more. This agreement among studies of radiation effects is quite remarkable and reflects the enormous amount of scientific research that has been devoted to the subject, as well as the large number of people who have been exposed to doses large enough to show effects.