At its July
2003 meeting, the Council discussed with Daniel Callahan
the following chapter from his forthcoming book, "What Price
Better Health? Hazards of the Research Imperative" (University
of California Press, Berkeley, October 2003). The ideas contained
in this draft chapter are the author's own, and do not represent the
official views of the Council or of the United States Government.
Chapter 3
Is Research a Moral Obligation?
Plagues, Death and Aging
By Daniel Callahan
In 1959, Congress passed a “Health for Peace” bill,
behind which was a view of disease and disability as “the
common enemy of all nations and peoples.”1
In 1970, President Nixon declared a “war” against cancer.
Speaking of a proposal in Great Britain in 2000 to allow stem cell
research to go forward, Science Minister Lord Sainsbury said that
“The important benefits which can come from this research
outweigh any other considerations,” a statement that one newspaper
paraphrased as outweighing “ethical concerns.”2
Arguing for the pursuit of potentially hazardous germ-line therapy,
Dr. W. French Anderson, editor-in-chief of Human Gene Therapy,
has written that “we as caring human beings have a moral mandate
to cure disease and prevent suffering…”3
A similar note was struck in an article by two ethicists who held
that there is a "prima facie moral obligation” to carry
out research on germ cell gene therapy.4
As if that were not enough, in 1999, a distinguished group of scientists,
including many Nobel laureates, issued a statement urging federal
support of stem-cell research. They said that because of its “enormous
potential for the effective treatment of human disease, there is
a moral imperative to pursue it.”5
Two other ethicists said much the same, speaking of “the moral
imperative of compassion that compels stem cell research,”
and adding that at stake are the “criteria for moral sacrifices
of human life,” a possibility not unacceptable to them.6
The Human Embryo Research Panel, created by NIH, contended in 1994
that federal funding to create embryos for research purposes should
be allowed to go forward “when the research by its very nature
cannot otherwise be validly conducted,” and “when the
fertilization of oocytes is necessary for the validity of a study
that is potentially of outstanding scientific and therapeutic value.”7
The flavor of these various quotations tells its own story. The
proper stance toward disease is that of warfare, seeking an unconditional
surrender. Ethical objections, when they arise, should give way
to the likely benefits of research, even if the benefits are still
speculative (as with stem cell and germ-line research). The notion
that a good reason to set aside ethical considerations is that research
could not otherwise be “validly conducted” is particularly
striking. It echoes an objection, much heard from researchers during
the 1960s when regulation of human subject research was imminent: that
regulations would cripple research. That kind of reasoning is the
research imperative in its most naked--and hazardous--form, the
end unapologetically justifying the means. I am by no means claiming
that most researchers or ethicists hold such views. They are still
in the minority, but they are among the “shadows” this
book is about.
What should be made of this way of thinking about the claims of
research? How appropriate is the language of warfare, and how extensive
and demanding is the so-called moral imperative of research? In
this chapter I want to begin by exploring those questions and then
move on to the wars against death and aging, two fundamental, inescapable
biological realities so far—and two notorious and clever foes.
The Metaphor of “War”
Since at least the 1880s--with the identification of bacteria as
agents of disease--the metaphor of a “war” against illness
and suffering has been popular and widely deployed. Cancer cells
“invade” the body, “war stories” are a feature
of life “in the trenches” of medicine, and the constant
hope is for a “magic bullet” that will cure disease
in an instant.8 Since there are
surely many features of medicine that may be likened to war, the
metaphor is hardly far-fetched, and it has proved highly serviceable
time and again in the political effort to gain money for research.
Less noticed are the liabilities of the metaphor, inviting excessive
zeal and a cutting of moral corners. The legal scholar George Annas
has likened the quest for a cure of disease to that of the ancient
search for the Holy Grail: “Like the knights of old, a medical
researcher’s quest of the good, whether that be progress in
general or a cure for AIDS or cancer specifically, can lead to the
destruction of human values we hold central to a civilized life,
such as dignity and liberty.”9
“Military thinking,” he has also written, “concentrates
on the physical, sees control as central, and encourages the expenditure
of massive resources to achieve dominance.”10
The literary critic Susan Sontag, herself a survivor of cancer,
has written that “We are not being invaded. The body is not
a battlefield….We--medicine, society--are not authorized to
fight back by any means possible…. About that metaphor, the
military one, I would say, if I may paraphrase Lucretius: Give it
back to the war-makers.”11
While some authors have tried to soften the metaphor by applying
just war theory to the war against disease, a sensible enough effort,
the reality of warfare does not readily lend itself to a respect
for nuanced moral theory. Warriors get carried away with the fight,
trading nasty blow for nasty blow, singlemindedly considering their
cause self-evidently valid, shrugging aside moral sensitivities
and principles as eminently dispensable when so much else of greater
value is thought to be at stake. It is a dangerous way of thinking,
and all the more so when--as is the case with so much recent research
enthusiasm--both the therapeutic benefits and the social implications
are uncertain.
Is Research a Moral Obligation?
Yet if the metaphor of war is harmful, lying behind it is the notion
of an insistent, supposedly undeniable moral obligation. Nations
go to war, at least in just wars, to defend their territory, their
values, their way of life. They can hardly do otherwise than to
consider the right to self-defense to be powerful, a demanding and
justifiable moral obligation to protect and defend themselves against
invaders. To what extent, and in what ways, can we be said to have
an analogous moral obligation to carry out research aiming to cure
or ameliorate suffering and disease, which invade our mind and body?12
Historically, there can be little doubt that an abiding goal of
medicine has been the relief of pain and suffering, and it has always
been considered a worthy and highly defensible goal--as well it
should. The same can be said of medical research that aims to implement
that goal. It is a valid and valuable good, well deserving of public
support. As a moral proposition it is hard to argue with the idea
that, as human beings, we should do what we can to relieve the human
condition of avoidable disease and disability. Research has proved
to be a splendid way of doing that.
So the question is not whether research is a good. Yes, surely.
But the important questions are these. How high and demanding a
good is it? Is it a moral imperative? Are there any circumstances
when ethical safeguards and principles can be set aside if they
stand in the way of worthy research? And how does the need for research,
and the good of research, compare with other social needs and goods?
The long-honored moral principle of beneficence comes into play
here, that of a general obligation to help those in need when it
is possible for us to do so.
Philosophically, it has long been held that there are perfect
and imperfect obligations. The former entail obligations
with corresponding rights: I am obliged to do something because
others have a right to it, either because of contractual agreements
or because my actions or social role generate rights that others
can claim against me. The latter obligations, imperfect in nature,
do not have corresponding rights. They are non-specific in the sense
that no one can make a claim that we owe to them a special duty
to carry out some particular action on their behalf.13
Medical research has historically fallen into that latter category.
There has long been a sense that beneficence requires that we work
to relieve the medical suffering of our fellow human beings, as
well as a felt obligation to pursue medical knowledge to that end.
But it is inevitably a general, imperfect obligation rather than
a specific, perfect obligation: no one can claim a right to insist
that I support research that might cure him of his present disease
at some point in the future. Even less can it be said that there
is a right on the part of those not yet sick, but who someday might
be (e.g., those at risk of cancer), to demand that I back research
that might help them avoid getting sick. Nor can a demand be made
on a researcher that it is his or her duty to carry out a specific
kind of research that will benefit a specific category of sick people.
This is not to say that, if someone takes on the role of researcher,
and has particular knowledge and skills to combat disease, she has
no obligation to do so. On the contrary, the choice of becoming
a researcher (or doctor, or fireman, or lawyer) will create what
are often called role obligations, and it would be legitimate to
insist that medical researchers have a special duty to make good
use of their skills toward the cure of the sick. But even here it
is an imperfect obligation because no one can claim a right to demand
that research be carried out by a particular researcher to work
on his specific disease. At most, there is an obligation to discharge
his moral role as someone who ought to make a responsible use of
his skills and training to work on some disease or another. Even
here, however, we probably would not call a researcher who chose
to carry out some basic research, but with no particular clinical
application in mind, an irresponsible researcher.
This is no mere ethical quibbling or hair-splitting. If the language
of an “imperative” is to be used, we can reasonably
ask who exactly has the duty to carry out that imperative and who
has the right to demand that someone do so. If we can not give a
good answer to those questions, we might still want to argue that
it would be good (for someone) to do such research and that it would
be virtuous of society to support it. But the language of a “moral
imperative” could no longer meaningfully be used. We ought
to act in a beneficent way toward our fellow citizens, but there
are many ways of doing that, and medical research can claim no more
of us than many other worthy ways of spending our time and resources.
We could be blamed if we spent a life doing nothing for others, but
it would be unfair to blame us if we chose to do other good works
than support, much less personally pursue, medical research. It
is thus a distortion of a main line of western moral philosophy
to claim that there is any kind of research--such as medical research--
that carries with it some prima facie obligation to be supported
and advanced.
The late philosopher Hans Jonas has put the matter as succinctly
as anyone: “Let us not forget that progress is an optional
goal, not an unconditional commitment, and that its tempo in particular,
compulsive as it may become, has nothing sacred about it. Let us
also remember that a slower progress in the conquest of disease
would not threaten society, grievous as it is to those who have
to deplore that their particular disease be not conquered, but that
society would indeed be threatened by the erosion of those moral
values whose loss, possibly caused by too ruthless a pursuit of
scientific progress, would make its most dazzling triumphs not worth
having.”14 In another
place he wrote that “The destination of research is essentially
melioristic. It does not serve the preservation of the existing
good from which I profit myself and to which I am obligated. Unless
the present state is intolerable, the melioristic goal is in a sense
gratuitous, and this not only from the vantage point of the present.
Our descendants have a right to be left an unplundered planet; they
do not have a right to new miracle cures.”15
In the category of the “intolerable” would surely be
rapidly spreading epidemics, taking thousands of young lives and
highly destructive to the social life and viability of many societies.
AIDS, and some of the classic earlier plagues, assault societies as
a whole, damaging and destroying their social infrastructure. But though
they bring terrible individual suffering, few other diseases--including
cancer and heart disease--can be said now to threaten the well being
and future viability of any developed society as a society. They
do not require the kind of obsession with victory that invites a
ruthlessness and mocking of morality that at times surfaces in the
present war against disease, a war that requires “setting
aside,” in Lord Sainsbury’s words, “any other
consideration.”
Jonas should not by any means be construed as an enemy of research.
He was writing in the context of the 1960s debate on human subject
research, and was holding only that the cost in time lost because
of regulatory safeguards to protect the welfare of research subjects
was a small, but necessary, price to pay to preserve important moral
values and to protect the good name of research itself. But he was
also making a larger point about absolutizing disease, as if no
greater evil existed, of a kind that legitimized an unbounded assault.
Not everything that is good and worthy of doing, as is research,
ought to be absolutized. That view distorts a prudent assessment
of human need, inviting linguistic hyperbole and excessive rationalization
of dubious or indefensible conduct.16
Moreover, like any other social good, medical research has
its own opportunity costs (as an economist could put it); that is,
the money that could be spent on medical research to improve the
human condition could also be spent on something else that would
bring great benefits as well, whether on public health, education,
job-creating research, or on other forms of scientific research,
such as astronomy, physics, and chemistry.
Health may indeed be called special among human needs. It is a necessary
pre-condition of the other goods of life. But at least in developed
countries, with high general levels of health for most people for
most of their lives, that pre-condition is now largely met at the
societal, if not the individual, level; other goods may legitimately
compete with it. With the exception of plagues, no disease or medical
condition can claim a place as an evil that must be erased, though
many would surely be good to erase.
Hardly anyone in medical research is likely to deny the truth of
those assertions, but it is not something that is much said in public.
One way in which medical research gets absolutized, and then abused,
is through turning the evils it aims to erase into nasty devils,
evil incarnate. A standard way in which good and just wars often
descend into nasty and immoral wars is by demonizing the enemy, making
it worthy only of eradication ("gooks" in the Vietnam War).
The weapons of war, including those brought to bear against disease,
are then treated as indispensable, as if no other choice is available.
What I have tried to do so far is to indicate why the language of
war, or moral imperative, can be hazardous to use, giving too high
a moral and social place to overcoming death, suffering, and disease.
It becomes “too high” when it begins to encroach upon,
or tempt one to put aside, other important values, obligations,
and social needs. Nonetheless, there is a way of expressing a reasonable
moral obligation that need not run those dangers. It is to build
upon, and incorporate into thinking about research, the most common
arguments in favor of universal health care, that is, the provision
of health care to all citizens regardless of their ability to pay
for that care.
There are different ways of expressing the underlying moral claim:
as a right to health care, which citizens can claim against the
state; as an obligation on the part of the state to provide health
care; and as a commitment to social solidarity. The idea of a “right”
to health care has not fared well in the United States, which is
one reason the 1984 President’s Commission used the language
of a government obligation to provide health care.17
A characteristic way of putting the rights or obligations is in
terms of justice. As one of the prominent proponents of a just and
universal care system, Norman Daniels, has put it, “…by
keeping people close to normal functioning, healthcare preserves
for people the ability to participate in the political, social,
and economic life of their society."18
To this I would add: and to participate in the family and private
life of communities. The aim in Daniels’ view is that of a
“fair equality of opportunity.” The concept of “solidarity,”
rarely a part of American political thought, is strong in Canada
and Western Europe. It focuses on the need of those living together
to support and help each other, to make of themselves a community
by putting in place those health and social resources necessary
for people to function as a community. The language of rights and
obligations characteristically focuses on the needs of individuals.19
The language of solidarity is meant to locate the individual within
a community and, with health care, to seek a communal and not just
individual good.
It is beyond the scope of this book to take up in any further detail
those various approaches to the provision of health care. It is
possible, however, to translate that language and those approaches
into the realm of medical research. We can ask whether, if society
has an obligation to provide health care--meaning those diagnostic,
therapeutic, and rehabilitative capabilities that are presently
available--there is a like obligation to carry out research on those
diseases and medical conditions that at present are not amenable to
treatment, or are amenable only to poorly effective
treatment. "Fair equality of opportunity," it can
plausibly be argued, should not extend only to those whose medical
needs can be met with available therapies. Those who are not in
that lucky circle can make a case that justice requires they be
given a chance as well and that research is necessary to make that
possible.
Three important provisos are necessary. First, there is the need
to understand that rationing must be a part of any universal health
care system: no government can afford to make available to everyone
everything that might meet their health care needs. There will of
necessity be resource limitations that will require the setting
of priorities—and this will be as true of the availability
of research funds as of health care delivery funds. Second,
it would seem unjust to invest money in research that would
knowingly end in treatments or therapies that a government trying
to cover all citizens could not afford, or that would be available
only privately to those with the money to pay for them. (See chapters 9 & 10
for a further development of these two points.) Third, it will be
important to understand that neither medical research nor health
care delivery is the sole determinant of health: social, economic,
and environmental factors have a powerful role as well.
Instead of positioning medical research as a moral imperative, it
can be understood as a key part of a vision of a good society. A
good society is one interested in the full welfare of its citizens,
supportive of all those conditions conducive to individual and communal
well being. Health would be an obviously important component of
such a vision, but only if well integrated with other components:
jobs, social security, family welfare, social peace, and environmental
protection. No one of those conditions, and others that could plausibly
be posited, is both necessary and sufficient; each is necessary
but none is sufficient. It is the combination that does the work,
not the individual pieces in isolation. Research to improve health
would be a part of the effort to achieve an integrated system of
human goods. But neither perfect health nor the elimination of all
disease is a prerequisite for a good 21st-century society--health is
a prerequisite only to the extent that its absence is a major obstacle
to pursuing the other goods. It is wonderful for medical researchers
and the public that the budget of the NIH has usually outstripped
other science and welfare budgets in its annual increases. But that
pattern may reflect a misperception of the full range of our social
needs, which might benefit from comparable increases in the programs
devoted to them.
In the remainder of this chapter, I want to turn to two purported
evils, death and aging, both fine case studies for examining
the research imperative. I begin with them because, throughout human
history, they have been looked upon as evils of a high order, at
least for individuals, and because they are most frequently brought
forth as stark examples of evil that research should aim to overcome.
Two of my closest friends died of cancer and one of stroke during
the year in which I wrote this book, so it should be understood
that I do not separate what I write here from my own reflections.
I wish they were still alive. Those who are not dying are getting
old, and--like me--have some mixed feelings about that, not all
of them optimistic.
I choose death and aging as my starting point not only because they
have long been understood as fixed inevitabilities, but also because
they have two interesting differences. Death is treated as an evil
in and of itself with no redeeming features (unless, now and then,
as surcease from pain). With aging, by contrast, the evil is there--who
wants it and who needs it?--but the flavor is one of annoyed resignation,
of an evil which (probably) can’t be avoided but which, if
we allow our imaginations to roam, might be understood
differently and might even be fought in some successful manner.20
Put another way, the fight against death is seen as imperative,
while the fight against aging seems worthy and desirable, even if
not quite in the heavyweight class of death. But they both have
a special status in medicine: they have been seen as biological
inevitabilities, a fixed part of the human condition, while particular
diseases that afflict people have been understood--at least in principle--to
be open to cure.
The War Against Death
It is not too far-fetched to say that the most important war conducted
by modern medicine is the war against death. A decline in mortality
rates from various diseases is celebrated as the greatest of medical
victories, and it is no accident that the National Institutes of
Health has provided the most research money over the years to those
diseases that kill the most people, notably cancer and heart disease.
Oddly enough, however, the place of death in human life, or the
stance that medicine ought, ideally or theoretically, to take toward
death, has received remarkably little discussion. The
leading medical textbooks hardly touch the topic at all other than
(and only recently) the care of the terminally ill.21
While it is no longer the case that death is not talked about, Susan
Sontag was right to note that it is treated, if at all, as an “offensively
meaningless event”—and, I would add, “fit only
to be fought.”22
Now of course it is hardly difficult to understand why this attitude
is so strong. Few of us look forward to our death, most of us fear
it, and almost all of us find it difficult to know what to make
of it, how to give it some plausible meaning, whether philosophical
or religious. For the individual, death is the end of our consciousness
and our experience, of any future (worldly) hopes and dreams, and
of our relationship with other people. Unless we are over-burdened
with pain and suffering, there is not much good that can be said
about death for individual human beings, and most people are actually
willing to put up with much suffering rather than give up life altogether.
Death has been feared and resisted and fought, and that seems a
perfectly sensible response.
Yet it is a response that does not clearly tell us how medicine
should look upon death. Death is, after all, a fact of biological
existence and, since humans are at least organic, biological creatures,
it might seem evident that it needs to be accepted. Death is just
there, built into us, waiting only for the conditions necessary
for it to declare and express itself. Why, then, should medicine
treat it as an enemy, particularly a medicine that works so hard
to understand how the body works and how it relates to the rest
of nature? Some articulate reasons have been offered to make biological
sense of death. Death, the late physician-essayist Lewis Thomas
once wrote, “is a natural marvel. All of the life of the earth
dies, all of the time, in the same volume as the new life that dazzles
us each morning, each spring…In our way, we conform as best
we can to the rest of nature. The obituary pages tell us of the
news that we are dying away, while the birth announcements in finer
print, off at the side of the page, inform us of our replacements.”23
Many biologists and others have pointed out the importance of death
as a means of constantly replenishing the vitality and freshness
of human life as a species. New people come into the world and thereby
open the way for change and development; others die and thus facilitate
the new and the novel. Moreover, is it not the case that our recognition
of the finiteness of our lives, the brute fact that they come to
an end, itself sharpens our appreciation of what we have and what
we might do to make the most of it? If, say, we had bodily immortality
in this world, would not the danger of boredom and tedium be a real
possibility? “Nothing less will do for eternity,” Bernard
Williams has written, “than something that makes boredom unthinkable.”
And Williams believes it exceedingly difficult to come up, even
imaginatively, with an unendingly satisfying model of immortality.24
Jonas has well caught what seems to me the essence of the ambivalence
about death when he speaks of mortality as, in some inextricable
way, both a burden and a blessing: “…the gift of subjectivity
only sharpens the yes-no polarity of all life, each side feeding
on the strength of the other. Is it, in the balance, still a gain,
vindicating the bitter burden of mortality to which the gift is
tied, which it makes even more onerous to bear?”25
His answer is yes, in part because of the witness of history to
the renewal that new lives bring and the passing of the generations
makes possible, and in part because it is hard to imagine that a
world without death would be a richer biological and cultural world,
more open in its possibilities than the world we now have.
Part of the problem is simply that we know nothing beyond what we
already know: that life, when it is good, is good. We cannot imagine
(save for a religious vision of immortality) anything much better;
and surely not the nothingness of death. Hence, we hold tight to
what we know. Even so, simply extending life is no guarantee that
the good we now find in life, at younger ages, would continue indefinitely
into the future; boredom, ennui, the tedium of repetition may well
weigh us down. Nor is there any guarantee that our bodies would
remain free of frailty, late-late onset dementia, failing organs.
Even under the best prospects, there would be hazards, physical
and mental, to be run for a prize that might hardly be worth winning.
My own conclusion is this: while it makes sense for medicine to
combat some causes and forms of death, it makes no sense to consider
death as such the enemy. To give that fight a permanent priority
distorts the goals of medicine, taking money from research
that could improve the quality of life. Moreover, it is evident that
there is no end to the amount of money that can be spent to combat
death, which so far in human history always wins in the long run.
There will always be what I have elsewhere called the “ragged
edge of progress”--that point where our present knowledge
and technology run out, with illness and death returning; and however
much progress is made, there will always be such a point.26
No matter how far we go, and how successful we are in the war on
death, people will continue to die, and they will die of some lethal
disease or other that research has yet to master. So far
as I can determine, no reason has ever been advanced why death should
become the permanent enemy, and it is not evident what would constitute
a good reason. If it is possible to doubt some of the reasons offered
in favor of mortality, it is far more difficult to make an overpowering
case why medicine should make mortality itself the ultimate enemy.
The most serious questions are how much emphasis research
should place on the forestalling of death, and just which kinds
of death.
Medicine’s Schism About Death
There is an immediate problem in trying to answer those questions.
At the heart of modern medicine is an important though implicit
schism about the place and meaning of death in human life. It is
a conflict that pits the research imperative to overcome death against
the newly emergent (even if historically ancient) clinical imperative
to accept death as a part of life in order to help make dying as
tolerable as possible. This schism is possibly inescapable, but
it nonetheless has some untoward consequences for the setting of
medical research priorities and for understanding the appropriate
stance of medicine toward death in the care of patients. It bespeaks
a fundamental ambivalence about the way death should be interpreted
and dealt with. My question is this: if this schism is truly present,
and if it creates serious problems for clinical care and for the setting of research
priorities, are there some ways to soften its impact, to lessen
the friction, and to find a more coherent understanding of death?
In the Western world, death was not considered the enemy of ancient
medicine. It could not be helped. Only with the modern era, and
the writings of Rene Descartes and Francis Bacon in the 16th and
17th centuries, did the goal of a medical struggle against death
emerge.27 Prior to that time,
the cultural and religious focus was on finding a meaning for death,
on giving it a comprehensible place in human experience, and on
making the passage from life to death as comfortable as possible.28
Post-Baconian medicine put aside that search. Death was declared
the enemy. Karl Marx once said that the task of philosophy is not
to understand the world but to change it. Modern medicine, to paraphrase
Marx, has seemed in effect to say that its task is not to understand
death but to eliminate it. The various “wars” against
cancer and other diseases in recent decades reflect that mission.
For what is the logic of an unrelenting war against all lethal disease
other than a kind of trench warfare against death itself?
The Mixed Record of Reform
Given that background, it is possible to better understand why the
various efforts over recent decades to improve care at the end of
life have proved so frustrating, only fitfully successful. They
have tried to promote a different outlook on death, and that has
not been easy. During the early- to mid-1970s, three major reform
efforts were initiated, and they have been pursued ever since. The
first was the effort to introduce advance directives into patient
care, a strategy designed to give patients some choice about the
kind of care they receive when dying. The second was the hospice
movement, pioneered by Cicely Saunders in Great Britain and introduced
at the Yale-New Haven Hospital in 1974. The third effort was to
improve the education of medical students and residents on care
at the end of life.
Of the three, hospice is probably the most successful, caring for
over 500,000 patients a year out of some 2.3 million annual deaths.
But hospice services have been mainly effective with cancer patients,
even though there have been recent efforts to extend them to other
lethal conditions, heart disease and Alzheimer’s in particular.
There is general agreement, moreover, that many terminally ill patients
come to hospice much too late, sometimes just a few days prior to
their deaths. Neither families nor physicians are always ready to
accept death. Advance directives have had at best a mixed record.
Despite considerable publicity for 25 years, probably no more than
15% of the population have such directives. Even worse, as a number
of studies have shown, patients having them are by no means guaranteed
to get what they want.29
Death is still denied, evaded, and, in the case of many clinicians,
fought to the end, bitter or otherwise, for patients. As for the
educational efforts, they have surely given the issues more salience
in medical schools, but what students learn in didactic courses
or seminars is often not reinforced by their experience during their
clinical years, where the technological imperative--to aggressively
use the available life-sustaining technologies--can still reign
supreme. A recent survey of medical textbooks found the subject
of death strikingly absent, with little guidance offered to physicians
in the care of dying patients.30
An important thread running through each of the struggling reform
efforts has been the ambivalence toward death symptomatic of the
schism I have tried to characterize; patient and physician confusion
about how best to understand and situate death in human life; an
unwillingness to accept the coming of death; and the persistence
of the turn to intensified technology in response to uncertainty
about death. The great improvement in, and the new prominence of,
palliative care is a powerful antidote to that pattern, representing
both a return to older traditions of care and a fresh, less troubled
response to death.
This record of mixed success has of late been met with a renewed
effort at analysis and education. The “Project on Death in
America” program of the Soros Foundation, and the “Last
Acts Campaign” of the Robert Wood Johnson Foundation, have
contributed most generously to that work. It is too early to tell
what this new round of initiatives, though most welcome, will achieve.
If at the heart of the problem is a profound schism within medicine
about the stance that should be taken toward death, then their success
is likely to remain limited.
A basic question must be asked: is death to be accepted or fought?
There is a conventional answer to that question, even if rarely
articulated in any precise way: (a) every effort possible to save
life should be undertaken, until that moment when (b) treatment
becomes futile, at which point (c) care should be switched from
a therapeutic to a palliative mode.
What’s wrong with that model? For all of its seeming reasonableness
it is beset by two confounding elements. One of them, as recent
debates have made clear, is the difficulty of determining when treatment
is truly futile.31 Constant
technological advances mean that there is almost always something
more that can be done for even the sickest patient, one more last,
desperate intervention. The other is the psychological naiveté
in thinking that physicians—not to mention patients and their
families—can suddenly, and at exactly the right moment,
switch from an interventionist to a palliative mode.32
That is often like attempting to stop large trains, which go a long
distance down the track before the brakes take hold.
Eliminating Death, Disease by Disease
The tacit message of the research imperative is that, if death itself
can not be eliminated--no one is so bold as to claim that--then
at least the diseases that cause death can be done away with; and
that amounts to the same thing. As William Haseltine, chairman and
chief executive officer of Human Genome Sciences, breathtakingly
put it, “Death is a series of preventable diseases.”33
From this perspective, the researcher is like a fine sharpshooter
who will pick off the enemy one by one: cancer, then heart disease,
then diabetes, then Alzheimer’s, and so on. The human genome
effort, the latest contender offering eventual cures for death,
will supposedly get to the genetic bottom of things, radically improving
the aim of the sharpshooter.34
I mentioned above that the “logic” of the research enterprise
seems to make death itself the enemy. I use the word “logic”
to suggest that, if it is the aim of research to eliminate all the
known causes of death, then it would seem that the ultimate enemy
must be death itself, the final outcome of that effort. Yet most
researchers and physicians do not see themselves as attempting to
eliminate death itself, even if they would like to see the causes
of death understood and overcome. They know that death is now, and
will remain, part of the human condition; medicine is not chasing
immortality. Even so, the struggle against the causes of death continues,
as if it must and will continue until those causes are eliminated.
Perhaps this tension, or contradiction, is best understood as an
expression of an ideal of research confronting a biological reality:
the spirit of the research enterprise is to eliminate the causes
of death, even as it is understood that death itself will not be
eliminated. It might be likened to the fight against poverty, where
it is understood that there will probably always be some poverty
but that the fight is valuable, even if there are some core realities
it may never overcome. We might, then, think of the struggle against
death as an ideal that may never be achieved, or a dream that may
never be realized. However we might best understand this phenomenon,
it has its effect at the clinical level.
But even if there is such a dream, why should this affect the care
of those who are dying, having passed beyond the limits of effective
help? For one thing, as already mentioned, it has turned out to
be very difficult, medically and psychologically, to find a bright
line (as a lawyer might put it) between living and dying. The increased
technological possibility of doing just a little bit more, and then
some little bit more again, to sustain life means that it’s
getting harder and harder to tell just where that line is. Moreover,
the thrust of the research drive against death is to turn death
itself into a contingent, accidental event. Why do people keep dying?
Listen to the now-common explanations: They die because they did
not take care of their health, or because they had genetically unhealthy
parents, or because their care was of a low quality, or because
the available care is inequitably distributed, or because this year’s
technologies don’t sufficiently sustain life (but not necessarily
next year’s), or because research has not yet (but will eventually)
find cures for those diseases currently killing us. No one just
dies anymore, and certainly not from something as vague as “old
age.” They die from specific causes, and that can be changed.
Death, in that sense, has been rendered contingent and accidental.
The Clinical Spillover
What difference does all of this make at the bedside for the clinician?
Such is the pervasive power of the research imperative (even of
a benign kind)--rooted in a vision of endless progress and permeating
modern (and particularly American) medicine--that it can easily
be understood to lead clinicians to think and act as if the death
of this patient at this time is accidental or a failure,
not inevitable. The feeling of guilt on the part of many clinicians
when a patient dies, even if they have done everything possible
to keep the patient alive, is perhaps one spillover effect of the
research stance toward death: maybe more could have been done and
even should have been done--if we had only known what it was. The
technological imperative is still another spillover effect, bespeaking
a belief that, understood narrowly, if technology is well used,
this patient need not die at this time; and, understood broadly,
that technological innovation is the royal road to cure.
In the United States, the research imperative to fight death stands
foursquare against fatalism, against giving up hope, and against
thinking that nature can not be brought to heel. Should we be surprised
that such a way of thinking influences clinical medicine as well,
introducing profound uncertainties about the appropriate stance
toward death? Can we really expect the various reform efforts in
clinical care to be as successful as they might so long as the research-induced
uncertainty about the inevitability of death is so powerful?
At this point two skeptical thoughts are sure to arise. One of them
is a point of logic: the biological inevitability of death does
not entail that death at any given point in life is equally inevitable.
It may be fixed in nature that we will die, but just how and when
is not at all determined. Death is possible at any time by any means,
coming faster or slower, brought about by one disease rather than
another. In that sense death is, then, contingent. It has no predetermined,
fixed time in a person’s life. Since this is true, progress would
seem possible simply by substituting later for earlier death, faster
for slower, peaceful for painful. Correct? Not quite. If “later”
is always assumed to be better, then the war against death admits
of no victory and the research imperative against it admits of no
limits. If, however, the wiser goal should be that a faster and
more peaceful death is better--admitting of potential success in
a way that an all-out struggle against death does not--then a more
useful research agenda is possible.
The second skeptical thought is more fundamental. Perhaps the clash
between the research imperative (eliminate death disease by disease),
and the clinical imperative (accept death as an unavoidable biological
reality), is inescapable and insoluble. Perhaps it is one of those
cases, of which life presents many, when we want incompatible goods
that admit of no happy reconciliation. Perhaps we just have to live
with the contradiction, conceding its force but remaining helpless
to get beyond it. Even though most of us can think of some elderly
people who have found a resolution of the conflict--working to stay
alive, yet ready at any moment to die--not everyone can find such
an accommodation; and it is all the rarer when facing a premature
death. We want to live but know we must die, an ancient and wrenching
clash.
Even if we are prepared to accept death for ourselves, it seems
wrong to accept the death of others: we lose something of great
value, they lose something of great value, and society loses something
of great value. Can we argue, then, that death should not be seen
as the ultimate evil without compromising the value of life itself,
which is what makes it possible for us to be at all, and which is
no less the foundation of the value of life for others? I believe
it foolish to think there is some
easy, or available, way out of this dilemma. I find a quotation
of the theologian Gilbert Meilaender (though not, I think, a theological
statement) to be helpful, even if not fully satisfactory: “We
can say death is no enemy at all, or we can say that death is the
ultimate enemy. Neither of these does justice to what I take to
be the truth: that death is an enemy because human life is a great
good, but that since continued life is not the highest good, death
cannot be the greatest evil.”35
Is a longer life necessarily a better life? A shorter life eliminates
the possibility of experiencing the goods of life that a longer life
might make possible. But on that view (assuming continued good health)
nothing less than an indefinitely continued life will suffice; there
would always be more goods to be had. But in other aspects of our
life, and human experience more generally, the fact that good things
end does not subtract from their value: poems end, music ends, pleasant
vacations end, good parties end, the beautiful sunset disappears.
Do not those experiences suggest that the length of a life, or of
anything of value in a life, does not determine its worth, nor entail
that its worth is diminished by ending? If finitude is not inherently evil,
then neither is a finite life span.
Ameliorating the Conflict
There may be some truth in that perspective, but surely it would
also be helpful if there were some ways to ameliorate the conflict.
If it is a conflict that enormously complicates and in some ways
even undermines the goal of better end-of-life care, the clinical
mission, then an effort at amelioration seems urgently needed. How
might we proceed? The clinical side of that conflict has undergone
nearly 30 years of analysis and reform efforts. One of its key findings
is that a peaceful death requires an acceptance of death by both
physician and patient. The acceptance may be affirming or grudging
or simply acquiescent, but it has to be there. Death just is and
must be given its due. This means that it is the research drive,
and the message that death may simply be an accident not to be accepted,
that must be confronted.
Several strategies can be suggested:
1. Focus research only on premature death. Research
ought not, even implicitly, or in its underlying
logic, have the eradication of death itself as its goal. There are
other goals no less important, including the relief of suffering
and the promotion of health (even if death will eventually come).
Not only is eradication of death an unattainable goal, it also promotes
the idea among the public and physicians that death represents a
failure of medicine, one that research will eventually overcome.
It is, however, reasonable for medicine to seek to reduce premature
death. The federal government now defines a “premature death”
as one that occurs before the age of 65. That standard should probably
now be changed, raised a few years, but what should not be changed
is the concept of a premature death. An implication of this strategy
is that, when the average age of death from a disease comes later
than the prematurity standard, there should be a reduction of (not
an elimination of) research funds to combat it; the money saved
should be switched to diseases where most deaths come before the
prematurity line. By this standard, and in light of the fact that
it is increasingly a disease of the elderly, the NIH cancer budget
could be reduced, not constantly expanded. Understood this way,
cancer remains an important research target, but one whose priority
would gradually lower over time, giving way to more pressing needs.
2. Give the “compression of morbidity” a research
status equivalent to that now given to saving and lengthening life.
The notion of a compression of morbidity--a shortening of the period
of poor health prior to death--has been around at least since the
time of the French philosophe Condorcet 200 years ago.
It seemed only a pipe dream. But in recent years evidence has begun
to accumulate that it might be achieved. The common adage of “longer
life, worse health” can now be falsified to some extent. The
new evidence indicates that, for those who have good health habits
and an adequate socioeconomic foundation to their lives, there can
be a significantly lessened chance of a premature death and an old
age burdened by illness and disability.36
It is not death that is the enemy, but a painful, impaired,
and unhealthy life before death. Research on health promotion and
disease prevention requires much greater financial support, as does
research designed to improve the quality of life within a finite
life span.
3. Persuade clinicians that the ideal of helping a patient
achieve a peaceful death is as important an ideal as that of averting
a patient’s death. I contended above that one clinical
spillover effect of the research war against death is a purveying
of the notion that death is an accidental, contingent biological
phenomenon. For the clinician that message has meant that the highest
duty is to struggle against death and that such a struggle need
not (with the help of research) be in vain. In that context, helping
patients achieve a peaceful death will always be seen as the lesser
ideal, what is to be done when the highest ideal—continuing
life—can not be achieved.
The two goals should be given equal value. In practice, this would
mean that, when critical illness strikes and death is on its way,
the physician would be as anxious that a patient might die a poor
death as he or she would be that the patient might die at all. The two ideals would
always be in tension with each other, rarely admitting of a wholly
comfortable resolution. This tension would help to weaken the influence
of the values inherent in the research imperative against death,
by giving it a meaningful competitor; and it would also help to
improve palliative care medicine and good patient care at the end
of life. Palliative care would be understood as aimed at all of
us, because we will all die, and not just for the losers, those
whom medicine could not save. And research on improving palliative
care should be given an increased budget; that goes without saying.
4. Redefine medical “progress.” The
crown jewel of medical progress is now most commonly understood
to be the conquest of lethal disease and an increase in life expectancy.
Hardly any triumph is more trumpeted than a declining mortality
rate, whether from heart disease, cancer, or AIDS. No doubt that
trumpeting will continue and, with premature deaths, it should.
But medical progress should increasingly be understood as the avoidance
of illness and disability, as the success of medicine in rehabilitating
those who have succumbed to disability, as the reduction in conditions
that do not kill but otherwise ruin lives (such as serious mental
illness), and as helping people better understand how to take care
of their own health. Death remains an enemy, but it is only one
item in a list of many enemies of life--and not in the long run
the most important.
Modern medicine, at least in its research aspiration, seems to have
thought that the best strategy in dealing with death is to make
it Public Enemy Number 1. It is not, at least not any longer in
developed countries, where average life expectancies are approaching
80. The enemy now is lives blighted by chronic illness and an inability
to function successfully. Death will always be with us, pushed around
a bit to be sure, with death from one disease being superseded by
death from some other disease. That can not and will not be changed.
But we can change the way people are cared for at the end of life
and we can significantly reduce the burden of illness. It is not,
after all, death that seems most feared by the public, and certainly
not in old age, but a life poorly lived. Something can be done about
that, and research has much to contribute.
Aging and Death
Death has, in aging, a twin, if not an identical twin, then one
sharing many traits: both have seemed inevitable, both are marked
by decline, and both have been feared. If aging has not, as I suggested
above, been perceived to be as terrible an evil as death, it has nonetheless
been considered bad enough to merit the laments of poets, writers,
ordinary people, and the medically inclined. For centuries, the
notion of conquering aging, or rendering its burdens less harsh,
has been a part of every culture’s reflection on human fate,
joining the struggle against aging with that of the struggle against
death. There is another linking characteristic: unless someone dies
a premature or accidental death, aging is now more than ever understood
to be the main biological gateway to death. With the decline in
infant and child mortality--and the reality of death beyond 65 for
a majority of people in developed countries--it is harder than ever
to cleanly distinguish between aging and death. It is thus difficult
to think about eliminating or ameliorating death without also thinking
about aging; or to think about improving old age without doing something
also about death.
The ancient world took aging to be a harsh, but unavoidable, reality,
simply a burden to be endured. The modern world has been more hopeful.
A softer view, going back to the Italian Renaissance, envisions
an old age marked by wisdom, a delight in the simple pleasures of
life, and an effort to soften its sharper edges. Still another picture,
ever more common, was suggested some years ago by Gerald J. Gruman,
one that joins the Enlightenment optimism of Condorcet to that of
modern individualism. The elderly ought to reject what Gruman called
“medical mortalism” in favor of a scientific attack
on aging and death.37 No less
important is a kind of living for oneself, a rejection of communal
notions of a self-sacrificial life in favor of personal creativity
and self-assertion. Specifically rejected are idle musings about
“central questions of meaning and value,” which are
endlessly “open for future resolution.” This is not
far from another look into the future, one that sees the scientific
conquest of aging and added years of youth as bringing “the
transformation of our society from a pattern of war and struggle
to an era of utopian peace…[allowing] adequate time to uncover
the secrets of the natural universe…that could serve as the
foundation for a civilization of never-ending progress.”38
Aging as “Disease”
But where does aging stand as an object of scientific research?
Is it a disease like other physical pathologies or is it, like death,
a “natural” biological inevitability? The strongest
case for its inevitability is that, unlike other pathologies, none
of which is inevitable in every person, all humans are subject to
it, as is every other organic creature. Aging is predictable, that
is, in a way that nothing else ordinarily classified as a disease
is. We may, or may not, get cancer or heart disease or diabetes,
but we will surely get old and die. At the same time it is evident
that much of the decline associated with age, particularly the increase
in chronic disease and disability, is accessible to cure or amelioration.
Even many of the other biological indices of aging--decline of hearing,
rise of blood pressure, bone mineral loss, reduced muscle mass,
failing eyesight, decreased lung function--are open to compensatory
intervention though not at present to complete reversal.
In short, if aging is in many respects something other than disease,
it has enough of the characteristics of disease to invite, and respond
to, medical tinkering and improvement. There is for that reason
not much gain in trying to classify it as “natural,”
if by that is meant that nothing should be done about it. On the
contrary, it can be—and has in fact been—treated effectively
as if it is a disease, not by combating old age as such
but by treating the undesirable conditions associated with it.39
That route is one possibility, while the other is to take on the
biological process of aging itself as a research target. Timothy
Murphy has suggested two pertinent questions here. Instead of asking
“is aging a disease?” we should ask, first, is “aging
objectionable such that its prevention and cure ought to be sought?”
Second, can a convincing argument be developed in favor of a “cure”
for aging to show that “human significance warrants [it] and
possibly seeks such a cure and that the social costs of curing aging
are morally acceptable?”40
Is aging “objectionable”? Well, it is hard to find many
people who welcome it, at least the advanced phases of aging, where
the decline is steep and the disabilities crippling. But does the
fact that we don’t like it show that it is inherently objectionable,
some kind of offense against human dignity? That is a harder case
to make, especially since many ways have been found by a variety
of cultures to allow the aged to accept their aging, and to treat
the aged with dignity. It is a trivialization of the idea of dignity
to make it dependent upon the state of our bodies or minds. If that
is done, then dignity becomes nothing but an accident of biology,
with some people lucky to have it and others not. That is a corruption
of the idea of human dignity, the essence of which is not to reduce
value of people to a set of acceptable characteristics, such as
the proper race, or sex, social class, or bodily traits, but to
ascribe dignity to them apart from their individual characteristics.
There is another way of looking at aging. While it is possible to
situate the place of death within evolution, and to see its value
in endlessly renewing human vigor and possibility, that is not so
easy to do with aging. It seems to serve no useful biological function
other than as a prelude to death; and for just that reason might
itself be understood as part of the same biological process. Can
aging successfully be distinguished from the decline that brings
death such that the former can sensibly be resisted while not equally
resisting death? We might then agree that, while aging is not incompatible
with human dignity, it is objectionable enough to merit serious
scientific attention. The collective “we” of evolution
may need it together with its twin, death, but the “we”
of living cultures could do with considerably fewer of its burdens
and downward slopes.
Aging and Longevity
Does that mean we need to find a “cure” to bring that
about? An immediate difficulty here is that it is not clear what
a “cure” of aging might look like. If death is the final
outcome of aging for all biological creatures, does aging begin at
birth or in adulthood? How that question is answered scientifically
might then lead us to ask whether a cure would aim at perpetual
youth or perpetual adulthood (and, if the latter, young or old adulthood).
Or might we envision a slowing of the aging process to a snail’s
pace, not exactly a clean cure but an indefinite forestalling of
the worst of the present consequences of aging and its final outcome,
death?
There are three meaningful possibilities for the cure or amelioration
of aging, and I will put them in categories (to be used also in
the next chapter in another context).
a. Normalizing life expectancy. The aim here is
to bring everyone beyond what would be considered a premature
death to what is now the average life expectancy in the most developed
countries of the world, e.g., Japan, which is slightly over 80 for
women (and to bring men up a few years to a life expectancy equal to
that of women). This trend is already underway (though not in all poor
countries), driven by improved public health standards, better education,
housing, diets, and economic status. Normalization must, however,
be accompanied by improved standards in the quality of life, and
much of that can be accomplished through research and technological
innovation. The cure or amelioration of osteoporosis, arthritis,
Alzheimer’s disease and other dementias, and improved methods
of dealing with loss of hearing and sight, would be high on any
list of valuable research goals.
My characterization of normalization retains the idea of a premature
death. There are at least four ways of defining a premature death,
each of them arbitrary to a considerable degree. There is a death
that comes earlier than the average life expectancy in a population;
that might be called the statistical definition. There is the cultural
definition, which is that age when, in general, people are classified
as young or old for various social or political purposes. The age
of 65 has, since the Bismarckian welfare programs of the late 19th
century in Germany, been widely used as the dividing line. Then
there is what I think of as the psychological meaning of a premature
death, which might be described as that age at which people begin
thinking of themselves as old. Finally, there is what can be called
the biographical definition, that is, that stage in life when the
main tasks and goals of a life have been accomplished: work, procreation,
parenthood, education, travel, and whatever else people have liked
to achieve.
Each of these definitions is arbitrary in the sense that each is,
and always will be, variable and moving. The statistical definition
will change as average life expectancy increases (in most places) or
decreases (as in Russia and many sub-Saharan African countries). The cultural
definition will move as more people go into old age in good health,
are capable of remaining active even if not employed, and are seen
as still part of the productive, non-dependent segment of society.
The psychological definition will be influenced by the cultural
of course, but not entirely; people do vary in their own sense of
age and aging. And the biographical definition will be influenced
by varying life goals.
Despite the variables in each of the definitions, they remain useful
for establishing social programs, for creating conventions and expectations
of behavior at different ages (it is, for many, a relief to become
old, officially relieving them of many earlier responsibilities),
and for helping to set some targets for biomedical research. Sixty-five
now seems too young an age for a death to count as other than premature,
while age 70 has become increasingly plausible; and the cutoff age
may go up further in the future. There can well be a legitimate
gap between what is culturally thought of as a premature death and
the aim of bringing everyone up to the statistical average. My rationale
for the distinction is that it is possible for most people to have
lived a full and fruitful biographical life prior to age 70, and
thus for their loss to be felt less keenly than that of a much younger person.
We may also have different reasons (employment possibilities, for
example) for setting the eligibility age of various social programs,
such as Medicare or special housing, lower than average life expectancy.
Much of the research agenda is already in place for the normalizing
of aging, consisting of what is already known to improve health
and to avoid premature death. It is a mixture of improved public
health programs, decent medical care (with an orientation to health
promotion programs designed to change behavior, and primary care),
healthy life styles, good education, jobs, housing, and a welfare
safety net. Beyond that, research on chronic diseases that lead
to premature death, that create disability in old age, and that
ruin or significantly diminish the quality of life, is appropriate.
It is, moreover, appropriate that government support such research,
contributing as it does to the overall health and well-being of
the population as a whole. This will not be true of the next category.
b. Maximizing life expectancy. The purpose of research
efforts to maximize life expectancy would be to bring everyone up
to what are now the historically longest known human life spans,
between 110 and 122 years. If some few people can live that long
(and want to), why not make it theoretically possible for everyone
to get there? There is a certain plausibility to that idea, if only
because the course of evolution has shown that species have acquired
very different life spans; life spans are biologically malleable.
Recent research has, moreover, begun to suggest that there may be
no fixed maximum life expectancy, but not enough is yet known to
bring any certainty to such a conclusion.41
Earlier estimates have, at the least, been proved wrong again and
again in recent years, often because they extrapolated from past
trends in causes of death or age at death, both of which have been
changing.
The death of a French woman at 122 in the late 1990s and the regularity
with which people living to between 105 and 110 are now reported
cannot fail to catch the eye. Prior to 1950 centenarians were rare,
and there may never have been any before 1800. Since 1950, however,
their numbers have doubled every ten years in Western Europe and
Japan (with women outnumbering men by four to one, and by even more
at higher ages); and those centenarians now alive live two
years longer on average than those of a few decades ago.42
Nonetheless, while the trend is strongly in the direction of more
people who are very old, S.J. Olshansky has presented some strong
data indicating how hard it will be to move everyone far along in
that direction. Working with mortality trends in France and the
United States, he shows that raising average life expectancy much
further would take huge reductions in mortality rates at every age
from present levels. To move, for instance, from
an average life expectancy of 77 in France (combining male and female)
to 80 would require an overall mortality decline of 23%; and it
would take a decline of 52% for all ages to move the average to
age 85, and 74% for the average to move up to age 90. Since mortality
rates are already low for younger ages, most of the mortality decline
would have to take place among those over age 50.43
While that is not theoretically impossible, it looks implausible
as a practical matter.
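A rough way to see why the required reductions are so large (a back-of-the-envelope sketch in standard life-table notation, assuming a Gompertz pattern of adult mortality; it is not Olshansky’s own calculation): life expectancy at birth is

$$
e_0 = \int_0^{\infty} S(a)\,da, \qquad S(a) = \exp\!\left(-\int_0^{a} \mu(x)\,dx\right),
$$

where $\mu(x)$ is the death rate at age $x$ and $S(a)$ the probability of surviving to age $a$. A uniform decline in mortality by a fraction $r$ replaces $\mu(x)$ with $(1-r)\,\mu(x)$, turning the survival curve into $S(a)^{1-r}$. If adult death rates follow the Gompertz pattern $\mu(x) \approx A e^{bx}$, roughly doubling every seven to eight years ($b \approx 0.1$), that reduction is equivalent to shifting the adult mortality curve to older ages by

$$
\delta \approx \frac{-\ln(1-r)}{b} \ \text{years},
$$

so life expectancy rises by roughly $\delta$. On these assumptions, declines of 23%, 52%, and 74% yield gains of roughly 3, 8, and 13 years, of the same order as the figures just cited; and the required decline climbs steeply toward 100% as the target life expectancy rises, since mortality at younger ages is already too low to contribute much.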
The difficulty of doing that can be envisioned by recalling that
a cure for cancer, the second greatest killer in the United States,
would only bring about a 1.5% overall decline in mortality. In response
to the contention that lifestyle modifications could bring about
changes of the necessary magnitude, a number of studies have suggested
that mortality would not significantly change if the entire population
lived in an ideally healthy way.44
J. W. Vaupel (never citing Olshansky and admitting his own calculations
are rough) is more optimistic. He holds that if mortality rates
in France continue to decline at the pace that has prevailed in
the past, most people can expect to live to 90 in the
not too distant future.45 Whatever
the final truth here, which will take decades to appear, there is
considerable good sense in Kirkwood’s conclusion that “the
record breakers [for individual life span] are important…[but]
the major focus for research must be to address the main body of
the life span distribution, i.e., the general population, and to
improve knowledge of the causes of age-associated morbidity and
impaired quality of life.”46
c. Optimizing life expectancy. Two versions of
optimization have been talked about over the years. One of them
is bodily immortality, and that is the most ancient. The other would
be to move the average life expectancy to, say, more than 150 years.
(Since immortality is not seriously proposed by many, and no clear
scientific theory even exists about how that might be achieved,
I will put that aside here.) As the previous analysis of maximizing
life expectancy suggested, it will be very hard, even if not theoretically
impossible, to get average life expectancy to 85, much less 100.
Most commentators seem able to envision incremental gains within
the limits of present biological and medical knowledge, but agree
as well that only some radical scientific breakthroughs could get
us to and beyond 150. The principal obstacle appears to be the multi-factorial
nature of the aging process; no single magic bullet is likely to
do the job. All of the human organs, including the brain, would
have to benefit simultaneously from the breakthrough for the results
to be anywhere near desirable.
As someone who has been following the scientific developments in
understanding aging for over 30 years, I find it remarkable how little
practical progress seems to have been made, even though there has
been a gain in knowledge about the aging process. Earlier theories
have been rejected or called into question, including the notion
that there is a fixed limit to the replication of cells over a life
span, once thought to be 50 divisions, or that evolution necessarily
requires unalterable programmed death, or that genes age. At the
same time, recent research on telomeres--stretches of DNA and the
proteins that bind them and which protect the ends of chromosomes--has
shown that they become shorter each time a cell divides, until
the cell eventually dies. That reconfirms the notion of a division limit to cells which,
if better understood, could open the way to means of holding off
the accumulated, progressive decline of the cells. It is an extension
of the long-held view that the aging process is one of
gradual breakdown in the genetic mechanisms that preserve life;
and the trick is to find a way to stop or slow that breakdown. In this view,
aging is a failure of biological adaptation, which Michael Rose
says is a case of “natural selection abandoning you.”47
Research on telomeres, nutrition, free radicals, anti-oxidants,
apoptosis (programmed cell death), hormonal regulation, cell rejuvenation,
and ways of repairing DNA is well underway. Alternatively, the
search is on for those positive genetic factors that protect life,
that have helped individuals flourish in earlier years of life,
and which might be enhanced to continue doing so.48
In sum, the genetic approach to life span extension aims to find
the basic underlying mechanisms of aging, still poorly understood,
and then to discover ways of changing and manipulating them. If
there is to be a radical change in life expectancy, this is currently
the only seriously envisaged way of getting there.49
A medical approach, by contrast, focuses on the various disabilities
and diseases that bring poor health, and eventually death, to the
elderly, and has been (as noted above) the main approach to the
incrementalism of the maximizing strategy, far more limited in its
ultimate possibilities.
A Much Longer Life: Should We Want It? Do We Need It? Can
We Stand It?
Whether a form of rationalization or a higher insight, it is striking
that most of the imaginative literature that has dealt with life
extension over the years has reached a negative conclusion. Whether
because of boredom or the possibility of debilitation, a very extended
life, whether lived in youth or old age, has been found wanting. Even so, the idea
of a fountain of youth or a more up-to-date vision of a long life
in good, vigorous health, hangs on. There are many people--and all
of us know some--who would like to live indefinitely; and almost
everyone, if given a choice, and not in utter misery, would want
to live at least one more day, and one more after that. Even if
I can see some point in the evolutionary benefits of death and a
change of the generations, that is a terribly abstract way of looking
at my own life, a life that is doing well and not too interested
in making its evolutionary contribution in the near future. A longer
life beckons.
I am not alone. The Alliance for Aging Research reported in 2000
that it had discovered 25 firms it labeled “gero-techs”
because of their focus on applied aging research.50
The Alliance itself sees gerotechnology as a viable market possibility
and as a way--through improved health--of avoiding the rationing
of health care to the elderly in the future. In the
21st century, its Director, Daniel Perry, says that, through gerotechnology,
“the drive to discover the means to produce youthful health
and vitality is no less than a matter of national necessity.”51
It is not, then, just that some people want a longer and healthy
life. Research on aging is now being advanced as a way of avoiding
the economic and other problems of aging societies.
The language of “national necessity” seems to me a variant
way of speaking of a research imperative. An immediate problem comes
to mind. Is the aim to improve the health of the elderly within
the normalization model, that is, working to get everyone up to
the present average life expectancy? If so, that would be consistent
with a goal of compressing morbidity, which appears increasingly feasible. Or
is the aim (of some of those gero-tech firms, for instance) that
of pushing forward the length of life, moving into the maximizing
or optimizing range? In that case, it may be much harder to continue
improving morbidity. Most (though not all) of those over age 100
require significant help with what geriatricians call activities
of daily living, suffer from various chronic conditions, are usually
frail, and will have some degree of dementia; and it all gets worse
after 105. It may well be the case that a reduction in morbidity
can keep pace with a reduction in mortality, but most likely only
if the resulting gains in life expectancy come slowly.
Yet the health problems and uncertainties connected with increased
longevity are hardly the only ones that need to be thought about. What
are the other social consequences of efforts in that direction?
That is a difficult question to answer, if only because so much
must be speculative and because there are different possible directions
(and mixtures of directions) in which future developments could
go. We are already on one path, that of normalization, aiming to
improve the quality of life of the elderly, not directly trying
to lengthen the average life expectancy but getting, without trying,
a gradual movement in that direction.
There is already considerable knowledge about that trend. In the
United States it is known that, within 20-30 years, when the baby
boom generation retires in large numbers and the proportion
of elderly rises from 13% to 18%, there will be serious problems
sustaining the Medicare program at its present level.52
Comparable, and perhaps even worse, problems will be faced in other
countries (Germany, for example, expects to have nearly 24% of its
population over age 65). The correlative decline in the number of
young people to pay for the health care of the old (the so-called
dependency ratio issue) will only exacerbate the situation, as will
the continued introduction of new, often beneficial, but usually
more expensive technologies; and public demand for ever-better medicine
will not be much help either.
Something will have to give here, and there is already the expectation
of raising the age of eligibility for Medicare from 65 to
67, and further moves in that direction can be expected. The promise
of reduced disability for the aged in the years to come will be
of great help here, but even so some unpleasant policies will probably
have to be pursued: means-testing for the elderly rather than full
and free coverage; rationing of health care, overt or hidden; constraints
on health care providers and hospitals; and constant efforts to
wring greater efficiency out of the system. Some universal system
of health care, not yet on the horizon (and which I support), might
lead to a generally more rational and equitable system, but it would
increase the governmental cost of health care and might not directly
help solve the elderly health care problem. There are some optimistic
voices to be found, but not too many. The Alliance for Aging Research
group, noted above, believes that salvation lies in improved technologies
to bring better health to the elderly; and others believe that some
combination of greater efficiency, more choice on the part of people,
and savings accounts will make it possible to weather the baby boom
era—an era that will eventually end anyway, somewhere in the
vicinity of 2050 or so, bringing a more affordable situation.
Once a move is made toward maximization and beyond, then a larger
range of problems begins to emerge, and much would depend upon the
kind of age extension that research might produce. A longer life
with a concomitant gain in vigor would be one possibility. Another
would be a longer life but at present levels of vigor. Still another
would be a longer life that simply stretched the length of the decline.
And still another would be a longer life with mixed effects, mental
and physical, some good and some bad.
Each of these possibilities would raise its own set of problems,
and I will not try to imagine what they might be. Yet whatever they
might be, even small changes toward any of them would have strong
general effects. Included would be the impact on other age groups
and how their lives might be altered: paying for the extended years,
jockeying for jobs and promotion and positions of leadership, negatively
influencing retirement and social security. How would it all be
paid for, and would the elderly be forced to work more years than
they might like? There would also be a great impact on childbearing
and child-rearing, as different definitions of youth and middle-age
emerged and as the job market for women of childbearing age changed
(and what would that age come to be?); and an impact on social status
and community respect, as the elderly came to form a larger and larger portion
of the population (and with that shift the possibility of intergenerational
conflict). Everything, in a word, would have to be changed, depending
on the extent of the average age increase and the various forms
it might take.
Suffice it to say that a society with a much larger proportion of
elderly would be a different kind of society, perhaps good, perhaps
bad; much would depend upon the strategies employed to cope with
all the needed changes and how much time was necessary to put them
in place. If by chance some striking genetic breakthrough should
take place, allowing lives of 150 years, the impact would be all
the more dramatic, and the necessary changes in social policy all
the more radical.
Is There a Societal Need to Increase Average Life Expectancy?
Do we need as societies a breakthrough to the possibilities of maximizing
and optimizing average life expectancy—and thus a research
drive to achieve it? It is very hard to find any serious argument
that there is a social need for that development, as if
future societies will be inadequate and defective unless everyone
lives longer lives. None of our current social problems--education,
jobs, national defense, environmental protection, etc.--can be blamed
on too low an average life expectancy, or would be solved by longer
lives. Many would be made worse. At most, many individuals have
said they would like to see that happen for themselves, and would
probably be willing to pay for it. But how much of the total
direct and indirect costs of living out extended life spans would they be willing to bear? Ought
we to want it for ourselves?
I say “ought” to force us to ask just
what we think we would gain beyond a life that ended on average
at 80. This question should give everyone pause since no one could
know whether he or she would in fact fare well, whether the kind
of extended life span would be one they found acceptable, and what
they would do if it did not turn out as planned. We might agree
that there are many unfortunate features of the present situation
and most of us can think of reasons why we would like more years.
But since there could be no guarantee that more years would give
us a happier, more satisfying life, we might be no better off at
all, and maybe worse off. No clear correlation between a satisfying
life (assuming good health and the avoidance of a premature death)
and length of life has ever been demonstrated. How many people have
any of us known who died at age 80, or 90, but for whom we felt
sorrow because of all the possibilities that lay before them? I
have been to many funerals of very old people and I have yet to
hear anyone lament the loss of future possibilities, even as the mourners
may have been sorry to lose the deceased as friends or relatives.
Some of us may be prepared to take our chances. As a policy matter,
what stance should be taken toward research efforts to deliberately
find ways of extending average life expectancy and individual life
spans? There are three possibilities: to support such research at
the public, government level and encourage the private sector to
pursue it; to refuse public grants for such research but permit
it in the private sector; to refuse public grants and to use considerable
social and economic pressure to discourage it at the private level
(I ignore here the possibility of banning such research, which is
neither likely nor easily possible).
Unless someone can come up with a plausible case that the nation
needs everyone to live much longer, and longer than the present
steady gain of normalization will bring, there is no reason whatever
for government-supported research aimed at maximizing or optimizing
life spans. Longer lives may in any case come about as an accidental
by-product of efforts to improve the elderly’s quality of
life; but there is no reason to court that possibility directly
with targeted research. Nor, for the same reasons, is there any reason to encourage the
private sector to pursue it.
Yet that sector will undoubtedly do so if promising leads open up,
and if it believes a profit can be made. Should that happen, there
would be every reason to put moral, political, and social pressure
on those companies not to move on with the research unless they took part in
a major national effort to work through in advance the
likely problems that success might bring. The matter would
be important enough, the implications grave enough, that it would
be folly to wander in with no forethought or strategies in place
to deal with the economic and social consequences, many of which
can be realistically imagined. To drop a new and far-reaching technology
on our society, or any society, simply because some people will
buy it would be irresponsible. It would instead require the fullest
airing over a decent period of time and in a systematically organized
fashion. The public could then decide what they were likely to see
happen and be in a position to make a considered judgment about
a collective response.
There is no doubt also that a private-sector, age-extending, anti-aging
product would be expensive (most new pharmaceuticals are, and would
not otherwise be developed) and probably not available to everyone
at first, and possibly never. As with many expensive new technologies,
it is probably reasonable to say that such a product should not be denied
to everyone simply because not everyone could afford it.53
But it would also be an insufficient response to say that, since
there will undoubtedly be disagreement on the matter, research efforts
to extend life expectancy to some optimizing level should simply
be left to the market and private choice as a way of bypassing
some (unlikely) collective community consensus. However that worked
out, it is not difficult to imagine the numerous problems that would
arise for everyone if some had it and some did not (of which inequity
might be the least of them). Would different social security, retirement,
and job arrangements have to be devised for them to live side-by-side
with those who did not choose to take the product? What responsibility
would they bear for the consequences of their choice--a total personal
responsibility, for better or worse, or would some social safety
net be available to help them (paid for by those who did not choose
to go that way)? Those are questions a pure market approach cannot
answer, but a failure to answer them would put at risk not only those
who chose to live longer lives but the rest of us as well.
The question of research deliberately aimed at extending average
life expectancy, and research aimed at changing the course of aging,
bears directly on the goals of medicine, and the use of medical
research to go beyond the traditional aim of preserving and restoring
health. I have argued that death itself is not an appropriate medical
target, and that there is no social need to greatly extend life
expectancy. But how might we think more broadly about research aiming
not only at health as traditionally understood but also at enhancing
human nature and human characteristics?
_________________
ENDNOTES
- Renee C. Fox, "Experiment
Perilous: Forty-Five Years as a Participant Observer of Patient-Oriented
Clinical Research," Perspectives in Biology and Medicine
39 (1996): 206-226, 210.
- Ian Gallagher and Michael
Harlow, "Health Chiefs? Yes to Human Clones," International
Express, 1-7 August 2000, 1 & 10, 10.
- W. French Anderson, "Uses
and Abuses of Human Gene Therapy," Human Gene Therapy
3 (1992): 1-2, 1.
- Ronald Munson and Lawrence
H. Davis, "Germ-Line Gene Therapy and the Medical Imperative,"
Kennedy Institute of Ethics Journal 2 (1992): 137-158,
137.
- ---
- Glenn McGee and Arthur
Caplan, “The Ethics and Politics of Small Sacrifices in
Stem Cell Research,” Kennedy Institute of Ethics Journal
9 (1999): 151-158,152.
- Report of the Human
Embryo Research Panel (Bethesda, Md.: National Institutes
of Health, 1994), 44-45.
- James F. Childress, "Metaphor
and Analogy," in Encyclopedia of Bioethics, Revised
Edition, ed. Warren Thomas Reich (New York: Simon and Schuster
Macmillan, 1995), 1765-1773.
- George Annas, "Questing
for Grails: Duplicity, Betrayal and Self-Deception in Postmodern
Medical Research," Journal of Contemporary Health Law
Policy 12 (1996): 297-324.
- George Annas, "Reforming
the Debate on Health Care: Reform by Replacing our Metaphors,"
The New England Journal of Medicine 332 (1995): 744-747.
- Susan Sontag, AIDS
and its Metaphors (New York: Farrar, Straus, and Giroux,
1989), 95.
- Meilaender, “The
Point of a Ban.”
- Onora O'Neill, "Duty
and Obligation," in Encyclopedia of Ethics, ed.
Lawrence J. Becker and Charlotte B. Becker (New York: Garland
Publishing, 1992); Richard B. Brandt in Encyclopedia of Ethics,
ed. Lawrence J. Becker and Charlotte B. Becker (New York: Garland
Publishing, 1992), 278; and Richard B. Brandt, "The Concepts
of Obligation and Duty," Mind 73 (1964): 374-393.
- Hans Jonas, "Philosophical
Reflections on Experimenting with Human Subjects," Daedalus
98 (1969): 219-247, 245.
- Hans Jonas, "Philosophical
Reflections on Experimenting with Human Subjects," in Philosophical
Essays: From Ancient Creed to Technological Man, ed. Hans
Jonas (Englewood Cliffs: Prentice-Hall, 1974), 105-131, 117.
- See also Meilaender,
“The Point of a Ban.”
- President’s Commission
for the Study of Ethical Problems in Medicine and Biomedical and
Behavioral Research, Securing Access to Health Care,
Vol. 1 (Washington, D.C.: U.S. Government Printing Office, 1983),
22-23.
- Norman Daniels, “Justice,
Health, and Healthcare,” The American Journal of Bioethics
1 ( 2001): 3.
- Ibid. See also Ronald
Bayer, Arthur Caplan, and Norman Daniels, eds., In Search
of Equity (New York: Plenum Press, 1983); and “European
Issue: Solidarity in Health Care,” The Journal of Medicine
and Philosophy 17 (1992): 367-477.
- Ronald Puccetti, "The
Conquest of Death," Monist 59 (1976): 249-263.
- Annette T. Carron, Joanne
Lynn and Patrick Keaney, "End-of-life care in medical textbooks,"
Annals of Internal Medicine 130 (1999): 82-86.
- Susan Sontag, Illness
as Metaphor (New York: Farrar, Straus, and Giroux, 1977),
8.
- Lewis Thomas, The
Lives of a Cell: Notes of a Biology Watcher (New York: Penguin
Books, 1978).
- Bernard Williams, Problems
of the Self (Cambridge: Cambridge University Press, 1976), 94-95;
and Eugene Fontinell, Self, God, and Immortality: A Jamesian
Investigation (New York: Fordham University Press, 2000);
Chapters 7-8.
- Hans Jonas, "The
Burden and Blessing of Mortality," Hastings Center Report
22, no. 1 (1992): 34-40, 37.
- Daniel Callahan, The
Troubled Dream of Life: In Search of a Peaceful Death (New
York: Simon & Schuster, 1993).
- Darrel W. Amundsen,
"The Physician's Obligation to Prolong Life: A Medical Duty
Without Classical Roots," Hastings Center Report
8, no. 4 (1978): 23-30.
- Philippe Aries, The
Hour of Our Death, trans. Helen Weaver (New York: Knopf,
1981).
- E. J. Larson and T.
A. Eaton, "The Limits of Advanced Directives: A History and
Reassessment of the Patient Self-Determination Act," Wake
Forest Law Review 32 (1997): 349-393.
- Carron, Lynn and Keaney,
"End-of-life care in medical textbooks."
- L. Scheiderman and N.
Jecker, Wrong Medicine: Doctors, Patients, and Futile Treatment
(Baltimore: Johns Hopkins Press, 1995).
- Callahan, The Troubled
Dream of Life.
- Quoted in Lawrence M.
Fisher, "The Race to Cash in on the Genetic Code," The
New York Times, 29 August 1999, sec. 3, p. 1.
- William B. Schwartz,
Life Without Disease: The Pursuit of Medical Utopia (Berkeley:
University of California Press, 1998).
- Personal communication.
- Anthony J. Vita et al.,
"Aging, Health Risk, and Cumulative Disability," The
New England Journal of Medicine 338 (1998): 1035-1041.
- Gerald J. Gruman, "Cultural
Origins of Present-Day 'Ageism': The Modernization of the Life
Cycle," in Aging and the Elderly: Humanistic Perspectives
in Gerontology, ed. Stuart F. Spicker et al. (Atlantic Highlands,
N.J.: Humanities Press, 1978), 359-387.
- Robert Prehoda, Extended
Youth: The Promise of Gerontology (New York: Putnam, 1968),
254.
- Arthur L. Caplan, "The
Unnaturalness of Aging-- A Sickness Unto Death?," in Concepts
in Health and Disease, ed. Arthur L. Caplan, H. Tristram
Engelhardt and James McCarthey (Reading, Mass.: Addison-Wesley,
1981), 725-737; and Daniel Callahan, "Aging and the Ends
of Medicine," in Biomedical Ethics: An Anglo-American
Dialogue, ed. Daniel Callahan and G. R. Dunstan (New York:
New York Academy of Sciences, 1988), 125-132.
- Timothy F. Murphy, "A
Cure for Aging?" The Journal of Medicine and Philosophy
11 (1986): 237-255.
- T. B. L. Kirkwood, "Is
There a Limit to the Human Life Span?," in Longevity:
To the Limit and Beyond, ed. Jean-Marie Robine, James W.
Vaupel and Bernard Jeune (Berlin, New York: Springer,
1997), 69-76; and Ali Ahmed and Trygve Tollefsbol, “Telomeres
and Telomerase: Basic Science Implications for Aging,” Journal
of the American Geriatrics Society 49 (2000): 1105-1109.
- Shiro Horiuchi, “Greater
Lifetime Expectations,” Nature 405 (2000): 744-745.
- S. J. Olshansky, "Practical
Limits to Life Expectancy in France"; in Longevity: To
the Limit and Beyond, 1-10.
- Olshansky, "Practical
Limits to Life Expectancy in France."; and Rogers, Epstein,
and Muldoon.
- James W. Vaupel, "The
Average French Baby May Live 99 or 100 Years," in Longevity:
To the Limit and Beyond, 11-27.
- Kirkwood, "Is There
a Limit to the Human Life Span?" 75.
- Michael R. Rose, "Aging
as a Target for Genetic Engineering," in Engineering
the Human Germline, ed. Gregory Stock and John Campbell (New
York: Oxford University Press, 2000), 54.
- James W. Vaupel et al.,
"Biodemographic Trajectories of Longevity," Science
280 (1998): 855-860; and James R. Carey and Debra S. Judge,
“Life Span Extension in Humans is Self-Reinforcing: A General
Theory of Longevity,” Population and Development Review
27 ( 2001): 411-436.
- E. Timmer et al., Variability
of the Duration of Life of Living Creatures (Amsterdam: IOS
Press, 2000), 161-191.
- Daniel Perry, "The
Rise of the Gero-Techs," Genetic Engineering News 20 (2000):
57-58.
- Ibid., 57.
- See Daniel Callahan,
"Aging and the Ends of Medicine," in Biomedical
Ethics: An Anglo-American Dialogue, ed. Daniel Callahan and
G. R. Dunstan (New York: New York Academy of Sciences, 1988),
125-132; Victor R. Fuchs, "Medicare Reform: The Larger Picture,"
Journal of Economic Perspectives 14 (2000): 57-70; David
M. Cutler, "Walking the Tightrope on Medicare Reform,"
Journal of Economic Perspectives 14 (2000): 45-56; and
Mark McClellan, "Medicare Reform: Fundamental Problems, Incremental
Steps," Journal of Economic Perspectives 14 (2000):
21-44.
- John Harris, "Intimations
of Mortality," Science 288 (2000): 59.