
High hopes and automatic escalators: a critique of some new arguments in bioethics
S Holm,1 T Takala2

1 Cardiff Law School, Cardiff, UK; Section for Medical Ethics, University of Oslo, Norway
2 Centre for Social Ethics and Policy, School of Law, University of Manchester, Manchester, UK

Correspondence to: Prof S Holm, Cardiff Law School, Cardiff University, PO Box 427, Cardiff, UK; holms{at}cardiff.ac.uk

Abstract

Two protechnology arguments, the “hopeful principle” and the “automatic escalator”, often used in bioethics, are identified and critically analysed in this paper. It is shown that the hopeful principle is closely related to the problematic precautionary principle, and the automatic escalator argument has close affinities to the often criticised empirical slippery slope argument. The hopeful principle is shown to be really hopeless as an argument, and automatic escalator arguments often lead nowhere when critically analysed. These arguments should therefore only be used with great caution.


Although only a small number of bioethicists admit to being utilitarians or holding other strict consequentialist views, most bioethicists hold that the consequences of our actions are, to some degree, relevant to their ethical assessment. The difficulties in foreseeing future developments and their implications, however, complicate the analysis. The uncertainty of future developments has led many people working in the field of bioethics to use arguments that are logically problematic in their critical work on the introduction of new technologies. With the emergence of environmental ethics came the precautionary principle, a principle suggesting that if some action can have bad consequences we should not do it. Since then, the principle has been applied in other fields of biomedical ethics too. The so-called slippery slope argument, in its many forms, is used in similar situations. According to this argument, by adopting a practice A (eg, allowing saviour siblings) we will start a slide towards another practice B (eg, allowing designer babies), and whereas A may be morally acceptable, B is undeniably bad. In the stronger form of this argument the connection between the two practices is conceptual or logical; in another, more frequently used version, the connection is empirical.

People who have been more optimistic about the possibilities of the new technologies have often criticised the precautionary principle and the slippery slope argument for their logical and empirical problems. When some of the arguments in favour of modern biotechnologies are studied more closely, however, they seem surprisingly familiar. We will look at two types of protechnology argument in detail, showing how science enthusiasts, when trying to make their case, sometimes use the hopeful principle, which in its structure comes close to the precautionary principle, and at other times resort to the automatic escalator argument, which is essentially a reverse version of the empirical slippery slope argument.

FROM PRECAUTION TO HOPE

The precautionary principle is sometimes used to ban practices believed to lead to harmful consequences. The structure of this argument can be described in the following way:

P1. We know that A (eg, global warming) is bad (or extremely harmful) for the environment.

P2. We suspect that x (eg, the emission of greenhouse gases) causes A, but this cannot, for the time being, be scientifically proven.

C1. We conclude that, in the name of precaution, we ought to ban x, even though it may turn out that it is not the cause of A.1

When critically evaluated, it is obvious that the argument is problematic. Perhaps the criticism most often mentioned against the precautionary principle is that it can be used to ban almost anything. If a suspicion of a causal connection is indeed enough to make us choose the precautionary option, we may end up doing nothing. Although less often questioned, there is also the issue of whether the first premise is true and if so, in what sense. That is, do we really know what the long-term consequences of global warming will be? What goods do we prevent from being actualised with our attitude of precaution towards a particular course of action x? Is taking on this moral cost (ie, the loss of a good) justified in the absence of any actual proof or strong evidence of a causal link between x and A? What other practices possibly leading to A are we allowing when concentrating on banning x without tangible evidence? As this principle has had its fair share of criticism in the literature,2 we will now turn our attention to the first of our new principles in bioethics.

Many people believe that new biomedical technologies will provide cures for cancer, Parkinson’s disease and many other medical conditions that are currently shortening our life spans. Such claims have been made for gene therapy, for stem cells, for nanotechnology, and for human subject research. Let us look at some examples:


 In the case of germ-line enhancements, the potential gains are enormous. Only rarely, however, are the potential gains discussed, perhaps because they are too obvious to be of much theoretical interest. […] But if we think about it, we recognize that the promise of genetic enhancements is anything but insignificant. Being free from severe genetic diseases would be good, as would having a mind that can learn more quickly, or having a more robust immune system. Healthier, wittier, happier people may be able to reach new levels culturally. […] On an even more basic level, genetic engineering holds great potential for alleviating unnecessary human suffering. Every day that the introduction of effective human genetic enhancement is delayed is a day of lost individual and cultural potential, and a day of torment for many unfortunate sufferers of diseases that could have been prevented. Seen in this light, proponents of a ban or a moratorium on human genetic modification must take on a heavy burden of proof in order to have the balance of reason tilt in their favor.3
 
 Stem cell science holds out the hope that damaged body parts might be replaced by new tissue that works out properly. […] It potentially represents a quantum leap in medical treatment as significant as the introduction of antibiotics.4
 
 While nanotechnology is new, so new that nothing seems impossible, there are certain predictions that may be safely drawn. Though we need to be cautious of both positive and negative hype, some speculative applications of nanotechnology are becoming clear. […] In nanomedicine, there are discussions of sending dendrimer polymers into every reach of the body to dispense drugs in specifically localized cells, and of dispatching diagnostic nanomachines into the body to detect cancer when only a few cancerous cells exist. Moreover, nanotechnology will be used as a tool for genetic information and research, facilitating genome sequencing and nuclear transfer with “smart” nanodevices that have some independence and learning capabilities.5
 
 Since World War II, we have witnessed a dramatic increase in biomedical knowledge and tremendous progress in creating effective treatments for disease. There are benefits that flow from human subject research. We are also aware that we stand on the brink of a cascade of insights into human genetics and the promise of spectacular related advances in biomedical technology.6

With this optimistic attitude towards future developments, a new kind of argument has been invoked in bioethics, an argument based on what we will call the hopeful principle. This argument can be formulated in the following way:

P3. We know that B (eg, a cure for cancer) is a good thing.

P4. We suspect that y (eg, genetic research) will lead to B, although this cannot, for the time being, be scientifically proven.

C2. We conclude that, in the name of the hopeful principle, we should promote y (even though it may turn out that it does not lead to B).

In principle, there is nothing wrong with arguing that if something good can follow from a particular practice we should probably do it. Those invoking the hopeful principle are, however, saying much more than that. What they seem to be assuming is that if something good (B) can follow from y it will, and that because of the goodness of B we should accept an unspecified array of economic and moral sacrifices that will follow from allowing y. The problems with this argument are similar to those associated with the precautionary principle. For one, science enthusiasts would not be too impressed if we were to make the same argument but replaced genetic research with, say, meditation or homeopathy.

Further, although a cure for cancer would indeed be a very good thing, there are arguably other goods in the world. For instance, if the basic presupposition here is that because of our commonly acknowledged duties of not doing harm and, perhaps to a lesser degree, of benefiting others, we have a moral duty to respond to medical need,7 there would surely be other ways of achieving that. Throughout the developed world healthcare professionals working in the public sector are often underpaid and overworked, which leads to less-than-perfect standards of care and mistakes. Scarcity of funds allocated for health services has made priority setting and rationing a reality everywhere. In practice, this means that people’s medical needs are not met even when, in theory, the means to do so are available. Should we then be more interested in responding to medical need now than in investing in something that may be of benefit to someone later? If we move to consider medical need on a global scale, the fact that every 3.6 seconds a person dies of starvation8 should arguably invoke duties to prevent this from happening, duties that must be at least as weighty as the duty to pursue promising research.

A cure for cancer is a very good thing for a patient who has access to that cure, but it is of little use to a patient who does not. Likewise, a person waiting for a kidney, suffering from a hospital-acquired infection, dying of a common curable disease or starving to death gains little actual benefit from the fact that a cure for cancer exists, although this person could benefit from other interventions. A world where a cure for cancer existed and where at least some would be able to benefit from it would be preferable to the present one (all other things being equal), but so would a world where fewer people died of starvation, of common curable diseases or while waiting for an operation. The hopeful principle does not seem to provide strong reasons for furthering a particular goal, when the multiplicity of morally worthy goals is taken into account.

This is where the precautionary principle differs from the hopeful principle. The precautionary principle is usually resorted to when we are faced with the prospect of losing everything. In the context of environmental ethics, the possibility of the end of all (or at least human) life presents us with a dilemma parallel to Pascal’s famous wager.9,10 If we continue destroying the environment, all life may die and we would have nothing. Even if it were not the case that the current exploitation of the environment could lead to an end of all life, avoiding greenhouse gas emissions and using renewable sources of energy is surely a small sacrifice to make when we could be losing everything. Similarly, in discussions on modern biotechnologies, the fear is that by proceeding with practices that are thought to be contrary to human dignity, we are risking everything in terms of the things that matter morally.
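The force of this wager-style reasoning can be made explicit in a simple expected-value sketch (the symbols, and the assumption that precaution fully averts the catastrophe, are purely illustrative and not drawn from the sources cited):

$$\mathrm{EV}(\text{precaution}) = -c, \qquad \mathrm{EV}(\text{no precaution}) = -p\,L + (1-p)\cdot 0 = -p\,L,$$

where $c$ is the small, finite cost of the sacrifice, $p$ is the suspected probability of catastrophe, and $L$ is the loss of “everything”. If $L$ is treated as effectively unbounded, precaution is favoured for any non-negligible $p$, however small, since $c < p\,L$ will always hold. It is this potentially unbounded stake that gives the precautionary principle its Pascalian force; as argued below, the hopeful principle has no corresponding stake unless immortality is postulated as the prize.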

The hopeful principle does not seem to present us with a similar all-or-nothing scenario unless we postulate immortality as the desirable goal that the technology may lead to. Pascal thought that small sacrifices are nothing compared with the possibility of eternal life in heaven. Could those arguing from the presumption of hope be saying that immortality here on earth is the “all” that we eventually stand to gain? Although this could add some strength to the argument, it has various problems. Firstly, many people seem to be of the view that bodily immortality would not be a good thing.11 In this case, the argument has less appeal to begin with. Secondly, death could arguably still follow from various external causes that cannot be controlled by future biomedicine, ranging from accidents that damage the body beyond repair to breakdowns in needed technologies and natural disasters. What technology can offer is therefore not immortality, but merely a very long life. Thirdly, this idea shares with Pascal’s argument the problem that it works (to the degree that it does) only if you think that there is just one possible way of achieving eternal life (and presumably some sort of happiness).

Although the hopeful principle shares the problems of the precautionary principle, it seems to have the additional problem of not being able to provide us with an all-or-nothing choice against which to evaluate what the right course of action could be.

AUTOMATIC ESCALATOR

The automatic escalator argument, in its turn, bears a close resemblance to the slippery slope argument. A traditional empirical slippery slope argument has the following form:

P5. Introducing practice C is morally acceptable (eg, voluntary euthanasia).

P6. Introducing practice C will, however, as a matter of empirical fact, lead to the introduction of practice D (eg, non-voluntary euthanasia).

P7. Practice D is morally abhorrent.

C3. Therefore, we should not introduce practice C.

The argument tries to convince us that even though C in itself is acceptable, the moral cost of introducing C, that is, the degree of badness of D, is so high that C should not be introduced. It is well known that empirical slippery slope arguments can be attacked in one of two ways: by questioning whether there will actually be a slide from C to D, or by questioning whether D is really as bad as the premise claims.

If the slippery slope argument is accepted and C is not introduced, we are also left with the problem that we will never know whether the moral cost incurred by not allowing people to pursue C is justified, because we will never know whether there is a slide from C to D.

To promote the new technologies, a new version of this argument has emerged. We call it the automatic escalator argument, and it proceeds in the following way:

P8. There are moral costs associated with promoting and pursuing a new technology T (eg, human embryonic stem cell research).

P9. Promoting the new technology T will, however, as a matter of empirical fact, lead to the good U (or the goods U1, U2, U3,..., Un; eg, cure of numerous diseases and considerable extension of life).

P10. U (or U1, U2, U3,…, Un in conjunction) is an unalloyed and great moral good.

C4. Therefore, we should pursue and promote T, despite the moral costs.

This argument, however, suffers from problems similar to those of the empirical slippery slope argument. Is P9, for instance, plausible? The automatic escalator argument is often invoked to support a new technological option as a complete package, rather than as a specific solution to a specific problem—for example, supporting stem cell technology in its entirety instead of stem cell treatment for Parkinson’s disease. This creates the problem that the goal U, or some important elements of the conjunction U1–Un, is in most circumstances also being pursued by other, competing research programmes. Similar claims are, for instance, being made for gene therapy, stem cell research and nanotechnology. Thus, pursuing T may not really be necessary for obtaining all of U1–Un, and conversely T may, in the end, not be the technology that brings us all of U1–Un. Some of the good consequences that are predicted to arise from pursuing T will not arise from pursuing T, simply because some other technology turns out to be better, or is developed first.

Another problem is that most moral costs will usually be incurred long before we know whether the potential of T will in fact be fully or partly actualised. Just like the slippery slope argument, the automatic escalator argument thus invites us to incur certain moral costs for the mere possibility of gaining future moral benefits.

RHETORICAL ISSUES

Both the hopeful principle and the automatic escalator argument are clearly intended to persuade us to pursue a given course of action. The persuasive force of the arguments does not usually rely exclusively on the logical structure of the argument and the truth of the premises, but also on the use of rhetorical strategies. Just like the logical structure of the arguments, these rhetorical strategies are similar to those used by proponents of the precautionary principle and the slippery slope argument. Three problematic rhetorical strategies can be discerned in the protechnology arguments. Firstly, stating the benefits of pursuing the technology mainly in personal terms (for example, healing sick people), even if the predicted eventual use of the technology will be mainly impersonal. Secondly, implying or explicitly stating that major benefits will occur very soon, even when it is highly predictable that they are actually quite distant however fast the development of the technology proceeds; in the healthcare field, there is often a long wait between successful proof-of-principle experiments and actual use generating widespread benefits. Thirdly, minimising the moral costs, for instance by neglecting the justice effects of introducing the technology, or by underestimating the transition time and costs that have to be incurred before the eventual effective technology is actually in widespread general use.

CONCLUSION

Advances in biomedicine will undoubtedly provide us with new knowledge that can be used to respond to medical need. They will not exhaust this need, nor will they guarantee eternal well-being. The issue of distributing the benefits will continue to be a major problem. When we are considering which courses of action to take, the consequences of those actions should be taken into account. Any analysis that argues from consequences must aim at taking on board all the risks of harm and all the possible benefits to all concerned, but it needs to do this in a way that properly acknowledges the uncertainties in predicting the effects of pursuing any technology.

The precautionary principle is problematic, as are many uses of empirical slippery slope arguments, and the weaknesses in these arguments are well known. It is therefore surprising to see some of the most ardent critics of these arguments using the hopeful principle and automatic escalator arguments, even though these have similar weaknesses.

We have shown that when critically analysed, the hopeful principle is really of little value as an argument, and that automatic escalator arguments often lead nowhere.

REFERENCES

Footnotes

  • Competing interests: None declared.
