Medical ethics: principles, persons, and perspectives: from controversy to conversation
K M Boyd

Correspondence to: Professor K M Boyd, College of Medicine and Veterinary Medicine, University of Edinburgh, Edinburgh EH8 9AG, UK


Medical ethics: principles, persons, and perspectives is discussed under three headings: History, Theory, and Practice. Under Theory, the author will say something about some different approaches to the study and discussion of ethical issues in medicine—especially those based on principles, persons, or perspectives. Under Practice, the author will discuss how one perspectives based approach, hermeneutics, might help in relation first to everyday ethical issues and then to public controversies. In that context some possible advantages of moving from controversy to conversation will be explored; and that will then be illustrated with reference to a current controversy about the use of human embryos in stem cell therapy research. The paper begins with history, and it begins in the author’s home city of Edinburgh.


History

Medical ethics has a long history, not least in Edinburgh. In 1774, writing to Professor William Cullen, Adam Smith remarked that the “present acknowledged superiority” of the Edinburgh medical faculty was due to the fact that its professors were paid by results—in practice by the number of students who considered their lectures worth attending.1 If, however, the great economist saw the Invisible Hand at work in the classroom, even he had doubts about its reach beyond graduation.

“A degree [he wrote] can pretend to give security for nothing but the science of the graduate; and even for that it can give but a very slender security. For his good sense and discretion, qualities not discoverable by an academical examination, it can give no security at all” (Rosner,1 p 66).

That problem still exercises medical educators and the General Medical Council. How can they secure the “good sense and discretion”, and the ethics generally, of tomorrow’s doctors? Two very different answers were to emerge from 18th century Edinburgh.

The first came from Thomas Percival. Percival was an Edinburgh medical student from 1761 to 1765, when he left to complete his degree at Leyden. Thirty years later, he had become physician to the Manchester Infirmary, where traditional relationships between physicians, surgeons, and apothecaries were strained and fractious. His colleagues invited him to arbitrate, by drawing up a set of rules and regulations to put matters on a more modern and harmonious footing. Percival agreed, and eventually, in 1803, published his Code of Institutes and Precepts adapted to the Professional Conduct of Physicians and Surgeons, under the title of Medical Ethics.2 It was the first English book with that title, and was to be enormously influential, especially in the United States. The American Medical Association, for example, acknowledged that its “Code of Medical Ethics”, issued in 1847, largely derived from Percival’s work (Leake,2 pp 49–50).

Percival’s Medical Ethics was a reinterpretation of the old Hippocratic guild ethos, seen through the eyes of an 18th century medical officer and Christian gentleman, and adapted to meet the needs of a new scientific and industrial, but still hierarchical and often deferential, society. It was a prospectus for the style of professional medical ethics, self regulating, paternalistic, and often benign, which typically prevailed until around the middle of the 20th century.

However, a second, very different answer came from John Gregory. Gregory held the chair of physick at Edinburgh from 1766 until his death in 1773, and during that time delivered and published a series of Lectures on the Duties and Qualifications of a Physician. Like Percival, Gregory held 18th century Common Sense philosophical views, but he was also more radical. In particular, he deplored the stagnant and dangerously unscientific state of much 18th century medical practice. A physician, he believed, had a positive moral duty to be “ready to acknowledge and rectify his mistakes”.3 Too many were not, however, and one reason for this, Gregory argued, lay in the nature of medical practice itself. While “no profession requires so comprehensive a mind as medicine”, he wrote, in medicine also there was “no established authority to which we can refer in doubtful cases. Every physician must rest on his own judgment, which appeals for its rectitude to nature and experience alone” (Gregory,3 p 755 ff).

This problem was compounded moreover by the fact that “the art must necessarily be practised in so private a manner, as renders it difficult for the public to form a just estimate of the physician’s knowledge from the success of his practice”.4

Gregory believed, however, that there was a remedy for this. Medical studies should be open not only to future physicians, but also to laymen, who through this scientific education could “form a just estimate of the physician’s knowledge”. Medicine, he argued, would progress much more rapidly, if physicians were to practise “under the inspection and patronage of men qualified to judge their merit, and who were under no temptation, from sinister motives, to depreciate it”.4

Gregory’s remedy—“to lay medicine open to the public” (Haakonssen,4 p 70) as he called it—thus was very different from Percival’s prescription for professional self regulation. Percival’s prescription was the one taken at the time, but with the benefit of 21st century hindsight, Gregory’s remedy seems more farsighted. During the 19th and 20th centuries, “the inspection and patronage of men qualified to judge” the merit of medical practice would be increasingly supplied by biomedical, and later social, scientists, contributing to medicine’s ever growing evidence base. And with the wider public in mind, Gregory’s remedy, as applied by his friend William Buchan in his bestselling layperson’s guide, Domestic Medicine (Haakonssen,4 pp 69–70), would spawn countless imitators, eventually on the internet.

The advantages of Gregory’s remedy over Percival’s prescription were not finally recognised, however, until the middle of the 20th century. The historical reasons for this are probably too well known to need more than a brief reminder. They include the betrayal of scientific medicine’s ideals during the Nazi era, but also dramatic biomedical advances in the postwar period, such as in vitro fertilisation and transplant surgery. Both demonstrated what medicine could do, but also raised questions about what it should do. Many people, both within and outside the medical profession, concluded that medical ethics was no longer something for doctors to discuss in private, but that it had become an area of legitimate public debate.

If, however, doctors and other health professionals were now to practise, as Gregory hoped, “under the inspection and patronage of men qualified to judge their merit”, these men, and women, needed sufficient understanding of the clinical, scientific, and ethical issues involved, to make their own informed choices, and to play a responsible part in public debate and decision making. Doctors, therefore, had to relearn how to communicate both with individual patients, and with public opinion; and had to be prepared to share with patients and the public not only their knowledge, but also their uncertainties. It was in this context that medical ethics emerged as a field of academic study during the later years of the 20th century. I shall say just a few very brief words about that.

Medical ethics is studied productively by many different disciplines and multidisciplinary approaches. Particularly within medical education, however, it has two main tasks. One is concerned with teaching and learning the standards of professional competence and conduct that the medical profession expects of its members—a task it shares with all clinical teaching, since what is taught in the classroom is of little effect unless it is reinforced by example. The other is concerned with teaching and learning how to articulate and analyse the increasing number of ethical problems raised by the practice of medicine, to many of which there are no obvious or agreed answers. So if, in the tradition of Percival, medical ethics is Hippocratic, it is also, in the tradition of Gregory, Socratic.

I shall now outline some of the theoretical approaches to medical ethics that tomorrow’s and indeed today’s doctors may find more—or less—helpful when they try to make some sense of the issues involved, to discern the wood from the trees, and to distil wisdom to inform practice.


Theory

There is no shortage of such approaches today. For the present purpose, however, I shall confine myself to three broad categories, based respectively on principles, persons, and perspectives. One way to contrast these three approaches is to say that a principles based approach focuses on the ACT, a persons based approach focuses on the AGENT, and a perspectives based approach focuses on the CASE.


Principles based approaches

The key question for someone using a principles based approach is whether a particular act, or course of action, is morally right. It is right if it obeys an agreed moral rule, or respects an agreed moral principle. The rule or principle might be deontological (to do with duties and rights) or consequentialist (concerned with the act’s consequences). Examples of deontological rules or principles are “Thou shalt not kill”, or the principle of non-maleficence. Examples of consequentialist rules or principles are “Always do what will produce more good than harm”, or the utilitarian principle of the greatest happiness of the greatest number. What all principles based approaches have in common is that the action or course of action should conform to an agreed moral rule or principle.

An obvious argument in favour of this is that it is better to be principled than unprincipled. But a problem with principles based approaches generally, is that there is no philosophical agreement on which principle, or kind of principle, is the correct one. Consequentialists and deontologists alike have spilt much ink, and now hammer many keyboards, to mount formidable defences of their respective theories. No philosophical theory is invulnerable to counterargument, however: if any one of them were, indeed, moral philosophers would be out of business. The battle between consequentialists and deontologists, moreover, is only one of many on philosophy’s darkling plain, where learned armies clash by night, each calling the other “confused”. Our philosophical colleagues should not be blamed for this, however: the reason why they frequently fail to achieve theoretical agreement is that they think more rigorously and systematically about these things than the rest of us do.

The four principles of bioethics

We must also be grateful to philosophers for one particular deontological approach, which countless health care professionals and students today have grabbed as an ethical lifebelt—the Four Principles of Bioethics.5 Its attraction for health professionals and students is that the four principles—beneficence, non-maleficence, respect for autonomy, and justice—are a handy checklist of the ethical angles to cover when considering morally problematic cases or questions. “What will do good, avoid harm, or at least do more good than harm for the patient concerned?” “Are the wishes of the persons involved being taken seriously?” “What is most equitable, not just for this patient or group of patients but also for others and for society?”

Innumerable ethically problematic clinical cases have now been analysed in terms of this conceptual framework or grid. One problem with this framework, however, is that the four principles, as advertised, are each prima facie—each, in other words, is a principle that should be followed, provided that it does not conflict with another equally important principle. And there’s the rub. According to beneficence and non-maleficence, for example, more good than harm will be done for the patient by immediate treatment: but the patient refuses the treatment, is clearly competent, and thus according to respect for autonomy, the treatment cannot be given. Justice is no help here unless the patient’s refusal seriously endangers some other person’s vital interests. The problem with the four principles, in other words, is that they provide no decision procedure for resolving conflicts or reaching practical conclusions.

Philosophical critics of “principlism” as they call it, sometimes argue that recourse to the four principles, particularly by doctors, is merely “ceremonial”, and that they “serve as slogans that are used to support conclusions that one has reached without really using the principles at all” (Gillon et al,5 pp 251–66). There may be some truth to that. I think that is, however, too ungrateful a view of the four principles. At the very least, they get the ethical conversation started—allowing it to proceed beyond the point at which people tend to say: “It’s all relative”, or “It’s all a matter of personal opinion”. The four principles too, can be seen as what are sometimes called “regulative ideals”6—a constellation of ethical stars by which to navigate—helpful on a clear night to those seeking a port in a storm.


Persons based approaches

Turning now from principles to persons, another approach, increasingly popular today but essentially as old as Aristotle, shifts the focus of attention away from the moral act to the moral agent. This approach, often referred to by the misleadingly prim title of “virtue ethics”, is concerned less with the right thing to do than with the best kind of person to be. The kind of question that someone taking this approach might ask is: “If I were to do such and such now, would I be acting justly or unjustly (or neither), kindly or unkindly [and so on]”.7 The contemporary virtue ethicist Hursthouse admits that the number of positive “virtue terms” available for this kind of questioning is limited. She points out, however, that the same question may be asked very productively with reference to avoiding the many vices. “If I were to do such and such now, would I be acting” in a way that is “irresponsible, feckless, lazy, inconsiderate, uncooperative, harsh, intolerant, indiscreet, incautious, unenterprising, pusillanimous, feeble, hypocritical, self indulgent, materialistic, grasping, shortsighted”, and so on?8

This can be a helpful approach to moral problems. Or personalising it still further, one might ask, in a tight corner: “What would so and so—an elder of the tribe or someone else whose practical wisdom I greatly respect—do, or avoid doing, in these circumstances”? One objection to this, of course, is that it reminds one of the old problem about role modelling in medical education. What if the role model chosen is a bad one? What if qualities perceived as virtues are, at least in certain contexts, actually vices? It is difficult, admittedly, to mount a moral defence of being “pusillanimous” or “feeble”; but there are contexts in which it may be right to be “uncooperative” or even “harsh”. This objection can no doubt be accommodated by virtue ethics theorists, and it is not a reason for rejecting persons based approaches. It suggests, however, that they, like principles based ones, are only part of the ethical story.


Perspectives based approaches

A perspectives based approach readily accepts this. Approaches based on principles or persons are among the relevant perspectives from which ethical issues can be addressed. They are not, however, the whole ethical story, or necessarily where to start from. A perspectives based approach begins by focusing attention not on the act or the agent, but on the case. Not at this stage, however, even in medical ethics, the clinical case. The first question is about the “case” in its more primitive form, derived from the Latin cadere, “to fall”. What is the case? What has befallen? Or (in the language of Laurel and Hardy) “What kind of fine mess have we gotten into this time?”

The answer is not obvious. A perspectives based approach recognises that a moral problem is not something “out there” or given, like a natural object. A moral problem is an interpretation of events seen from a perspective shaped by history and tradition. In this respect it is like a clinical case. Take the clinical case of EB, an English patient in a French hospital sometime in the 1920s. Under observation by his doctors and nurses, EB’s case is carefully constructed from answers to a series of questions meticulously selected by medical tradition for their diagnostic and therapeutic salience. The clinical case of EB thus is, as far as can be ascertained, the medical truth about EB. It is, however, only part of what is “the case”, or the whole truth, about EB. For all the while, his doctors and nurses also have been under observation—by EB, alias Eric Blair, alias George Orwell, who subsequently produces his own case report, in which we can still read that: “it is a great thing to die in your own bed, though it is better still to die in your boots. However great the kindness and efficiency, in every hospital death there will be some small, squalid detail, something perhaps too small to be told but leaving terribly painful memories behind, arising out of the haste, the overcrowding, the impersonality of a place where every day people are dying among strangers.”9

Like a clinical case then, a moral problem is a construction put on events, as seen from a perspective shaped by history and tradition. The construction is not necessarily, or even often, a conscious one: it is latent in language. When we hear Orwell speaking of a “small, squalid detail”, for example, we are already half way to making a moral judgment—or a moral justification. The fact that moral problems are not simply “out there” or given, is also illustrated by what is called “moral blindness”. Some people simply do not recognise that there is a moral problem, for example, in telling a lie if they can get away with it. Moreover, some of the ethical issues that exercise us today, for example, those around informed consent, were not problems for people two hundred years ago. Cultures differ in what they perceive to be moral problems, and about the right way to resolve those that they do perceive. All of us, conditioned by our history, tradition, training, and experience of life, have our own moral perspectives, which differ, in more or less significant ways, from the perspectives of others. All of us, if you want to put this at its strongest, are prejudiced, one way or another.


Hermeneutics

But is prejudice a bad thing? That question is asked by the perspectives based approach known as hermeneutics. Originally the art of interpreting ancient texts, hermeneutics is now also interested in the interpretation of behaviour, speech, and institutions. One of its leading exponents, H-G Gadamer, argues for a positive view of prejudice, prejudging, or fore-understanding.10 Without it, he says, we would never understand anything at all. When we begin to listen to another person, our prejudices or prejudgments are already running ahead, anticipating the meaning of what we are being told. Without that initial projection of meaning, we cannot get started, get engaged with a text or a person. Once engaged, however, what matters is whether we are really listening. If we are, we will soon find that the meaning we are anticipating is either confirmed, or corrected, by what we hear the other person actually saying. As further anticipations of meaning in turn are corrected or confirmed, understanding of what is being said to us grows. The “art” in this process, which we all practise, is not to let our prejudices run too far ahead and overwhelm what the other person actually is saying—for if that happens, instead of hearing them, we may start psychologising them, or think that we understand them better than they understand themselves. This is not unknown in domestic arguments. We are more likely to resort to it, Gadamer says, if we think we are not prejudiced, while remaining “under the tyranny of hidden prejudices” (Gadamer,10 p 239).

Being aware that we are prejudiced, of course, may not always be appropriate. Taking a patient’s history, a doctor is professionally prejudiced in favour of a diagnosis: but for the doctor to think of that, then, as a prejudice is not very helpful. The anticipation of meaning propelled by this diagnostic prejudice, however, still needs to be checked or confirmed by what the patient reports. Moreover, when it comes to discussing what to do about the diagnosis, the hermeneutic model of a conversation between friends seeking to come to a common mind about something, may well be appropriate—for as Aristotle says somewhere, only friends can truly advise each other.

A conversation between two friends seeking to come to a common mind about something is also, if we follow Gadamer, an appropriate model for medical ethics. It cannot, of course, achieve the kind of moral certainty that some principles based approaches aspire to, though rarely achieve. On the other hand, a hermeneutic approach does not entail moral relativism, or that any perspective is as good as the next one. When two friends with different perspectives on a subject have a conversation about it, they may end up with a new shared perspective on the subject, more satisfactory to each of them than either of the perspectives they began with. This outcome, which Gadamer calls a “fusion of horizons” (Gadamer,10 p 273 ff), may also emerge from “conversations” among larger groups of people, who may reach some new consensus on a matter of common interest. In either case, however, to achieve this, the parties involved need to be aware of their own prejudices and prepared really to listen to what the others are saying.


Practice

Turning now from theory to practice, I want to explore some implications of a hermeneutic approach to medical ethics, in relation first to everyday ethical issues and second to public controversies.

Everyday ethical issues

In everyday medical practice, a great variety of ethical issues can arise, for example, regarding consent or confidentiality or truth telling or priorities. Sometimes, however, people are, for good or understandable reasons, too preoccupied with other things to notice, and perhaps do not do so until a moral molehill has grown into a legal mountain. The first requirement of a hermeneutic approach to everyday ethical issues, therefore, is to have sufficiently sensitive moral antennae to detect when someone (a doctor, nurse, patient, or family member, for example) sees something as a problem.

The first stage of the hermeneutic process then is to listen; and so is the second—to listen to the different “stories” told by the other people involved, about what has been seen as a problem. The aim is not for an ethicist to listen to everyone and then supply them with an ethical verdict. For the process to be productive, those involved in the situation (doctors, nurses, patients, family) need to listen to one another, so that they can work out among themselves what is going on ethically and, if possible, reach some new and more productive shared understanding of the situation. An ethicist may be able to help by asking pertinent questions, but the answers need to come from the participants.

Now this is not very different from what often happens in hospitals and other health care contexts—in discussions in or near the workplace about a particular event, or in ethical or other review committees, or in working parties producing reports on principles or policies. All of these are, or may be, examples of the hermeneutic process at work: and there are no special rules except those I have already mentioned about listening and being prepared to learn from others. The difficulty about everyday ethical issues in health care, of course, is the lack of everyday time to reflect on and discuss them; and there is no panacea for that. I would only suggest that not making time for this reflection and discussion is like not making time to have one’s car serviced. A hermeneutic approach to everyday ethical issues is, at the very least, part of good risk management, and if it goes well, it may also enable those involved to agree on “a course of action likely to be responsive to more features of the situation, and thus more nuanced and efficacious”.11

Public controversies

A hermeneutic approach then, may help to illuminate everyday ethical issues. But turning to public controversies, the prospects for conversation look less promising. “Controversy” and “conversation” share a common root in the Latin verto, or verso, “to turn”, but the direction is diametrically different: contra, “against”; con or com, “together”. In controversies about abortion or euthanasia, for example, or about the use of animals or human embryos in research, the protagonists aim not to converse with those holding opposing views, but to controvert, or even convert them.

They may have good reasons for this. For if, in hermeneutic mode, we hold in check our own prejudices about the particular subject of controversy, and listen to what the protagonists are saying, we often find persuasive arguments on both sides, and in some controversies these are finely balanced. What commits or converts anyone to one side or the other, therefore, may lie beyond the reach of reasoned argument. Our deepest prejudices about moral questions are not accidental but integral to how our identity has been shaped by our familial, professional, or cultural traditions; and these prejudices are reinforced if our conversation is confined to those who also share them. Because that is easier and more congenial than discussion with people whose prejudices differ or are opposed to ours, it is unlikely that controversies will cease; and on some occasions, of course, it is right that they should continue, especially when the sharp edge of controversy is needed to puncture a complacent consensus.

Despite this, there are good reasons for trying to move on, wherever possible, from a controversial approach to a more conversational one. Nowadays, many people’s prejudices are formed not by one, but by several traditions; and a lively conversation between different traditions—familial, educational, professional, political, religious, cultural—may already be going on within our hearts and minds. There may also be a lively conversation going on within these traditions themselves. Traditions, of their very nature, grow and develop; and in different historical contexts, a tradition may have to express itself differently if it is to remain true to itself. Medicine, for example, was traditionally paternalistic because that was often the only thing that worked. In less deferential societies, armed with effective therapies, however, it has replaced “doctor’s orders” and “compliance” with “patient choice” and “concordance”. Simply repeating what our tradition has always said, in a changed historical or social context, may not be saying what it said originally, or be true to what the tradition stands for. That possibility, leading to fresh insights and sometimes even a convergence of different perspectives, is explored more readily through questions and conversation than in the thrust and counterthrust of controversy.

Stem cell therapy research

I will now explore some implications of this in relation to the current controversy about the use of human embryos in stem cell therapy research—which the UK parliament has agreed to, but the European parliament has voted to ban. One strand in the controversy is whether the use of embryos is really necessary for stem cell therapy research: but the dominant scientific view is that it is. If that is correct, the crux of the controversy is this: does the possible development of a whole new field of regenerative medicine outweigh any misgivings about experimenting on or killing human embryos?

Such misgivings have been articulated, in carefully constructed conceptual terms, by moral theology and philosophy, but they are shared by many who do not necessarily subscribe to these formulations, and they reflect moral intuitions deeply rooted in many different cultures. Since these intuitions are often expressed in the language of “respect for life” or, even among people who are not religious, “the sanctity of life”, I will say a little more about these less precise but more popular concepts.

The first thing to say is that, contrary to what might be expected, the public debate about embryo use does not seem to be between those who argue for “respect” or “sanctity” and those who reject these concepts. Most supporters of embryo research claim, at least publicly, that they do “respect” all human life, even in its earliest stages. Assuming then, in hermeneutic mode, that people mean what they say, it looks as if the real moral divide is between “respect” on the one hand and “sanctity” on the other. So what is the difference between these two concepts, and does that explain the opposing views on embryo research?

The concept of “respect” is probably best known from Kant’s principle of respect for persons as ends in themselves.12 I must confess, however, that it did not come alive for me until I realised that the German word for “respect” which Kant originally used, Achtung, was the word I had first seen, embellished with skull and crossbones, on danger warning notices in old second world war films. In that sense, Achtung is not a bloodless philosophical concept, but what you must have for a live wire, a minefield, or a man eating tiger—a sense of wariness, and in the case of the man eating tiger’s “fearful symmetry”, a sense also of wonder. Understood as wariness and wonder, “respect” seems an appropriate way of recognising, and responding to, the “untold complexity” of the “self functioning dynamics” of life in all its forms.

What about “sanctity”? Another philosopher, Gabriel Marcel, writes that when the word “life” is used in connection with “sanctity”, it is in the way someone uses it when they say “I really love life”, or “I don’t love life anymore”. In that sense, Marcel says, “life” is “clearly not referring to the pure and simple fact of biological existence”, but to something that “implies a basic and as it were inarticulated reference to my life”: “the experience of my life… in some way secretly irrigates, as it were, the confused notion I tend to form of life independently of any knowledge of biology”.13 On this view then, “sanctity” is not a property or predicate of biological life as such. It refers rather to the wondering way in which one living being may recognise and respond to another. Marcel suggests “a mother’s adoration of her child” as a prototypical example of this, a response to life’s “primordial integrity”. In English, we might think of Hopkins’s “dearest freshness deep down things”.

How then do “respect” and “sanctity” differ? Both involve a living being’s recognition of, and response to, the “self functioning dynamics” of life—in the case of “sanctity” perhaps, with more wonder and less wariness than in the case of “respect”. What “sanctity” significantly adds, however, is the “basic… reference to my life”, in the life of another being. The word “sanctity” is shown to be an appropriate response to life, that is, not by deduction from the “simple fact of biological existence”, but by actually responding to the other life as in some sense not an “it” but a “you”.

If this interpretation is correct, is a human embryo used in stem cell research a being which is in some sense a “you” rather than an “it”? Is each embryo, in other words, an end in itself? The most obvious way to demonstrate this would be to actually greet each embryo in vitro as a potential person—the kind of greeting that a mother can certainly make to the developing fetus in her womb, but with the embryos under discussion, this greeting is problematic. That is not only because the majority of preimplantation embryos, in the wisdom of nature, are not potential persons. There is also a new factor—that the purpose for which “spare” eggs and embryos are donated to research specifically precludes their being potential persons; and in the case of those created by nuclear transfer for stem cell research, the very idea of these as embryonic humans is paradoxical.

It might be objected, of course, that since these experimental embryos would not exist had science not created them, it is the scientific manipulation of human life itself that offends against “sanctity”. If, however, that is the argument, it raises a further problem. If what makes “sanctity” appropriate is not that the “life” involved is biologically human, but that it “implies a basic… reference to my life”, it is difficult to see why this argument should apply only to human life. It may be possible to greet a preimplantation human embryo as “you”, but surely it is very much easier and more natural to greet a non-human primate or a dog as “you”. The problem here, in other words, is that if we bring human embryos under the protection of “sanctity”, it is difficult to see why animals, at least those to whom humans can relate, should be excluded. It may be, of course, that some opponents of the use of human embryos in research are also opposed to the use of animals in research. That does not, however, appear to be the view of the majority, or for that matter of the European parliament, since extending its ban from human embryos even only to non-human primates and dogs would outlaw much of the “second species” use of animals in biomedical research and safety testing that is currently accepted practice in European countries.


I will now try to draw some general conclusions. Questions such as the use of human embryos in stem cell research are ethically complex and morally challenging, but the ethical problems they raise are not helped by being debated within the win or lose constraints of controversy. These constraints make it difficult to discuss the status of embryos in nuanced but necessary terms such as Marcel’s “basic… reference to my life”, and much simpler to load the whole argument onto whether or not they are biologically human. That, however, does scant justice to the “sanctity of life” tradition, especially if it is considered with reference to Asian as well as European culture and religion. It also avoids what is the most morally challenging issue—that of the inevitably tragic character of many choices necessarily involved in biomedical progress. Prohibiting research on human embryos may have tragic implications for future patients, but allowing research on non-human primates and dogs has tragic implications for many members of our closely related and companion species; and to say that is not mere sentimentality. The force of the “sanctity” or “respect” traditions can be seen in the fact that few if any scientists would be happy to use animals were there viable alternatives. There are, in other words, no clean hands on either side of these controversies.

There may, however, be better and worse ways of resolving these problems. The philosopher Paul Ricoeur14 notes that controversies in medical ethics often arise when a good ethical intention—for example, to alleviate suffering through research—is blocked by a right moral rule—for example, “do not kill”. When that happens, we need to be sure that the intention is genuine, and that the rule is applicable to the case in hand. In that respect, I have said much more in this paper about the rule’s applicability than the intention’s genuineness. I have not commented, for example, on the heady mix of scientific curiosity and commercial opportunity in biotechnology today. Nor have I expressed a view on the scientific opinion that embryo research is necessary, or on whether scientists who say they “respect” embryos really mean that, or are saying what they think politically most likely to gain public and parliamentary approval for the work they wish to do. These issues need to be addressed. It is not, however, within an ethicist’s competence to pass judgment on questions either of scientific knowledge or of scientific good faith. So I shall not discuss these further.

Instead, let me return to Ricoeur’s observations. When a good ethical intention is blocked by a right moral rule, he says, we need to know how far the particular moral rule is applicable, not only to the case in hand, but also under the universal Golden Rule—do not do to others what you would not have them do to you. In some cases, obeying the Golden Rule may mean making an exception to a particular moral rule, but the particular rule—for example, “do not kill”—is still a deep and also rational moral intuition. So the exception must be no greater than absolutely necessary. The crucial question therefore is: what “will best satisfy the exception” called for by the Golden Rule, but at the same time “betray… the [moral] rule to the smallest extent possible” (Ricoeur,14 p 269). There are no ready-made answers to this. Instead, Ricoeur says, what we have to do is to use practical wisdom to invent them. To invent means to create or to discover: the ambiguity is intentional and unavoidable. Practical wisdom is the art of inventing the best course of action in the circumstances, all things considered; and in relation to the issues I have been discussing, “all things” can include everything from the smallest scientific detail to our deepest intuitions about human nature and destiny. No single scientific or ethical perspective can encompass this. To invent appropriate answers therefore is a task that practical wisdom can accomplish only through sustained public conversation between many diverse perspectives, each prepared to learn from the others, and committed to seeking a common mind on the question in hand. That task, like politics, is “a slow boring of hard boards”: but it is the one Gregory set us long ago, and it is still at the heart of medical ethics today.

