
The dangers of medical ethics
C Cowley
Correspondence to: C Cowley, School of Medicine, University of East Anglia, Norwich NR4 7TJ, UK; c.cowley@uea.ac.uk

Abstract

The dominant conception of medical ethics being taught in British and American medical schools is at best pointless and at worst dangerous, or so it will be argued. Although it is laudable that medical schools have now given medical ethics a secure place in the curriculum, they go wrong in treating it like a scientific body of knowledge. Ethics is a unique subject matter precisely because of its widespread familiarity in all areas of life, and any teaching has to start from this shared ethical understanding and from the familiar ethical concepts of ordinary language. Otherwise there is a real risk that spurious technocratic jargon will be deployed by teacher and student alike in the futile search for intellectual respectability, culminating in a misplaced sense of having “done” the ethics module. There are no better examples of such jargon than “consequentialism”, “deontology”, and the “Four Principles”. At best, they cannot do the work they were designed to do and, at worst, they can lead student and practitioner into ignoring their own healthy ethical intuitions and vocabulary.

  • Gillon
  • ethical experience
  • four principles
  • ordinary language
  • teaching ethics


The stimulus for this article was my dismay on seeing the Journal of Medical Ethics “top ten” articles downloaded in 2004.1 The first-, second-, and fifth-placed articles, by Gillon, Beauchamp, and Macklin respectively, all defended the Four Principles. Surely, I said, the Principles have been discredited by now, however grudgingly I acknowledge their usefulness in at least stimulating debate in the 1980s. What follows is an argument that the Four Principles—together with similar efforts to theorise in ethics—are at best pointless and at worst dangerous in an undergraduate medical curriculum.

1. MAKING ETHICS MORE SCIENTIFIC

Every medical student is familiar with ethical dilemmas in their own lives. At the very least they have wondered about whether to keep or return a wallet they find on the pavement, whether to lie to their parents about where they were the night before, and whether to help a troubled colleague at some cost to themselves. As such, ethics is not only about the Issues and Policies in the newspaper, nor about ontological questions in philosophy departments; it is also personal. Not only have students experienced such dilemmas as dilemmas (that is, they have at the very least felt the conflict between ethical duty and self interest), but they have also managed to discuss them with others as dilemmas. Throughout they have—mostly reliably—deployed ordinary ethical concepts, and they have been understood by others using the same concepts. The only thing with which they may not be familiar when they begin their medical studies is the precise nature of the ethical dilemmas and disagreements that characterise the healthcare context; and even if they are familiar with such dilemmas from the popular or professional media, the important point is that they may not have experienced them first hand.

It is this prior knowledge, experience, and active vocabulary that ought to be the starting point for the teaching of all ethics in medical schools. That much might seem obvious, in the sense that familiar examples should always be invoked to explain any abstract theories and principles. Ethical knowledge, experience, and vocabulary are, however, utterly unlike the knowledge that students have acquired on their A level biology and chemistry courses, and this fact has been less obvious. The problem underlying the present teaching of ethics derives from the mistaken assumption that science is and should be the paradigm of a respectable discipline and method, of a respectable conception of problem and solution; if ethics is to be respectable, then it too should be made more scientific. Like a good Popperian scientist, Gillon has sought, and so far failed to find, any conclusive disproof of his hypothesis.2 The alternative, so the assumption continues, would be ethical decisions that can only be based on the dark irrational forces of whim, mood, affection, or luck.

One way to make ethics more scientific is to reduce it to enlightened self interest, as when it is argued that altruism is no more than concern for one’s future pleasure, reputation, clan, or species. I will not be discussing such attempts in this essay, because I consider them wildly implausible: it is enough to ask the would-be reductionist whether he has any friends, and whether he tells them with a straight face that he associates with them only out of self interest. In addition, many ethical dilemmas in the healthcare context make no obvious reference to the doctor’s self interest: the doctor does not stand to benefit whether or not treatment is withdrawn from a handicapped neonate; he simply wants to do the right or best (or least worst) thing. In a welter of conflicting ethical intuitions, it is understandably tempting to reach for a scientific way to solve the problem, and it is at this point that the cumbersome machinery of ethical theories and principles is wheeled in.

2. THE MACHINERY OF THEORY AND PRINCIPLE

Medical students—just like most philosophy students taking a course on ethics—are introduced early on to consequentialism and deontology: the idea is that the details of the situation (with two or more viable options available) can be plugged into the theories, and a “best” or “right” answer generated. The intellectual exercise for the students is then to describe the consequentialist and deontological viewpoints on the original problem. When the two theories advocate conflicting courses of action, one of two things can happen: either intellectual inquiry comes to an end with a bland statement such as “ethics is difficult”, or, with further effort and ingenuity, the theories can be refined and developed to greater algorithmic sophistication to accommodate counterexamples and eventually reveal a clear winner (the debates are still raging in the philosophy journals). The implication of the latter approach is that ethics is a game, much like solving a crossword puzzle.

To reveal concisely how wrongheaded this is, I will follow Raimond Gaita3 in suggesting the following, before developing it below: the language of remorse, the bitter knowledge—and experience—of having done wrong, has no place for “consequentialism” or “deontology” as concepts. They act as a fifth wheel: at best an unnecessary redescription by a third party, at worst a squeaky brake distracting attention from the real problem. Nobody has ever been upset by having failed to maximise happiness or to universalise the maxim.

What I might be upset by is, say, having broken a promise to my friend Bloggs. There may be a disagreement about whether the promise was broken for a good reason; but such a disagreement will not be clarified at all by any attempt to justify promise-keeping as a global policy. If I sincerely make a promise, then the reason why I should keep it is that it is a promise, and not that keeping it would maximise happiness or that it would be something I could consistently universalise. In a similar vein, it does no good for the General Medical Council (GMC) to advise doctors to “be honest and trustworthy”.4 There is no reason to be honest and trustworthy; they are just things one is supposed to be, and just the sort of things that one can be intelligibly remorseful for not being. Importantly, the GMC guidelines do not help me at all to decide how honest to be with a terminally ill patient about his condition.

Perhaps the most famous example of pointless theorising is the Four Principles. Since their first formulation by Beauchamp and Childress over twenty years ago,5 the Four have acquired a quite remarkable status. Despite the misgivings of many, they were recently defended by three prominent philosophers in a 2004 symposium in the Journal of Medical Ethics.1,2 Yet they remain utterly fatuous. Let me take the two neologisms first. What does beneficence mean other than “be nice”? What does non-maleficence mean other than “don’t be nasty” (however noble sounding the Hippocratic Oath)? Such heavy-sounding principles are hardly going to enlighten anybody, let alone solve ethical puzzles. After all, every time I give an injection I am harming the patient. Instead, there are perhaps two sets of genuine ethical questions that are needlessly obscured by the bureaucratic verbiage: (i) is this action really harmful, in what sense, according to what criteria, in whose judgment, is the response to the harm appropriate, etc, and (ii) is this harm justified by the quality and likelihood of the subsequent benefit? My point is: if these are the real questions, why not go straight to them, without all the palaver?

Some have claimed that the Four Principles could act as a structural framework for analysis or “a useful ‘checklist’ approach to bioethics for those new to the field”.6 I would say that anybody who is morally obtuse enough to need such a checklist would not be capable of interpersonal relationships of any complexity, and certainly should not be practising medicine. Even at a theoretical level, however, the Principles cannot do the work expected of them, since there still remains the problem of applying them, or at least of using them to distinguish relevant from irrelevant considerations during reflection on an ethical problem: they do not come with an instruction manual. This is normally taken by the defenders of the Four as an advantage, because it allows universality: the individual principles can be interpreted and prioritised in many different ways, so that anybody can use them to, as Gillon puts it with disarming optimism, “explain and justify, alone or in combination, all the substantive and universalisable claims of medical ethics and probably of ethics more generally”.2 This is not universality, however; it is triviality, and it resembles the misguided explanatory optimism of other big theorists such as Marx and Freud.

There is more to say about the two other principles. Autonomy is more philosophically robust than the first two, but it is no philosophical achievement to conclude that “X should be done because it respects the patient’s autonomy”. When a competent patient declares what he wants or does not want, then of course this should be respected where possible: but that is not a high-flown Principle of Medical Ethics; it is simply how we should treat anybody. Again, the real ethical questions are unnecessarily obscured: should the patient be given what he wants when there is a doubt about whether he understands his situation and the options available? Should the patient be given what he wants when it is judged medically futile? Too much emphasis on autonomy leads to a conception of health care as merely another service to be provided to customers on demand. In this I agree with Alastair Campbell, who prefers to speak of respect for persons rather than respect for autonomy: “Respect for persons entails dialogue, perhaps even confrontation, but only to ensure honesty in each party, in the face of death”.7

Similarly, nobody will dispute that justice is the sort of thing we should all be striving for anyway; but the real ethical problem here will be identifying the most just option from those available, identifying the most relevant criterion for a just distribution of scarce resources (equality, need, desert, purchasing power, desire, etc), and identifying those occasions where the requirements of justice might be legitimately tempered for the greater good.

3. THE IMPORTANCE OF ORDINARY LANGUAGE

Let me say more about the Principle of justice. One of its advantages over the other three Principles is that it is an ordinary word that we all understand and have been using ever since we had the power to shout “that’s not fair!” This familiarity is more important for teaching medical ethics than it might seem. We know what justice is, without necessarily being able to produce a dictionary definition. Instead, we have all felt the sting of injustice when we were punished for something we did not do, or when the better team were robbed of victory at the last moment, or when he got a bigger piece of the cake than I did. I emphasise the word “felt” here—what I am talking about is an understanding developed and absorbed in use, as opposed to the understanding acquired and stored during the study of scientific subjects. This means that most of the work in discussing the concept of justice has already been done in primary school, without the Four Principles and without Rawls. In this I can only agree with Gillon when he says that “ethics should be basically simple for it is there to be used by everyone, not just people with a PhD in philosophy or theology”; but I disagree that his Principles are simple in this way.2

This distinction between use and storage is key. When ethical concepts are stored (and merely regurgitated in an exam, say) they do not have an essential link to conduct, and so may be left behind in the examination room. In so far as ethical principles can be left behind in this way, they are only mantras, bandied about in accordance with the rules of the game, without leading to the deepened understanding and self reflection that must precede good ethical judgment.

Medical students, I suggest, are even more prone to store rather than use the machinery of ethical terminology because they are already used to storing so many other names and facts. This is not a complaint about the teaching of medicine in medical schools: I can certainly accept that there is a lot they need to know. The worry is that they will learn a new technical term—for example, “glomerulonephritis” (its definition, and its place in a framework of other concepts)—one day, and then learn the new technical word “consequentialism” (its definition, and its place in a framework of other concepts) the next day, in exactly the same way. They will slot one set of technical terms alongside another in the big bank of things to know for the exam. Although medical jargon is necessary, because there is no other efficient way of describing such a complicated and unfamiliar condition with similar accuracy, ethical jargon obscures the essential familiarity of ethics and drives a wedge between ethical concepts and ethical conduct.

4. THE DANGERS OF MEDICAL ETHICS

My point, however, goes further. Technical jargon is fine if it is isolated and useful: glomerulonephritis does not appear in any context other than medicine. But the technical jargon designed to make ethical judgments scientific may not be innocuously stored as I have suggested; the student may embrace the jargon terms as shibboleths, and allow his own ethical intuitions (those centred on ordinary ethical concepts) to be hijacked in the process.

Surely “hijack” is too strong, it will be objected. After all, the medical student is not hijacked by the new medical jargon he acquires—must acquire—to succeed as a doctor. Instead of a hijacking, it would be better to speak of the student’s powers of discernment and discrimination—for example, in diagnosis—being enhanced by the new vocabulary. Surely such enhancement can reasonably be expected in morality as well—for example, the moral maturation process through childhood and adolescence relies heavily on the acquisition of new moral vocabulary, and with it a new sensitivity to the morally relevant features of the world. And why think that such a process does not continue through adult life as well? Most of us would consider ourselves morally wiser at 40 than at 20.

In response, I can only repeat that the new moral vocabulary acquired in adolescence and refined in adulthood is not technical in nature. Unlike the vocabulary of medicine, the vocabulary of morality is mastered—and is expected to be mastered—by every competent adult; and such mastery entails regular deployment in everyday ethical difficulties. This gives a characteristic personal aspect to the meanings of such concepts; when people possess moral integrity, they are able to own their words, to trust them, to stand behind them, and it is this ability that can be perniciously disrupted by the quasi-scientific jargon.

This metaphorical language is not going to please everybody, so let me try again with a concrete example. If we take an ordinary ethical concept like “loyalty” and look it up in the dictionary, we get a list of synonyms (“true, faithful to duty, to a person, to a country”); this does not get us very far, precisely because the items on the list beg further definitions. Importantly, however, we do not learn ethical concepts in our mother tongue by consulting dictionaries. Instead, we learn them by example, from role models, stories, and analogies described for us by parents and teachers when we are young. This is why the notion of narrative is crucial to ethics, both in childhood and in adulthood. It has also come to be recognised as crucial to medicine and to medical ethics. Describing a patient’s condition is not like describing the damage to my car: there is an essential narrative component, because the injury or illness took place within the context of an ongoing human life, en route from the patient’s past to his future.8

Ethical concepts are characteristically indeterminate, within limits: although certain actions can clearly be designated loyal, and certain others disloyal, there will be a region of legitimate diversity in application within these limits. This is where each of us must discover a tighter definition of loyalty, based on crucial experiences during our formative years. The meaning of “loyalty” acquires tighter personal contours with practice, that is, with deliberation about what to do; with appreciation of the unforeseen consequences of one’s chosen action; with discussion with others about what one has done; and with comparisons between one’s present self and one’s ideals, etc. Not any contours will do, of course: it would not make sense to be loyal to dirty underwear (at least not without further explanation); but within the limits of intelligibility, the precise meaning of the word loyalty for me (as opposed to the dictionary definition, which means little to me) will be revealed by the sort of things I am loyal to, under what sort of adverse circumstances, within the narrative context of my ongoing life.

Now let me return to my accusation of hijacking. My worry is that an individual student’s unthinking confidence in the tighter meaning of his ethical concepts will be shaken by the introduction of this technical vocabulary, and by the methods of storage and application presupposed by it. Not shaken completely, only in the classroom and other formal settings where the big ethical puzzles have to be solved; the student will continue to use his own words outside class and clinic when, say, he angrily accuses his father of disloyalty for cheating on his wife. The new words he uses in the formal contexts of class or clinic will not, however, be his; he will, as it were, be going through the motions for the sake of respectability or gravitas or out of fear of being seen as an intellectual amateur. The hijacking is more insidious, for it only starts with the technical jargon, and then proceeds to redefine ordinary words like duty, harm, or need. Some stock expressions are now bandied about as an alleged justification of almost everything, to the point where there is no precise meaning at all: “quality of life” and “best interests” are the most notorious culprits.

What’s more, with the new expert language comes a misguided notion of ethical expertise; but as I have argued elsewhere,9 there are no ethical experts in the way that there are medical experts. The doctor’s knowledge and skill command authority because of his special standardised training and experience: within certain limits, I am rarely in a position to challenge his decision about what would be best for my headaches, precisely because I lack his training and experience. In ethics, however, there is no such reluctance to criticise others’ ethical behaviour, precisely because the language of ethics is common to all of us.

Surely it is an advantage to standardise a vocabulary, for without one, how could different parties even agree on what exactly they were disagreeing about? For the same reason, surely it is better to standardise patterns of acceptable argumentation—for example, by stressing the need to refer to ethical theories or the Four Principles? Otherwise we would be at the mercy of different personal definitions of ethical concepts, and would come dangerously close to a mere exchange of inarticulate ethical intuitions that could only be resolved by some explicit or implicit political compromise (I use the word “political” in the wider sense of being about relational power and self interest, as opposed to a joint search for moral truth).

This, however, is too pessimistic about the possibility of frequent resolution of disagreements in ordinary situations using ordinary words. Yes, some disagreements seem intractable, even after full discovery of inferential errors and relevant facts, but these are—and have to be—exceptional. There is an issue of the half full or half empty bottle here, but the main point is that no relationship, no family, no institution, no society could ever hold itself together without an enormous amount of ethical agreement, requiring only occasional compromises, and fully compatible with the finely tuned personal variations in semantic contours I discussed above. Our ordinary ethical vocabularies are more than equipped to make sense of and deal with (though not necessarily solve) all ethical problems thrown up by medicine.

The dangers of medical ethics can be phrased in terms of responsibility. Onora O’Neill10 describes the difference between genuine and corrupt forms of responsibility in modern bureaucratic institutions. The new ideals of accountability, guidelines, monitoring, target setting, and transparency in public bodies are supposed to increase the personal responsibility of the bureaucrats by giving them less scope for capricious and unsupported decisions. In the same way, a greater emphasis on patient autonomy in medicine is supposed to make doctors more responsible, in the sense of being more responsive to the patient’s needs and wishes.

Instead, these initiatives might have had the opposite effect of encouraging a culture of back covering: the bureaucrat will seek to ensure that every step is governed by an explicit rule or instruction, and he will thus pass responsibility onto the rules, the structures, and his superiors; it is no longer his problem. The doctor can pass too much responsibility onto the competent patient, muttering “caveat emptor” under his breath. In a similar way, I suggest, the jargon of medical ethics alienates the decision from the decision maker. As long as he follows the Four Principles recipe and solves the problem, he can go home and sleep well. Genuine forms of responsibility, on the other hand, require an atmosphere of trust that would allow a declaration of the form: “here is how I saw the situation, and these are my reasons for acting as I did”. The genuinely responsible bureaucrat stands behind his words, is ready to take the consequences of a mistake or misjudgment, and will lose sleep at night knowing that his decision (and especially the victims of his decision) might come back to haunt him.

CONCLUSION

What might be some practical conclusions for teachers of medical ethics in schools and hospitals? Obviously, to avoid the jargon, and to invite students and doctors to discuss problems as far as possible in their own words. There is certainly a central place for analysing the quality of arguments, pointing out inferential mistakes, inconsistencies, and illegitimate generalisations. There is also a place for good old-fashioned conceptual analysis: what does “need” mean in health care and in other contexts where we use the word; what sort of things do we need as opposed to want; are some needs more important than others, and can they be objectively measured? Also, of course, there is a place for detailed examples drawn from a clinical context, with which students may not be familiar; but with Gaita, we should always remember to ask: what sort of words would any of us use to express genuine remorse? For those are the sort of words we need here.
