Abstract
It is the JME's 40th anniversary and my 20th anniversary working in the field. I reflect on the nature of bioethics and medical ethics. I argue that bioethics and medical ethics have, in many ways, failed as fields. My diagnosis is that better philosophy is needed. I give some examples of the importance of philosophy to bioethics. I focus mostly on the failure of ethics in research and organ transplantation, although I also consider genetic selection, enhancement, cloning, futility, disability and other topics. I do not consider any topic comprehensively or systematically, or address the many reasonable objections to my arguments. Rather, I seek to illustrate why philosophical analysis and argument remain as important as ever to progress in bioethics and medical ethics.
Coercion, discrimination and why medical ethics needs philosophy, better philosophy
Objecting to genetic selection and cloning, Leon Kass writes,
A third objection, centered around issues of freedom and coercion… comes closer to the mark. … [T]here are always dangers of despotism within families, as parents already work their wills on their children with insufficient regard to a child's independence or real needs. Even partial control over genotype—say, to take a relatively innocent example, musician parents selecting a child with genes for perfect pitch—would add to existing social instruments of parental control and its risks of despotic rule. This is indeed one of the central arguments against human cloning: the charge of genetic despotism of one generation over the next.1
This objection from ‘coercion’ is the objection that Michael Sandel gives to genetic selection, which he calls ‘hyper-parenting’.2 In a similar vein, Jürgen Habermas argues that germline enhancements would represent a threat to the enhanced child's freedom because the parent's choice of enhancements would not only imply their endorsement of particular goods, but also communicate to their child that they expect her to pursue those goods.3 These expectations, Habermas suggests, may serve to hinder the child's freedom to do what she wants, when her desires do not align with her parent's expectations.4
The paradigm case of coercion is the robber who stops you and says, ‘Your money or your life’. Coercion involves the restriction of freedom (a reduction of options) that causes a person to do what she does not want to do. Coercion is wrong when it harms a person or fails to respect that person's autonomy. That is a conceptual analysis of coercion.
Even professionals working in bioethics (which includes medical ethics), including Leon Kass, misuse this term. Embryos cannot be coerced since they are not persons and lack freedom of will. But more importantly, future people cannot be coerced by the act of genetic selection or cloning. Imagine that IVF produces two embryos, Anne and Bob. The parents choose Bob because that embryo has perfect pitch (or is a clone). Later in life, can Bob complain that his parents coerced him or limited his freedom by selecting him on the basis of having perfect pitch (or being a clone)? No—he owes his very existence (all his options and freedom) to their act of selection. Without assisted reproduction and selection (or cloning), he would not have existed. It is a metaphysical fact that those who owe their existence to a reproductive act cannot be coerced by that act. Even more broadly, they cannot be harmed by that act unless it makes their existence so bad that their lives are not worth living.
Failure to appreciate this metaphysical fact about identity-determining reproductive acts infects legislation and policy. For example, in the UK and Australia, the supposed guiding principle ‘paramount in law’ for making reproductive decisions is the ‘best interests of the child’. But these interests are almost entirely irrelevant to identity-determining reproductive acts such as IVF, genetic selection and cloning. Legislation and practice are based on confusion.5
It is possible that ‘best interests of the child’ does not refer to the particular child produced, but to children in a more ‘impersonal’ sense. Suppose that some 14-year-old girl announces that she intends to have a child. We might say ‘You ought to wait and have your first child later, when you could give this child a better start in life’. When people make such claims, they may not be assuming that this girl would later have the very same child. The phrase ‘your first child’ can be a description which would refer to any first child that this girl later has. If this is the sense of ‘best interests of the child’, then the best interests of the child principle employed by the reproductive legislation is equivalent, at least in part, to the controversial principle of procreative beneficence—that we have a moral obligation to select the best child.6,7
Concerns about coercion equally fail to apply to many acts of germline genetic enhancement. Engineering perfect pitch or increasing intelligence or giving a child a talent increases options and freedom. Coercion in such cases only exists if parents choose to then limit options. But how parents choose to react to their child's abilities or disabilities is entirely independent from what those abilities or disabilities are. Genetic selection or enhancement is neither necessary nor sufficient for hyperparenting. Indeed, selection and parenting are independent acts.
What would coerce Bob? It would indeed reduce Bob's freedom (and open future)8 to force him as a child to practise music for 6 hours a day when he wants to play with his friends. That would be coercion and bad parenting. But couples who select (or clone) can be great parents, and couples who leave selection to nature can be hyperparents. They are entirely independent phenomena.
The objection given by Kass, Sandel and Habermas, that selection involves a limitation of freedom, is based on a conceptual confusion. Or it is an empirical prediction about the character (virtue) of parents who select or clone, for which no evidence has been produced. Indeed, if one's reason for selection were to have the child with the best chance of the best life, that would be virtuous (and a moral obligation according to the principle of procreative beneficence).6,7
A similarly contestable objection to cloning is that clones would live in the shadow of the person from whom they were cloned, burdened by the expectations of those around them to live a certain way,9 or that a clone would be discriminated against in various ways. This is true today—Prince Charles must live in the shadow of the expectation that he could become king. Nonetheless, no one sees this as a reason against giving birth to heirs. Children who are born to sports stars, say, are burdened with such expectations—but they don't have as great a chance as clones to live up to them.
To say that discrimination against clones is a reason not to bring them into existence is like saying having an African–American child in racist nineteenth century America would be wrong because such a child would be a slave or a victim of discrimination. What is clearly wrong is the racism, or clonism, and not the fact of being black or a clone.
It is certainly possible that human beings would have discriminatory attitudes to clones, or to children produced artificially by IVF, or by mitochondrial transfer, or after genetic selection. But the problem is not the manner of procreation, but the primitive, prejudiced attitudes people have. Discriminating against people because of their mode of creation is a new form of discrimination akin to racism and sexism—in the case of clones, ‘clonism’.
Of course, the fact that sexism (or racism) is wrong is compatible with it also being wrong to have a female child in a very sexist society—if you could reliably predict that she would be abused, used as a sex slave, etc. Still, what we should change, if we can, are the sexist attitudes and practices, rather than resorting to sex selection. Similarly, it may be the attitudes to clones that should be changed.
The conceptual confusion about the nature of coercion infects everyday ethical discourse. Having been on several ethics committees, I have often heard the claim that paying people more than a minimal amount of money to take part in research (say £10) would coerce them. There are even rules against it. A similar objection is given to paying people for their organs—it would coerce them into selling.
This is another conceptual confusion. When the status quo is available, coercion cannot exist. ‘Give me your money or I will give you a lollipop’ is not coercion because you can remain with the status quo—your money and you can reject the lollipop. If a person chooses to take an offer when the status quo is available, then it is because that person believes the offer will make her better off (provided she is competent and rational). It is not ‘against her better judgement’—it is her better judgement.
What these people mean is that research participants or prospective organ sellers would be exploited. Exploitation occurs when a person is made an offer that they would not accept were it not for some background injustice or, more broadly, were they not unjustifiably worse off. The answer to exploitation is twofold. We should either correct the background injustice or make the offer reasonable. Bankers are not exploited when they are offered their massive salaries—and a poor Indian would not be exploited if offered the same salary to do the same job.
To deny people the opportunity to better themselves (as they seek to do when they accept money to participate in research or sell organs) is to limit their freedom, ‘keeping them in their place’. We should give people the opportunity to better themselves. One sufficient response to the problem of payment to participate in research or organ markets is to correct social injustice. But the other response, absent correction of injustice, is to pay people fairly—that is to pay more,10 and to set a minimum fair price, like a minimum wage. So paradoxically, the current system of banning organ markets and paying people poorly to participate in research is wrong.
Coercion and exploitation sometimes coexist in medicine. Consider a catastrophic lethal disease such as motor neuron disease or Ebola. The objection is sometimes put forward that it is coercive to offer dangerous experimental interventions to desperate dying patients. But that is 180 degrees the wrong way around and misguidedly paternalistic.11 It is coercive to limit freedom, options and access to experimental interventions and deny the patient a choice. It can then be exploitation to make an offer of a place on a trial which involves a 50% chance of getting a placebo, when the alternative is death. Such trials may be in the public interest, to obtain greater levels of confidence to make resource allocation decisions justly. But when they are not necessary to make resource allocation decisions according to principles of distributive justice, they are both coercive and exploitative.
These examples show why philosophy is the heart of medical ethics. I have not considered counter-objections or dealt with these issues in a systematic way. What I have tried to show is that good philosophy—which may well include good counter-arguments to all my philosophical arguments in this paper—is essential to understanding and deciding bioethical issues—and law. But it is even more important…
Science and ethics
Ethics has been a part of philosophy for thousands of years. Aristotle produced a famous book called the Nicomachean Ethics. Derek Parfit recently produced a massive 1600-page two-volume set about the nature of ethics entitled On What Matters.
Ethics is concerned with norms and values. Its subject matter is the way the world ought to be or should be. It is about good and bad, right and wrong. Science is about the way the world is, was, will be, could be, would be. Ethics is about values; science is about facts. (Strictly, science is about natural facts. On realist views of ethics, ethics is about normative or evaluative facts.)
David Hume famously described this ‘fact–value’ or ‘is–ought’ distinction. One of his greatest contributions to ethics was to observe that values cannot be read straight off natural facts. To do so is what GE Moore described as the naturalistic fallacy.12 Science and ethics are completely different kinds of enterprises.
This distinction is essential to understanding the failure of much of bioethics and medical ethics. Even if science were complete and we knew everything about the world and ourselves, it would not answer the ethical questions of how we should live or whether equality is more important than maximising the good, or when we should die. The stated basis of the National Health Service is egalitarianism—equal treatment for equal need. But that is a highly contestable ethical principle.13
Every time we decide to act, we employ values. Often these values are not contested—longer good life is better than shorter good life. But nonetheless, they are normative judgements—the subject matter of ethics.
The tendency today is to roll over and ‘scientify’ everything. Evidence will tell us what to do, people believe. But what constitutes sufficient evidence to act on is itself an ethical decision. What level of blood pressure, cholesterol or glucose counts as safe or healthy is like the question of what the speed limit or blood alcohol limit ought to be. It is an ethical judgement about weighing risk and benefit. In Australia the speed limit is 100 km/h; in Germany, it is unlimited. Which is right? It depends on how you weigh convenience, pleasure and economic growth against health. The safest speed to drive at is (almost) zero.
Ethics is not peripheral to medicine and research—it is central. What you study will determine what you will find. It is an ethical decision, as is when you will start treating, or whether to stop treatment.
One excellent example of hidden ethical values is the concept of futility used to limit treatment. There are many definitions.14 Some are quantitative, such as a treatment with a <1% chance of a beneficial effect. But this is not futile. Imagine that you have had a massive stroke and will die, but there is a treatment that has a 1/10 000 chance of saving your life and returning you to full health. Such a treatment is not futile in the way that trying to sew a decapitated head back on is futile (that is, being incapable of achieving the desired result); it is just very unlikely to achieve the desired result.
What people who deploy ‘futility’ arguments usually mean is the treatment is cost-ineffective. Such judgements are most justifiably made as resource allocation and distributive justice decisions.14
Another example of the failure to identify important moral considerations is the consideration of justice by disability activists and those who advance a social constructivist model of disability. Their central claim is that there is nothing inherently bad about disability, and people with disability are only disadvantaged either because of social prejudice/injustice or transition costs.15 While prejudice and injustice no doubt contribute importantly to the disadvantage associated with disability, there are objective elements to the badness of disability (eg, deaf people cannot hear music) and more importantly, justice does not require strict equality.16 To claim that all disadvantage is the result of prejudice/injustice is to claim that resources should be allocated to remove all disadvantage. This implies that we should give absolute priority to the worst off. The finite nature of resources makes eliminating all disadvantage impossible—some inequality would always remain and the only way to bring about complete equality is to ‘level down’ by removing the advantage some will enjoy after all resources are deployed.
Perhaps their claim is meant to be that inequality in opportunity should be as small as possible. This, however, is a controversial conception of justice—prioritarian, sufficientarian and consequentialist conceptions of justice do not give absolute priority to reducing inequality.
As I have argued elsewhere, since there are only enough hearts to transplant two-thirds of the children who need one, an egalitarian position would require that infants with trisomy 18 are given the same chance of a heart transplant as children without severe intellectual disability and with a normal life expectancy.13 If one adopts the sole aim of minimising inequality, such severely disabled infants should in fact be given priority for heart transplantation, since they have profound intellectual disability and a very short life expectancy, even with transplantation. They are the worst off. This is a very implausible account of justice.
Two failures of medical ethics: research ethics and organ transplantation
I am on the Ethical, Legal and Social Aspects Committee of the Human Brain Project and I recently returned from a meeting that left me feeling there is no future for medical ethics. The EU has devoted about a billion euros to the Human Brain Project. Much of this research requires using and sharing huge amounts of data. But requirements to ‘get consent’ from research participants may grind this to a halt. Even the use of deidentified data apparently will not satisfy new European standards protecting privacy and confidentiality and requiring consent.17,18
This is symptomatic of a lethal and widespread malaise. We now have enormous scientific capacity to construct population-level genetic and other databases that could massively enhance knowledge and save and improve lives. But such research cannot be carried out because of ‘ethical’ obstacles and data protection.
When I conveyed the possibility that data could be used ethically without consent, a famous European lawyer and privacy expert responded emphatically and with a kind of moralistic finality:
The first article of the German constitution is that every human being has dignity.
How precisely does that address the issue? It is like saying ‘That is what the Nazis did’—another supposed ethical trump card. It is another objection I have commonly encountered in medical ethics: ‘Eugenics—that is what the Nazis did’. But testing for genetic disorders such as Down syndrome, Fragile X, cystic fibrosis, etc is eugenics. The difference is that it is voluntary, based on sound conceptions of the morally good and good science, and not motivated by racist social Darwinist ideology.19
This problem of large datasets is symptomatic of an obsession with prioritising consent over all other values. Many ethics committees spend huge amounts of time, and delay studies, while they ruminate over the precise wording of plain language statements. Not only is this against the public interest in terms of finding cures for disease, but it also harms participants. I have previously shown how ethics committees fail to understand basic decision theory and the concept of expected harm; this failure led to the avoidable death of Jesse Gelsinger.20 Iain Chalmers and colleagues have shown how the obsession with written consent literally has lethal consequences for trial participants when it delays access to experimental emergency lifesaving treatment, such as tranexamic acid after trauma in the CRASH 2 study.21
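To make the notion of expected harm concrete: as a purely illustrative sketch (the numbers below are hypothetical and not drawn from any actual protocol), decision theory weighs each possible harm by its probability,

\[
\mathbb{E}[\text{harm}] = \sum_i p_i \, h_i ,
\]

where \(p_i\) is the probability of outcome \(i\) and \(h_i\) its severity. On this measure, an intervention carrying a 1 in 10 000 chance of death can impose less expected harm than an apparently safer alternative carrying a 1 in 10 chance of serious lasting injury, depending on how death and injury are weighted; a committee that attends only to the worst conceivable outcome will rank them the other way around.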
A major problem is that many working in medical ethics don't understand the ethical significance and place of consent, or the legitimate grounds for limiting freedom.
‘Consent’ is required in law to protect against a charge of battery, which involves non-consensual bodily touching. It is only recently that layer upon layer of privacy legislation has been introduced to protect data. Consent is important in ethical terms because of the value of autonomy. There are many concepts of autonomy, but central to all is the idea that human beings (as opposed to most other animals) have the capacity to rule themselves according to their own conceptions of the good and the right. Autonomy is about forming and acting on your own conception of the good life. Consent is important in so far as we respect, or might fail to respect, someone's conception of how their own life goes. Of course, it is difficult to predict what people want for their own lives, and the default should be to obtain their agreement to acts that affect them. But when it is impossible, difficult or costly to obtain consent, it can be ethical to proceed without it.
For example, if people wish to donate their organs, or their gametes for posthumous conception, over-riding that wish fails to respect their past autonomy.22 It is positively wrong to let families over-ride the expressed wish of organ donors to donate, even though this is standard medical practice and there is no legal basis for it. People on transplant waiting lists die because of this unethical practice.
Likewise there is no ethical obligation to obtain consent to use data or discard tissue that is not central to a person's life plans and conceptions of their own good. To use someone's discarded hair to stuff a pillow without their consent is not wrong. It might be bizarre, but it is not immoral.
Even more importantly, it is legitimate to restrict freedom and not obtain consent when it is in the public interest. Our freedom is restricted by the law all the time. One trivial example is laws requiring the wearing of a seat belt. Such laws benefit both the individual by reducing risk and society by reducing healthcare expenditure. Of course, there will be some people who are harmed by wearing seat belts—they may even be killed when they would have been thrown clear of an accident. But overall, the benefits of seat belts are judged to significantly outweigh the risks, both for the individual and society, and people are not given a choice.
So, too, the use of data and discarded tissue (and anonymised case studies in ethical discourse) is in the public interest. Even if people do strongly oppose it, or could be identified, it could still be used, just like seat belts, in the public interest. Given the huge advances that could come from our massive information technology capacity, all patient data and discarded tissue should be used, with adequate oversight and compensation systems should any harm result.
There is a moral imperative to perform good research and not unnecessarily impede it. To delay by 1 year the development of a treatment that cures a lethal disease that kills 100 000 people per year is to be responsible for the deaths of those 100 000 people, even if you never see them. I have used this argument to defend a moral obligation to conduct stem cell research.23
But by obstructing lifesaving research by inappropriate and excessive attention to consent, research ethics has probably been responsible for the deaths of many millions of people. Iain Chalmers, a pioneer of evidence-based medicine and meta-analysis, comes to a similar conclusion. In a wise but unread piece, he begins,
I consider the influence of ‘a confused ethical analysis’, the double standard on informed consent to treatment within and outwith controlled trials, and the failure of research regulators to use their powers to reduce unnecessary research and promote full publication of necessary research. I suggest that these problems should be addressed by more thoughtful ethical analyses, more effective protection of the interests of patients by research regulators, and empirical research to inform the future development of research regulation. Because ethicists and research regulators have paid insufficient attention to these issues, I conclude that they have contributed to the avoidable suffering and deaths of millions of people, the vast majority of whom have not been participants in clinical research.24
Organ transplantation is another example of the lethal effects of bad ethics. Organ transplantation is a lifesaving intervention. Millions of people die around the world because of a shortage of organs. But there is no shortage in reality—we just don't use all the organs that could be used, for bad ethical reasons.25 We all have the most basic moral duty to donate organs, as I will now argue.
Why we should not be moral relativists or subjectivists
Most people I meet, including those involved in medical ethics, are moral relativists (they believe that ethics is relative to culture) or subjectivists (what is right is just what people desire). For example, they believe we should obtain consent because the Declaration of Helsinki, the World Medical Association (WMA), or the BMJ ethics committee say we should. They believe that what is right is relative to views or desires of a group, individual or culture. This is deeply wrong and denies the existence of ethics altogether.
Ethics is the study of morality. Morality is different from self-interest. Self-interest or prudence is promoting your own good. Morality is, by definition, in some way other-regarding. Just how much sacrifice of our own good for others, or in what circumstances, is the subject of much dispute. But the basic idea of morality is that it requires a degree of altruism and impartiality to consider and respond to other people's interests.
Peter Singer describes a case of minimal moral obligation.26 Imagine you are passing a small pond and a 2-year-old child is drowning. All you have to do to rescue that child is get your shoes wet. Singer argues that, if morality requires anything, it requires that you save the child's life.
This can be called a duty of easy rescue: when the cost to you of performing act X is small and the benefit (or prevention of harm) to another person is great, you should perform X. This could be called the most minimal moral obligation.
Now this is a moral obligation in virtue of the meaning of the term. If some amoralist, or libertarian or even the WMA says they don't want to rescue the child, or don't think it is a moral obligation, this does not change the fact that morality requires rescuing the child. It is a minimal moral obligation.
When it comes to organ donation, there are many proposed ways of obtaining organs: consent systems, organ conscription, opt-out, priority to those who agree to donate, directed donation, etc. Many people believe that the one we choose is relative to the desires of a given community or the recommendations of some expert group.
But there is a basic moral obligation to donate organs. Why? Because this is not just an easy rescue, it is a zero-cost rescue. Organs are of no use to us when we are dead, but they are literally lifesaving to others. Nonetheless, most people choose to bury or burn these lifesaving resources, and are allowed to. Yet the state extracts death duties and inheritance taxes, but not the most important of the deceased's assets—their organs.
The failure to meet even our most minimal moral obligations is damning. It represents the failure of modern practical ethics. Donating our data and discarded tissue, or providing DNA, to researchers for the discovery of treatments for lethal or disabling disease is also an easy rescue.
Moralism and the future of medical ethics
The moralists appear to be winning. They slavishly appeal to codes, such as the Declaration of Helsinki. Such documents are useful and represent the distillation of the views of reasonable people. Still, they do not represent the final word and in many cases are philosophically naïve. For example, the third principle states ‘The Declaration of Geneva of the WMA binds the physician with the words “The health of my patient will be my first consideration”, and the International Code of Medical Ethics declares that “A physician shall act in the patient's best interest when providing medical care”.’27
The eighth principle states ‘While the primary purpose of medical research is to generate new knowledge, this goal can never take precedence over the rights and interests of individual research subjects’.
These requirements are virtually never met in randomised controlled trials, for the following reason. In order to justify clinical trials, the principle of equipoise was concocted to conform to this kind of principle. It states that clinical trials comparing A and B can only be conducted when clinical equipoise exists—that is, when clinicians are uncertain whether some new treatment A is superior to B. Theoretical equipoise exists when the evidence for A being superior to B is exactly balanced by the evidence that B is superior to A. But as soon as any data accrue to shift confidence from 50/50, theoretical equipoise is disturbed. Continued accrual of participants serves to gain higher and higher levels of confidence that A is better than B, to protect future patients and ensure cost-effective use of limited resources. But it is not in the trial participants’ interests—it exposes half the participants to risk of harm.28 Theoretical equipoise is always disturbed in clinical research, well before trials are terminated.
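The point about theoretical equipoise can be made concrete with a small simulation. The sketch below is purely illustrative: the event rates, block sizes and Bayesian updating scheme are my own assumptions, not a model of any actual trial. It simply shows how the estimated probability that one arm is superior drifts away from 50/50 almost as soon as data accrue, long before a conventional stopping rule would be triggered.

```python
# Illustrative only: a hypothetical two-arm trial with assumed death rates,
# showing how confidence that A beats B departs from 50/50 as data accrue.
import numpy as np

rng = np.random.default_rng(0)
p_a, p_b = 0.10, 0.12      # assumed true death rates on treatment A and control B
block = 200                # patients randomised to each arm between interim looks

deaths_a = deaths_b = n = 0
for look in range(1, 11):
    deaths_a += rng.binomial(block, p_a)
    deaths_b += rng.binomial(block, p_b)
    n += block
    # Beta(1, 1) priors; Monte Carlo estimate of Pr(death rate on A < death rate on B)
    samples_a = rng.beta(1 + deaths_a, 1 + n - deaths_a, 20_000)
    samples_b = rng.beta(1 + deaths_b, 1 + n - deaths_b, 20_000)
    print(f"{n:5d} per arm: Pr(A superior) ~ {(samples_a < samples_b).mean():.2f}")
```

On runs like this, the estimated probability typically moves well away from 0.5 within the first few thousand participants, which is the sense in which theoretical equipoise is disturbed while recruitment continues.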
The justification for continuing such large trials28 is that it is necessary to convince clinicians. This is 180 degrees the wrong way round and assumes a clinical subjectivism about justification of treatment. The ethical issue is rather what level of confidence (statistical significance) ought to convince a reasonable clinician. Just because some clinician wants a p<0.0001 does not make it right to aim for a p<0.0001! That is the naturalistic fallacy. The ethical question is: what is the right p value to aim for? This requires balancing the interests of trial participants, future patients and justice.
The Declaration attempts to address this difficult ethical issue by stating:
When the risks are found to outweigh the potential benefits or when there is conclusive proof of definitive outcomes, physicians must assess whether to continue, modify or immediately stop the study.
But what constitutes ‘conclusive’? p<0.05? p<0.0001? The definition would be different for a dying patient, an administrator responsible for allocating resources or a treating clinician.
Such decisions have literally life and death implications for patients. To take just one example, consider the classic ISIS-2 trial from the late 1980s. Before that trial began, a large meta-analysis had shown that thrombolytics (‘clot busters’ such as streptokinase) reduced the risk of death after heart attack by approximately 20% at the p<0.001 level (OR 0.78; 95% CI 0.69 to 0.90).29 This was apparently not ‘conclusive’, so the ISIS-2 trial30 was commenced (despite another large Italian study being underway). Over the span of this study, there were 238 more deaths in the group receiving placebo than in the group receiving streptokinase. The study found (unsurprisingly) that streptokinase reduced the risk of death by 23% (2p<0.00001, 95% CI 18% to 32%).
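The pre-trial figures can be unpacked with simple arithmetic on the numbers already quoted (treating the reduction in the odds of death as a rough proxy for the relative risk reduction, which is reasonable when event rates are modest):

\[
\mathrm{OR} = 0.78 \;\Rightarrow\; \text{reduction} \approx 1 - 0.78 = 22\%, \qquad 95\%\ \text{CI: } 1 - 0.90 = 10\% \ \text{to} \ 1 - 0.69 = 31\%.
\]

In other words, before ISIS-2 began, the pooled evidence already pointed to roughly a one-fifth reduction in deaths, with the plausible range lying between about 10% and 30%.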
Early in the trial, when only 4000 of around 17 000 patients had been randomised, the data monitoring committee stated that there was ‘proof beyond reasonable doubt’ that streptokinase worked. Why was the trial continued? It was to convince clinicians. As the chairman of the data monitoring committee, the legendary Sir Richard Doll, put it, ‘Involving large numbers of clinicians in the trial predisposes them to accept the results … Participation in a large-scale controlled trial constitutes, in practice, one of the best means of continuing medical education’.31
But surely it is an open and ethical question whether this is the best way to change practice. Perhaps we ought to be aggressively educating clinicians, or constructing mandatory best practice guidelines, or informing patients of existing data, instead of subjecting half of the patients at risk of dying to placebo in a medical education exercise? This is true of many other large clinical trials conducted today.
In short, modern research ethics allows ethically contentious practices to occur32 and imposes arguably unethical constraints on research capable of doing great good.
I left a promising career in medicine to do bioethics because I had done philosophy in 1982 and attended Peter Singer's lectures in practical ethics. The field was new and exciting and there were original proposals and arguments. Singer, Glover, Lockwood, Parfit and others were breaking new ground, giving new analyses and arguments. Now medical ethics is more like a religion, with positions based on faith not argument, and imperiously imposed in a simple-minded way, often by committees or groups of people with no training in ethics, or even an understanding of the nature of ethics.
What medical ethics needs is more and better philosophy—and a return to the adventurousness and originality of its pioneering days. There have been successes—euthanasia and better treatment of animals to mention just two. But the field has in many ways dried up or become dominated by moralists bent on protecting privacy and confidentiality at great cost and ‘getting consent’, and in other ways ‘protecting basic human rights and dignity’. Medical ethics isn't sufficiently philosophical, and when it is philosophical, it's the bad arguments or a narrow range of arguments that often seem to make a difference. And there is the attempted scientification of ethics in empirical ethics, a kind of sociological ethics, surveying people's opinions and practice. But this can never directly lead to answering the question: what should we do?
Most people working in or talking about medical ethics have never studied ethics. This is my 20th year working full time in the field and there is much I still have to learn. I don't think any of the arguments I have given are the final word, or even necessarily right—they are contributions to trying to make progress in thinking about ethics. The path to ethical knowledge is long, and we are almost certainly only near the beginning.
But for many people working in bioethics or medical ethics, or formulating guidelines or policy, ethics is a ‘hobby’. They have no formal training in ethics. Imagine that I were to sit on a cardiological research funding panel, or review a paper in cardiology or stem cell science. It would be laughable. Yet I have 7 years of formal training in medicine and research. Many people ‘doing medical ethics’ have nothing like that training or experience.
The trouble with medical ethics is that there is not enough original, good philosophy. Not that you need a philosophy degree to do good philosophy: John Locke was a doctor; Derek Parfit has no doctorate, only an undergraduate degree in history; Iain Chalmers is not a philosopher. Yet philosophical thinking is the most important activity in medicine and in life—ethics determines what we should do. Science can only tell us how to do it.
Final personal reflections
From time to time, we ought to ask how well we are doing. In my own career, apart from promoting people's careers, I am only aware of two instances where my work did some good. One was after a very intensive prime-time morning interview in which I was defending the use of prognosis, rather than a pure egalitarian approach, as the grounds for distributing organs. It was based on an editorial I wrote for the BMJ,13 and I was engaged in a dialogue with the mother of a child with Down syndrome. At the end, I said we would not face these choices if more people donated organs. A person came up to me and said that, on the basis of that, she had decided to become an organ donor.
When I was Chair of the Department of Human Services Ethics Committee from 1998 to 2002, I did try to make ethics review more efficient and timely, while also addressing important neglected issues.32 I tried (with great support from Rowan Frew, Angela Watt and Jill Hambling) to create a common application form, introduce multicentre review to reduce unnecessary delay, support expert review, produce guidelines and protocols for problematic research with vulnerable patients, and require systematic review and publication of results. Indeed, it was the failure to perform a systematic review that caused the death of Ellen Roche.33 I never knew if this work had any direct positive impact until recently, when a researcher came up to me to thank me for suggesting that research on juvenile prisoners could be conducted ethically without parental consent.34
It is hard to know how much good or harm we have done. But I think we should at least reflect. Modern medical ethics, as a field, seems to me to have failed in many important respects.
When I was a medical student, Professor Sutherland, a brilliant pathologist and teacher, would hammer home to us every morning at the 8:00 am post mortem:
More mistakes in medicine are made by not looking than by not knowing.
In my limited medical experience, that is absolutely correct.
Our scientific progress has been truly amazing. Our ethical progress, however, is less awe-inspiring. Indeed, today it seems right to say,
More harm is done in life by bad ethics than by not knowing.
Good ethics requires good philosophy.
Acknowledgments
Thanks to Ingmar Persson, Roger Crisp, Derek Parfit, Udo Schuklenk, Jonathan Pugh and Brian Earp for comments on earlier versions.
Footnotes
Competing interests I am the Editor-in-Chief of the Journal of Medical Ethics.
Provenance and peer review Commissioned; internally peer reviewed.