In 2002, I wrote an editorial in this Journal arguing that it was time to review the structure and function of ethics committees in the USA, Australia and the UK.1 This followed the deaths of Ellen Roche and Jesse Gelsinger, which were at least in significant part due to the poor functioning of research ethics committees (RECs) in the USA.2 In the case of Ellen Roche, it was the failure to require a systematic review of the existing literature which led to her death. Iain Chalmers and I had previously documented in 1996 the failure of ethics committees to require systematic review.3 In 1998, at the time of revising the previous guidelines and forming the first Australian National Statement on Research Ethics, I made a public submission to the Australian Health Ethics Committee arguing that the new guidelines for RECs should include a person with skills in conducting systematic reviews of the literature and that the requirement to have a religious representative on the committee be replaced with the requirement to have an ethicist. Neither of these changes was made.
I argued, ‘It is time to ask whether institutional RECs should be abandoned in favour of expert committees that cover many institutions—suprainstitutional specialist committees. Supraregional specialist ethics committees could specialise—for example, in genetic research, cancer clinical trials, dermatology, respiratory physiology and each of the specialist areas of medical research. An example of such a committee is provided by the Thoracic Society of Australia and New Zealand research ethics committee, which reviews multicentre trials of respiratory drugs’.4
Around that time, I was involved ‘at the coal face’ as Chair of the Department of Human Services Ethics Committee in Victoria. In a public presentation, I argued:
There are two philosophical views of what ethics review is. On the one view, a subjectivist view, a study is ethical if an appropriately constituted ethics committee follows rules of due process and judges it to be ethical. This leaves open the possibility that committees may legitimately vary in their evaluation of research. On the second view, the objective view, … there are substantive principles and criteria for the evaluation of research, and the role of the ethics committee is to examine whether the research is ethical by evaluating the degree to which the research conforms to these objective criteria for ethics review, and that in some cases, where committees disagree whether a project is ethical or not, some committees may be in error by failing to accurately evaluate whether the research conforms to these principles.5
In a thoughtful article in this issue, Andrew Moore and Andrew Donnelly divide ethics review into ‘ethics-consistency’ and ‘code-consistency’ review.6 This division is orthogonal to the subjective/objective distinction. Ethics-consistency review involves a broad review of ethical acceptability, which could be conducted according to subjective or objective standards. Such review may employ codes, but these ‘code statements’ are ‘attempts to express the best ethical wisdom so far achieved and thus as presumptive ethical guides’. Code-consistency review gives priority to the accurate application of established and applicable codes.
Moore and Donnelly support code-consistent review for a number of reasons. One of these is that ‘One risk of ethics-consistency thinking is that it invites the idea that ethics-consistency reviewers are answerable only to higher masters than, or at least to factors independent of, democratically elected governments’ (ref. 6, p. 12).
In practice, I am inclined to agree with Moore and Donnelly that ethics committees in their current form are best suited to applying codes and not to deeper ethical deliberation. The reason for this is their lack of ethical expertise. However, I do not think that accepting code-consistency review is the only or even best answer to these problems. Some level of ethical deliberation beyond mere application of code seems desirable or necessary for two reasons.
Interpretation
Part of the reason Moore and Donnelly endorse code-consistency review is because of concerns about the ‘inscrutability’ of ethics: ‘Some code content can be known for sure, and one's understanding of it is not improvable; but the corresponding content of ethics can rarely be known for sure, and human understanding of every ethical matter is improvable in principle’ (ref. 6, p. 7).
However, even if codes are to play a role in ethics review, they cannot be the whole story. The reason is that codes must be interpreted. Any code will inevitably be framed in vague and imprecise language like ‘reasonable’, ‘fair’, ‘equitable’ and so on. These vital concepts admit of a variety of interpretations and are inescapably normative concepts requiring normative knowledge, reflection and interpretation. To give just one example, I picked the most famous code governing research, the Declaration of Helsinki, and looked through it until I found a principle or requirement that required ethical interpretation. It was the very first one:
The Declaration of Geneva of the WMA binds the physician with the words, “The health of my patient will be my first consideration”, and the International Code of Medical Ethics declares that, “A physician shall act in the patient's best interest when providing medical care”.
Literally interpreted, this would rule out all non-therapeutic studies involving risk. It would rule out phase I clinical trials which expose patients to risks with no hope of benefits, unless one construes interests to include long-term interests in finding a cure for their disease, or contributing to knowledge, or helping others, etc.
One could object that code-consistency review is analogous to the sort of review performed by law courts; the law also requires interpretation and probably appeals to inescapably ethical concepts. Still, courts are applying legal rather than ethical standards, and similarly Moore and Donnelly could claim that, on my view, RECs are still applying code-based rather than ethical standards. Also, one might claim that ‘reasonable’, ‘fair’, etc are not inescapably ethical. The law also refers to such terms but it often gives them specific legal meanings, often determined by previous legal judgements. One could similarly have specific code-based meanings, for example, determined by previous ‘case law’ in the form of previous REC judgements.i
Whether one calls RECs ethics committees or quasi-legal committees is a terminological matter. The legal analogy is revealing. To function as a judge, one requires a legal education of considerable length and depth. One does need to make sophisticated normative judgements, interpretations and analyses in order to apply the law. It is the same, I will argue, with ethics committees. They similarly need to deepen their expertise in making the normative or ethical judgements that are necessary to apply, interpret and delimit codes.
Context sensitivity
Principles, rules and theories are essential to non-relativistic and objectivist ethics. But they require interpretation or application in the context of the actual facts of a given situation. Take Principle 28 of the Declaration of Helsinki:
For a potential research subject who is incapable of giving informed consent, the physician must seek informed consent from the legally authorised representative. These individuals must not be included in a research study that has no likelihood of benefit for them unless it is intended to promote the health of the group represented by the potential subject, the research cannot instead be performed with persons capable of providing informed consent, and the research entails only minimal risk and minimal burden.
In general, this is a reasonable principle. A new medication for schizophrenia should be trialled on patients who are competent and free to consent, not those who are involuntarily treated and acutely delusional.
This principle is based on the value of obtaining people’s consent to do things to them. But this is a value which must be weighed against other values. Arguably, it was because consent was given too much value that the ethics committee reviewing the gene therapy trial for ornithine transcarbamylase (OTC) deficiency required the investigators to recruit adult participants, like Jesse Gelsinger, with the mild form of the disease rather than newborn babies with the lethal and severe form of the disease. Such a gene therapy trial is certainly not minimal risk or burden (indeed it was lethal), and there were participants who were capable of consenting, like Gelsinger. But because the expected harm was greater to adults than to newborn babies, who would die regardless, I argued that this trial should have been done on incompetent newborns, as they had less to lose.2 I argued that the death of Gelsinger was in part due to prioritising consent over minimising expected harm and failing to understand the concept of expected harm. It was, in short, a failure of philosophical understanding of practical ethics.
Peter Douglas has also addressed these shortcomings of ethics committees, arguing that they should stick more closely to their code, as Moore and Donnelly do.5 But Douglas also argues, as I have, for increasing their ethics expertise. Moore and Donnelly do not address in detail the concept of ethics expertise. They do admit that ‘ethical thought and response are always fallible and potentially improvable’ (ref. 6, p. 13). But they fear that ‘[d]ue to the relative inscrutability of ethical considerations, ethics-consistency review would tend in practice to drift into the fundamentally different but epistemically easier practice of treating the say-so of boards as being what makes reviewed activities ethically acceptable or unacceptable’.
Douglas refers to and defends Singer’s concept of ethical expertise.7 He argues that ethical expertise of human REC (HREC) members could be enhanced in two ways. The first is by including as ‘one of the membership categories of an HREC, a person with ethical expertise’. This is what I have argued for. The second is ‘to raise the level of ethical expertise of all HREC members’ through training and rigorous referral to the ethical code or National Statement.
Douglas notes that, consistent with my suggestion of creating more centralised expert review:
…it is worth noting that a report commissioned by the UK Department of Health which was released in 2005 found that “the totally voluntary systems of RECs [Research Ethics Committees] may not be sustainable and, indeed, may no longer be appropriate”, recommending that “It is timely to rationalise further the number of RECs, with more intense operation for the smaller number resulting”.5
I share Moore and Donnelly's concerns that institutional ethics committees, lacking real moral expertise and knowledge, will slide into subjectivism and ethical relativism, as far as their codes will allow them.
Moore and Donnelly’s solution to context sensitivity is:
If a board's code-consistency review appears to generate a repellent decision, it should carefully re-examine this. If a repellent decision still seems unavoidable, it should take advice from experienced practitioners and other wise informants. If this too fails to secure any code-consistent and non-repellent path, the board should alert the relevant authorities and pursue code reform with them.6
Both Moore and Donnelly and Douglas believe that improving the code-consistency role of ethics committees is important for improving ethics review. If institutional ethics committees continue (and it seems they will), then I agree. But merely focusing on their code-compliance role will fail, for reasons of interpretation and context sensitivity.
What then should we do? We should adopt the framework offered by Moore and Donnelly and the practical suggestions made by Douglas, but we also need suprainstitutional ethics-consistency committees. Moore and Donnelly appear sympathetic to this, as they refer to ‘authorities’ capable of reforming the code. But what we need are specialist ethics committees, staffed by people with the right skills and experience to identify risks and to engage in ethical reflection and deliberation. Ethics is difficult, but there can be better or worse answers.
Consider a controversial research proposal: lethal research. Research which will inevitably kill a human being is not permitted under any research code. But should such research in principle ever be approved? I can think of at least one case where it perhaps should be. Consider a person in a jurisdiction which permits voluntary euthanasia. Imagine that this person, John, has terminal liver cancer and wishes to die now rather than suffer and die in the further future. This is an entirely reasonable wish. But let us assume also that John has an altruistic streak—he would like some good to come of his death. He volunteers for a highly risky experiment. Perhaps it is some new phase I trial, some new intracranial device implantation or a novel surgical procedure. Or perhaps he wishes to contribute to knowledge of severe infections, like Ebola, and would participate in a lethal challenge study.
John knows he will fall asleep, just as he would with conventional euthanasia, and never wake up. But in the interval between him falling asleep and him finally dying, he will contribute to science. He will, in effect, have donated his body to science. Instead of no good coming from his death, some good would come of it. We could call this voluntary research euthanasia (VRE).
How long should a person be kept alive but unconscious for the purposes of research? What kinds of interventions could be tested? Must there be a reasonable prospect of significant advancement of knowledge? Who else besides the competent patient must approve of such research?
These are all questions which a qualified ethics committee could address. But what counts as qualified and how should they deliberate about such a controversial procedure?
What should a real ethics committee doing ethics-consistency evaluation of complex proposals actually do? The best answer I have found is reflective equilibrium, using their code, other international statutes, law, declarations, moral principles, theories and intuitions. John Rawls described this procedure of reflective equilibrium for deciding on principles of justice, but it is appropriate for making any policy-level or institutional decision.
The analogy with law is again instructive. Courts exist at various levels, from lower to higher. Higher courts are tasked with more fine-grained interpretation, but in some cases law is reformed through the evolution of case law. It is true that Parliament may set law through a legislative process, and perhaps this is what Moore and Donnelly have in mind when addressing code reform. But in my view, Parliament (or some similar democratically elected institution) is neither the only nor the best way to develop research ethics. I believe having expert suprainstitutional ethics committees, or ‘higher courts’, for appeal and review of controversial cases would better enable progress, using reflective equilibrium and a case law approach.
Rawls claims we should
…see if the principles which would be chosen match our considered convictions of justice… There are questions which we feel sure must be answered in a certain way. For example, we are confident that religious intolerance and racial discrimination are unjust. We think that we have examined these things with care and have reached what we believe is an impartial judgement not likely to be distorted by an excessive attention to our own interests…
He argues that in attempting to describe a just distribution of, say, wealth and authority,
[w]e begin by describing it so that it represents generally shared and preferably weak conditions. We then see if these conditions are strong enough to yield a significant set of principles. If not, we look for further premises equally reasonable. But if so, and these principles match our considered convictions of justice, so far well and good. But presumably there will be discrepancies. In this case we have a choice. We can either modify the account of the initial situation or we can revise our existing judgements, for even the judgements we take provisionally as fixed points are liable to revision. By going back and forth, sometimes altering the conditions of the contractual circumstances, at others withdrawing our judgement and conforming them to principle, I assume that eventually we shall find a description of the initial situation that both expresses reasonable conditions and yields principles which match our considered judgments duly pruned and adjusted. This state of affairs I refer to as reflective equilibrium.8
For example, the value behind VRE is altruism. An individual ought to be able to sacrifice his or her interests for his or her fellows. Of course, he or she ought to be competent to do so, understand what he or she is doing and not be coerced by others. This applies to donating blood, bone marrow, some of one’s liver, one kidney or even two kidneys to others.
Medicine and research ethics tend to be focused on the individual’s own interests: procedures can only be carried out on an individual if they benefit him or her, or at least do not harm him or her. But this is an overly individualistic or selfish ethics. Individuals do often rationally wish to benefit others.
By pursuing such arguments, and comparing other areas of self-sacrifice, such as war,9 we could identify the nature and limits of self-sacrifice in the public interest. Whether this would extend to supporting VRE in a particular circumstance remains an open question.
Importantly, Rawls gives a detailed description of parties to rational deliberation (he calls them ‘competent judges’), and this would be a good start in selecting ethics committee members. They should be knowledgeable about the facts and about the consequences of the various courses of action. Crucially, they should be ‘reasonable’. There are four criteria for reasonableness: (i) being willing to use inductive logic, (ii) being disposed to find reasons for and against a solution, (iii) having an open mind, and (iv) making a conscientious effort to overcome one's intellectual, emotional and moral prejudices.10 Lastly, they are to have ‘sympathetic knowledge…of those human interests which, by conflicting in particular cases, give rise to the need to make a moral decision’.10
Ethics is stifling innovation. Research is good. We have a moral imperative to engage in it. We need proper philosophical ethical review. But the bureaucracy needs to be slashed and, as I said over 15 years ago, the nature and structure of ethics review need to be improved.
Footnotes
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.
i. Thanks to Tom Douglas for these points.