Should research ethics committees be told how to think?
G M Sayers

Correspondence to:
 Dr Gwen M Sayers
 Department of General and Geriatric Medicine, Northwick Park Hospital, Watford Road, Harrow, Middlesex HA1 3UJ, UK; gwen.sayers@imperial.ac.uk

Abstract

Research ethics committees (RECs) are charged with providing an opinion on whether research proposals are ethical. These committees are overseen by a central office that acts for the Department of Health and hence the State. An advisory group has recently reported back to the Department of Health, recommending that it should deal with (excessive) inconsistency in the decisions made by different RECs. This article questions the desirability and feasibility of questing for consistent ethical decisions.


“Research Ethics Committees are convened to provide independent advice on the extent to which proposals for research studies to be carried out within the NHS comply with recognised ethical standards.”1 In 1968, the Department of Health first asked the health authorities to establish RECs, but guidelines addressing their function were not produced until 1991.2 Even then, great procedural variability was evident in the workings of different committees.

In 2000, the Central Office for Research Ethics Committees, acting for the Department of Health in England, was charged with improving the operating system of RECs and advising the Department of Health on policy matters regarding the operation of RECs. The idea was to create a common UK system for the process of ethical review.1 The number of RECs and their catchment areas were historically determined to suit local needs, and the resultant overcapacity is currently being rationalised.1 The NHS RECs, managed by the Central Office for Research Ethics Committees, are advisory committees to the strategic health authorities, with delegated authority to issue an ethical opinion. About 15 non-NHS phase I RECs also exist. These committees, although not managed by the Central Office for Research Ethics Committees, are accountable to the Department of Health as the appointing authority, in the same way as the NHS RECs are accountable to the strategic health authorities.

Uniformity of process was implemented through standard operating procedures (SOPs),3 which set out the rules and regulations governing the operation of committees. These now run to 208 pages covering, among other things, REC membership, quorum requirements and procedures for submission, resubmission and appeals against the decision of a committee. Even letters to researchers must have standardised wording. The SOPs ensure that all research projects are processed in exactly the same manner and within the same time frame.

The SOPs have met with criticism, both from researchers who find completing the forms unduly time consuming and from some administrators who find that the procedures leave little room for their own discretion, thereby removing, on occasion, their personal responsibility. Nevertheless, the SOPs have clearly abolished the procedural variations previously experienced by researchers dealing with different RECs.

Variation remains, however, in the opinions provided by different committees, and although this variance is probably far less than was reported 15 years ago,4 the Ad Hoc Advisory Group on the operation of research ethics committees (Warner Report) has recommended that “excessive inconsistency amongst committees” should be dealt with by providing appropriate training and by sharing good practice from issues and arguments already explored.1

The idea of excluding excessive inconsistency has evolved from other, perhaps less subtle, notions. The most direct notion—“Should RECs be told how to think?”—provides the title of this article. The term “excessive” is not numerically quantified in the Warner Report, and I use the unqualified term “inconsistency” in this paper. I do so not because I am suggesting that the report advocates complete uniformity, but because the term “excessive”, without a numerical context, is vague. The report quantifies neither the amount of variation currently present nor the amount of reduction in variation that is sought. This article deals only with moral disagreement, because any inconsistency unrelated to the “judgemental aspect of ethical review”,1 if present, can be dealt with by the SOPs.

Edwards et al5 provide three arguments that justify inconsistency between RECs. The justice argument allows for inconsistencies that suit local considerations, given the cultural diversity of different populations. The moral pluralism argument is based on ethical incommensurability. The due process argument holds that the outcome of ethical review is not necessarily what is morally relevant.

I deal with the same matter, first examining the notion of uniformity by considering unlikely and undesirable scenarios. The possible, but probably undesirable, use of training to ensure uniform opinions by the RECs is then discussed. The impossibility of achieving absolute moral agreement is shown, and finally—of greatest importance—the probable, and (surely) undesirable, loss of independence of opinion is portrayed as the likely outcome of a quest for consistency.

THE NOTION OF UNIFORMITY AND OTHER UNLIKELY SCENARIOS

In George Orwell’s6 Nineteen Eighty-Four, Winston Smith referred to his era as the “age of Big Brother”—“the age of uniformity”. Pursuing his theme, Orwell6 shows that the habit of thinking dies in a strictly regimented society (p 197), and apart from conquering the earth, the party aimed at removing all possibility of independent thought (p 201). The aim was not only to enforce obedience to the will of the State but to ensure “complete uniformity of opinion on all subjects…” (p 214). “[Not] even the smallest deviation of opinion on the most unimportant subject can be tolerated” (p 219).

In reality, the only way to remove differences in ethical opinion would be to have a single decision maker; but an ethics committee contains several members who, together, reach a decision. Therefore, even if there were only one mega-committee making all the REC decisions for the nation, its component members could still disagree on ethical issues. The chair would then try to seek a consensus through discussion, failing which the majority opinion would be decisive. This is currently how RECs work and why they sometimes provide diverse opinions. But if diversity in the ethical opinions of committee members were in itself not tolerable, and a committee of more than one member were still required, then applicants would either need to be screened for their moral views before appointment or, which may prove easier, they would indeed have to be told how to think.

Telling people how to think implies that there are no other acceptable ways of thinking. It conjures up images of brainwashing and tyranny. The definition of indoctrination is “to teach (a person or group of people) systematically to accept doctrines, especially uncritically”.7 Education, on the other hand, is “the act or process of acquiring knowledge”.7 To function optimally, REC members need knowledge about ethics and research ethics. They currently obtain this knowledge (and are required to do so under the SOPs) by formal attendance at educational events. This type of training in ethics allows REC members to understand and reflect on the values underpinning the opinions that they reach.

The essence of an ethics committee’s deliberations is evaluative analysis, and removing this freedom from the RECs would remove the very reason for their existence. They have to decide whether a research project is ethical. This is not a factual or scientific judgement—it is a value judgement, and hence may be open to disagreement.

Agreement is more readily reached on non-moral matters, sometimes by using standards or evidence to inform decisions. Thus decision-making pathways exist for the treatment of heart failure, fractured hips and strokes. It is, however, far harder to achieve concordance when deciding whether patients with dementia should undergo cardiopulmonary resuscitation or whether patients with severe brain damage should have artificial feeding. Nevertheless, evidence-based ethics has been suggested as a helpful tool when considering whether some aspects of research proposals are ethical.8,9

POSSIBLE BUT (PROBABLY) UNDESIRABLE USE OF TRAINING TO TELL RECS HOW TO THINK

The Warner Report recommends appropriate training to deal with excessive inconsistencies in opinion. Scofield,10 however, argues that a trained ethicist, although an expert in descriptive ethics and meta-ethics, is not qualified to offer expert opinion testimony in court—an opinion on “what ought to be done”. In his view, claiming to be an expert in normative ethics makes one “a Wizard of Oughts”. This is because, although the ethicist knows about ethical theory, whereas the layperson does not, when they decide what ought to be done (or what they think) the ethicist and the layperson are equally qualified.10

If the RECs were told (or trained) to use only one ethical theory to inform their opinions, a certain amount of inconsistency would probably be eliminated. Ethics, however, cannot show that any one theory is unarguably more valid than another,10 and people informed by the same theory can nevertheless opine differently. Thus, even if the RECs were required to use only principle-based ethics, disagreement might still arise among them, because principles can conflict, admit exceptions and be imprecise.

Intuitionism, as a theoretical tool, would fare no better. McMahan11 describes a moral intuition as a spontaneous judgement unrelated to conscious inferential reasoning, although possibly influenced by unconscious cognitive processing. A compelling intuition may trump a moral theory with which it conflicts, because there are no agreed criteria for determining whether, or to what extent, the theory is justified (p 97). The example he uses is the commonly held, powerful intuition that killing a person is worse than a comparable act of allowing the person to die (p 99). The RECs cannot be told to decide intuitively in order to abolish their differences, because intuitions may be absent or may conflict. Intuitions, further, may be suspect—tainted by social prejudice, self-interest or an aberrant mental process (p 103).

Utilitarianism, too, would not abolish disagreement, particularly as the consequences of a research project are uncertain, and different utilitarians can disagree about the likely consequences. One utilitarian may support a project that could help numerous future patients even though it poses considerable risks to some participants. Another utilitarian may baulk at the disutility of injuring people, even for potentially sizeable communitarian gains. After all, the Warner Report agreed that “the role of the Research Ethics Committees is both to protect the interests of human participants in research and to promote research that is of real value”.1

Training in evidence-based ethics, however, would circumvent the problem of choosing between competing ethical theories; but training, too, has a normative dimension. Data from the literature or commissioned surveys may be used to support the trainer’s own views on how the RECs ought to think. For example, empirical work shows that payment does not blind research subjects to risks,12 and that poorer subjects are not more willing to participate for increased payments than richer ones.13 Yet, the same studies show that payment makes subjects more willing to participate regardless of the extent of risk,12 and most subjects sensitive to payment believed that it influenced their decision to participate.13 Depending, therefore, on whether the trainer favoured paying research participants, an evidence-based instruction prescribing either course of action could be justified by using the results provided by these two studies.

In cases where the opinions provided by different committees, evaluating similar proposals, are shown to differ, the trainer could regard the majoritarian opinion as the right one and instruct all committees to make this type of decision in all future cases. Such training could be regarded as sharing good practice from issues and arguments already explored. On moral matters, however, the majoritarian decision is not always right. We are unable to determine the moral status of a fetus, or whether capital punishment is wrong, by surveying the arguments of a sample of the population.

THE IMPOSSIBILITY OF ABSOLUTE ETHICAL AGREEMENT

To have uniform ethical opinions, there would need to be truth in the following statement: “Ethics should be able to tell you exactly which acts are right and which ones are wrong.”14

This presumes that there is a single right answer to all ethical problems and that this answer is accessible. All that needs to be done is to locate (or manufacture) it and then pass it on to all the RECs, just like a SOP.

This can be tested by assuming that the principle “The infliction of needless suffering is wrong” is true, in which case the full argument follows:

  1. The infliction of needless suffering is wrong.

  2. This act is a case of the infliction of needless suffering.

  3. Therefore, this act is wrong.14

The first statement is a moral principle, and the third statement is the desired conclusion; but the truth value of the second statement depends on the circumstances of the act.14 As the circumstances of an act are not necessarily clear-cut and accessible to everyone, the moral classification of the act can be equivocal. Hence, although most would agree that the infliction of needless suffering is wrong, they may disagree on whether a type of suffering is needless, or even whether it constitutes suffering, making the wrongness of the act a bone of contention.

When Alice visits the dentist she experiences pain, but not needlessly; otherwise it would be wrong for her to visit the dentist. But what if Alice’s dentist wanted to take a biopsy specimen of her gum after filling her tooth, to find out the effect of filling teeth on gums? The biopsy of the gum would obviously cause further pain but, if his research was likely to help her or others in the future, the additional pain would not necessarily be needless.

Even if Alice consented to the additional pain, the question remains how much pain can be added to the clinical equation, and for what gains, before the amount of pain itself becomes wrong. I will argue that disagreement can be expected regarding both the extent of permissible suffering and the good of the gains.

Joel Feinberg15 defines harm as follows:

A harms B when:
 A sets back B’s interests
 And, in a normative sense when:
 A does this in such a manner as to violate B’s rights (or to wrong him).15

But one class of harms that is not properly described as “wrongs” is that to which the victim has freely consented (Feinberg,15 p 35). Volenti non fit injuria is a maxim that, when translated, means “To one who has consented no wrong is done” (Feinberg,15 p 115). Even so, the amount of harm to which a person can lawfully consent has limits, which were articulated by the judiciary in R v Brown.16

The appellants in this case were a group of sadomasochists who committed consensual acts of violence against each other for sexual pleasure. They argued that the willing and enthusiastic consent given by the victims to the acts perpetrated on them, prevented prosecution under the Offences against the Person Act 1861.

The House of Lords ruled (but only by a 3 to 2 majority) that it was not in the public interest for a person to cause actual bodily harm to another for no good reason; and in the absence of such a reason, the victim’s consent afforded no defence to a charge under the 1861 Act. The satisfying of sadomasochistic desires did not, in their Lordships’ view, constitute a good reason. Lord Mustill (dissenting), however, said that the behaviour of the appellants showed no animosity or personal rancour from the inflictor, and no protest was raised by the recipient; in fact it was quite the reverse. He therefore could not view the behaviour as an assault.

A prime function of the RECs is to decide whether the harms to which a research participant is invited to consent are reasonable and not excessive; this is clearly a value judgement that is open to disagreement. If the House of Lords was divided regarding whether adults should be allowed to consent to unusual sexual acts, why should members of RECs—or indeed different RECs—have uniform opinions on difficult research ethical issues?

On one occasion, Harrow REC had to decide whether a project that entailed a slight risk of death to well volunteers was ethical. The Declaration of Helsinki (para 8) states:


 Medical research involving human subjects should only be conducted if the importance of the objective outweighs the inherent risks and burdens to the subject. This is especially important when the human subjects are healthy volunteers.

In practice, it is difficult to determine how important a scientific objective is, and it is prospectively uncertain whether a project will actually answer the scientific question. What caused the greatest concern, however, was the risk (albeit very small) of death.

The committee concluded that the risks of the study outweighed the benefits and, on this basis, provided an unfavourable opinion. This was not a unanimous decision, and on reflection I have classified the positions adopted by the committee members into three groups: there were protectionists, expeditors and ambivalent (or abstaining) members.

Protectionism favours preventing risks and harms that may befall subjects. Expedition favours facilitating research and therefore tolerates greater risks, provided the consenting subjects are adequately informed. Ambivalence is, arguably, the most balanced position to adopt when faced with hard cases. It implies both a desire to expedite research and a desire to protect the subjects.

No doubt other committees might have provided different opinions, and that in itself should not be troubling; after all, this is why an appeals process exists both in law and in the SOPs.

THE PROBABLE AND UNDESIRABLE LOSS OF INDEPENDENCE OF OPINION

Ensuring consistent opinions would necessitate providing RECs with a formula for balancing risks against benefits, and the following assumptions would have to underpin the process:

  1. That a study is always either ethical or not ethical

  2. That on the spectrum of harm (or benefit) there is a cut-off point (x) that can be captured: if the harm is greater than x, or the benefit less than x, the study is unethical (a schematic rendering of this rule is sketched after this list).

  3. That the outcome of the new process would be better than that which we have at present.
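
Purely as an illustration, and not as anything proposed in the Warner Report, the second assumption can be written as a decision rule. The symbols here are hypothetical and appear nowhere in the report: H(S) and B(S) stand for some agreed measure of the harm and benefit of a study S, and x for the supposed cut-off point.

\[ H(S) > x \;\text{ or }\; B(S) < x \;\Longrightarrow\; \text{unethical}(S) \]

Writing the rule out in this way makes plain what consistency would demand: a single metric on which every harm and every benefit can be scored, and a single threshold that every committee accepts. Where that threshold would be set, and by whom, is precisely what is at issue in the remainder of this section.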

Research projects can be roughly divided into three groups. There are some studies that pose no ethical problems, to which virtually all committees would provide a favourable opinion; in the past these studies could have been approved by chair’s action. Then there are some studies that are so clearly dangerous or valueless that virtually all committees would reject them. Finally, there are the indeterminate, or arguably unethical, studies. It is these studies that are most likely to draw diverse opinions from different committees.

To put these inconsistent decisions into perspective there is a useful legal concept called the Wednesbury principle. This principle derives from case law17 and is a standard for unreasonableness. A decision is “Wednesbury unreasonable” when it is one that no other committee, in the position of the deciding committee, would make.

Wednesbury unreasonableness includes, firstly, taking into account what one should not, such as the power and stature of the particular researchers; secondly, failing to take into account what one should, such as real risks of serious injury; and thirdly, plain absurdity.17

When a decision is reversed on appeal, it implies that another committee balanced the equation of harms to benefits differently. In the past, the SOPs permitted resubmission of a project to a second ethics committee, while preventing that committee from accessing the paperwork, and hence the reasoning, of the first committee. Permitting committees to access all the relevant paperwork is a sensible way of avoiding a certain amount of inconsistency in opinion.

The indeterminate situations, in which different committees are likely to provide different opinions, can of course be made more determinate, by teaching committees how to think or telling them what to decide. The core reasoning revolves around whether the potential benefits of a particular study outweigh the potential harms and, when the potential harms are relatively great, whether the freely given consent of the subject entitles the researcher to go ahead. Clearly, the outcome of the deliberations by the RECs will be greatly influenced by who teaches the committees how to think, or what the committees are told to think.

If the determination were set at a highly protective level, only minimal harms, or none, would be tolerated; but it is difficult to conceive of a good reason for protecting research subjects from trivial harms. Most patient advocates realise that, in the long run, it is far better for patients to be able to partake of the benefits of medical research, including the advancement of scientific knowledge, in return for the small harms, or risks of harm, that may befall research participants.

If the determination were set at a very low level of protection, there would be fewer impediments to research, and interest groups other than the research subjects would benefit to a greater degree. These interest groups include the researchers, their affiliated institutions and their sponsors. Collectively, they constitute a strong lobby for expediting research, and their interests could swamp the welfare interests of the participants.

The Warner Report1 concluded that


 It should remain the role of research ethical review to safeguard the rights, dignity, safety and welfare of potential human research participants by providing an independent opinion on the ethical implications of a research proposal.

If RECs were told how to think, their opinions would no longer be independent—they would be implementing a state-driven agenda. If the Central Office for Research Ethics Committees were to substitute its opinion for that of the ethics committees, we would be moving in a dangerous direction. As RECs are meant to provide an independent opinion on whether a research proposal is ethical, that opinion needs to be independent of both science and the state; otherwise participants may not have adequate protection from scientific excesses. The notion that those charged with promoting research could be telling committees how to think is chilling.

A blighted history of unethical medical research continued long after the Nazi atrocities and was brought to light by Henry Beecher18 and Maurice Pappworth.19 As a result, society needs assurance that research is ethical and, however imperfect they may seem, ethics committees provide exactly that assurance. If the RECs were provided with algorithms to ensure that they all reached the same decision, any bureaucrat could do the task, and the only reason to retain ethics committees would be to lull the public and the international scientific community with a false pretence.

In this country, judges bristled when they were informed of proposed legislation telling them how to interpret the Human Rights Act and requiring them to balance the state’s interest in security against the individual’s right not to be tortured20—a right from which derogation is not normally permitted. By analogy, research subjects are surely better served by RECs whose opinions are independent of the state.

CONCLUSION

The recommendation that excessive inconsistency should be removed from the opinions provided by different RECs is arguably unworkable, because standardising ethical decision making is not akin to standardising medical decision making. Depending on who sets the standards or who chooses the evidence, decisions may veer towards or away from protectionism. Either way, the committees’ independence of opinion would be lost and the opinions of the RECs would no longer be separate from state control.

Acknowledgments

I thank the anonymous reviewers for their very helpful comments and also Charlotte Rose, the OREC Manager for North London, both for her encouragement when I agreed to undertake this project and for her help in clarifying the relationships between the different RECs and their governing bodies.

REFERENCES

Footnotes

  • Competing interests: GMS chairs the Harrow Local Research Ethics Committee.

  • This article was adapted from material presented by GMS at a debate organised by COREC.