Ottawa Statement does not impede randomised evaluation of government health programmes
Charles Weijer,1 Monica Taljaard2

1 Rotman Institute of Philosophy, Western University, London, Ontario, Canada
2 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada

Correspondence to Dr Charles Weijer, Rotman Institute of Philosophy, University of Western Ontario, London, ON N6A 3K7, Canada; cweijer@uwo.ca

Abstract

In this issue of JME, Watson et al call for research evaluation of government health programmes and identify ethical guidance, including the Ottawa Statement on the ethical design and conduct of cluster randomised trials, as a hindrance. While cluster randomised trials of health programmes as a whole should be evaluated by research ethics committees (RECs), Watson et al argue that the health programme per se is not within the researcher’s control or responsibility and, thus, is out of scope for ethics review. We argue that this view is wrong. The scope of research ethics review is not defined by researcher control or responsibility, but rather by the protection of research participants. And the randomised evaluation of health programmes impacts the liberty and welfare interests of participants insofar as they may be exposed to a harmful programme or denied access to a beneficial one. Further, Watson et al’s claim that ‘study programmes … would occur whether or not there were any … research activities’ is incorrect in the case of cluster randomised designs. In a cluster randomised trial, the government does not implement a programme as usual. Rather, researchers collaborate with the government to randomise clusters to intervention or control conditions in order to rigorously evaluate the programme. As a result, equipoise issues are triggered that must be addressed by the REC.

  • research ethics
  • clinical trials
  • policy guidelines/inst. review boards/review cttes

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


All too often governments implement novel programmes in public health and health systems with little or no evidence that they are safe, effective or a wise investment of public funds. Research evaluation should be part and parcel of the roll-out of all new government health programmes. The need for research is being acknowledged gradually. For instance, in 2011, a report by the UK House of Lords concluded that rigorous evaluation plans should accompany all publicly funded community behaviour change interventions.1 In this issue of JME, Watson et al add their voices to the call for research on government health programmes, identifying ethical protections, such as those found in the Ottawa Statement on the ethical design and conduct of cluster randomised trials,2 as an obstacle.3 They say that ‘current guidelines act as a hindrance [to programme research] because they assume that researchers have responsibility’ for the programme itself.3 Thus, they argue that ethical standards, including those of the Ottawa Statement, ‘should be relaxed’.3

In this response, we focus on the sort of prospective randomised evaluation of government health programmes covered by the Ottawa Statement. Public health and health systems programmes may be prospectively evaluated in parallel arm or stepped wedge cluster randomised trials (CRTs). In parallel arm designs, clusters are randomised to receive either the study intervention or a control condition for the duration of the study; in a stepped wedge trial, all clusters begin in the control condition and, over time, groups of clusters cross over to the study intervention, so that all clusters (although not necessarily all individuals) are receiving the intervention at the end of the study. All too commonly, researchers label these CRTs as audit, quality improvement or service evaluation. To their credit, Watson et al point out that programme evaluation CRTs are research and should be reviewed by a REC.3
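To make the contrast between the two designs concrete, the following is a minimal illustrative sketch in Python (ours, not part of the original article): the cluster labels, the number of clusters and the number of time periods are hypothetical choices made only for illustration, and the example assumes the clusters divide evenly into crossover steps.

```python
# Illustrative sketch: allocation schedules for a parallel arm versus a
# stepped wedge cluster randomised trial. All values are hypothetical.
import random

clusters = [f"cluster_{i}" for i in range(1, 7)]  # six hypothetical clusters
periods = 4                                       # four hypothetical time periods
rng = random.Random(2024)

# Parallel arm: each cluster is randomised once and stays in the same arm
# (intervention or control) for the whole study.
shuffled = clusters[:]
rng.shuffle(shuffled)
parallel = {c: ("intervention" if i < len(shuffled) // 2 else "control")
            for i, c in enumerate(shuffled)}

# Stepped wedge: all clusters start in the control condition and cross over
# to the intervention in randomly ordered steps, so every cluster is
# receiving the intervention by the final period.
order = clusters[:]
rng.shuffle(order)
step_size = len(order) // (periods - 1)            # clusters crossing over per step
crossover_period = {c: 2 + i // step_size for i, c in enumerate(order)}

def stepped_wedge_condition(cluster, period):
    """Return the condition of a cluster in a given period (1-indexed)."""
    return "intervention" if period >= crossover_period[cluster] else "control"

for c in clusters:
    schedule = [stepped_wedge_condition(c, t) for t in range(1, periods + 1)]
    print(f"{c}: parallel={parallel[c]:12s} stepped wedge={schedule}")
```

The sketch only displays the two allocation schedules; it makes plain that in either design the timing and location of the programme's roll-out is fixed by the randomisation scheme rather than by the government's ordinary implementation decisions.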

While the CRT as a whole is research, for Watson et al the government programme under evaluation falls outside of the authority of the REC and therefore is out of scope for review. They claim that these ‘study programmes, initiatives, policies and interventions … would occur whether or not there were any concurrent, coincident, or otherwise related research activities ….’3 They believe this ‘absolve(s) researchers from responsibility for the programme itself’.3 We are told researchers must ‘obtain [REC] approval for all the things they do in their role as researcher [including] … data collection, [and] analysis ….’3 But REC review of the programme under evaluation is not required. The ‘researchers may not be well placed to supply a rationale [for the programme] … much less justify it’.3 Further, government ministers or hospital chiefs would be reluctant to submit a programme to such review. In their words: “It would be perverse if, by agreeing to the evaluation of an intervention,…[those in charge] had to submit the intervention to a [REC]”.3

There is something intuitively appealing about Watson et al’s view. If the government is rolling out a programme anyway, why impede its evaluation with REC review of the programme? The view is intuitive; and, like some common-sense intuitions, it is quite wrong. In shifting responsibility away from researchers and RECs, Watson et al appeal to the ‘democratic mandate’ of governments.3 But the government has no mandate to experiment on its people without independent oversight. The history of research ethics is replete with examples of governments conducting unethical research in the name of the common good, be it German wartime experiments to treat hypothermia in downed pilots, or American radiation experiments conducted on soldiers, patients and prisoners.4 Legitimate research requires review by the REC to ensure the rights of individuals are not violated in the name of the many.

Further, the sine qua non of research ethics is not researcher responsibility, but the protection of research participants. Research commonly involves multiple stakeholders, be they physicians, healthcare institutions, insurers, pharmaceutical companies or governments, each with control of differing aspects of the study (eg, patient care, facility access and standards, study intervention, reimbursement and confidential data). To suggest that the boundary of what is legitimately reviewed by the REC is that which is within the researcher’s control seriously distorts research ethics. The job of the REC is to protect the liberty and welfare interests of research participants from all aspects of a research study. Insofar as a government programme evaluated in a CRT impacts research participants, its benefits and harms must be assessed by the REC.

To be clear, Watson et al say that CRTs evaluating programmes are research and should be reviewed by a REC. But as the government programme per se is not within the researcher’s control or responsibility, it should not be part of the ethical review. In their words, “the approvals necessary for the researcher should be limited only to factors over which the researcher has control [emphasis in original]”.3 Predictably, this has the consequence of minimising ethical issues in programme evaluation research. They say: “there is little that is ethically fraught about randomising roll-out in a situation where every site is going to receive the programme anyway ….”3 And yet programme evaluation CRTs raise prominent equipoise considerations. In a CRT, the government does not implement a programme as usual. Rather, researchers collaborate with the government to randomise provinces, communities, neighbourhoods or hospitals to intervention or control conditions in order to rigorously evaluate the programme. With rare exceptions of lotteries to allocate a scarce resource,5 governments do not allocate programmes to citizens by chance; the choice of a cluster randomised design signals that researchers and government are working together to plan the roll-out of a programme so that it may be evaluated. As a result, equipoise issues are triggered that must be addressed by the REC.

First, the Ottawa Statement says that the ‘researcher must ensure that the study intervention is adequately justified. The benefits and harms of the study intervention must be consistent with competent practice in the field of study relevant to the CRT’.2 The protocol should justify the choice of study design. It should also describe the programme in detail and explain the justification for its planned use, including evidence supporting its safety and effectiveness. The task of the REC is to ensure that the intervention is not known to be positively harmful, and conversely that its effectiveness in the study context has not already been established. While evidence may exist regarding the effectiveness of public health and health systems interventions, it is commonly lacking in the specific country or health system setting, and this may be sufficient to satisfy the equipoise requirement.

Watson et al claim that researchers may not be able to provide this information. They say, “researchers may not be well placed to supply the rationale (after all it is not their choice), much less justify it, especially since their role may be one of surfacing the rationale and assessing its soundness”.3 This is implausible. Researchers and their government partners planning a CRT should be able to explain clearly what the intervention is and why it is being implemented, even if later evidence leads to a re-examination of its rationale. They go on to consider cases in which a ‘programme is highly likely or certain to cause harm’.3 Having removed the ethical evaluation of the programme from the authority of the REC, Watson et al state that such instances ‘should be judged on a case-by-case basis’ (judged by whom and according to what standard, one might wonder).3 They go on to admit that their view may allow serious equipoise violations because ‘an evaluation might be ethical even if a programme is not (since the evaluation may, for instance, evidence harm that would otherwise remain concealed)’.3

Second, the Ottawa Statement says that ‘[r]esearchers must adequately justify the choice of the control condition. When the control arm is usual practice or no treatment, individuals in the control arm must not be deprived of effective care or programs to which they would have access, were there no trial’.2 The study protocol should explain what the control condition is and whether it will be augmented in any way. Delaying or depriving research participants of access to a programme that is perceived to be beneficial requires careful scrutiny by the REC. If evidence of programme effectiveness or cost-effectiveness in the study context is lacking, however, a usual care control may be justified. Some have suggested that stepped wedge designs may offer ethical advantages in these conditions, but this claim has recently been challenged persuasively.6 Augmented control conditions have the advantage of engaging control clusters to prevent attrition, but caution is advised as they may bias the study toward the null hypothesis.2

Watson et al deny that the control condition is the researcher’s responsibility and, as a result, it need not be considered by the REC. They say, “the researcher cannot be held responsible for the fact that controls have not received the intervention, since it is the policymaker, not the researcher, who decides where and when to intervene (and where not to do so)”. This is disingenuous. Were the programme not being evaluated, the government would roll out the programme to the whole population. When the programme is evaluated in a CRT, ‘where and when to intervene (and where not to do so)’ is determined by the study design, of which the researcher is plainly the author. Thus, the justification of the control condition must be considered by the REC.

We share with Watson et al the conviction that novel government health programmes must be rigorously evaluated. CRTs are often the only randomised design suitable for this task. But Watson et al claim that ethical standards found within the Ottawa Statement are a ‘remarkably poor fit’ with research evaluation of health programmes and as a result ‘guidelines act as a hindrance’.3 No evidence is offered to support the empirical claim that important programme research is in fact being hindered by existing ethical guidance. In our experience, the Ottawa Statement remains a much-needed guide for researchers designing CRTs and RECs reviewing them.

Acknowledgments

The authors thank Anthony Belardo for editing the paper.

References

Footnotes

  • Contributors CW wrote the first draft of the manuscript. MT provided critical review and revisions of the manuscript. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding This work is supported by a Canadian Institutes of Health Research project grant (PJT-153045).

  • Competing interests CW receives consulting income from Eli Lilly and Company Canada. MT has no competing interests to declare.

  • Patient consent for publication Not required.

  • Ethics approval Ethics approval was not required for this conceptual work.

  • Provenance and peer review Not commissioned; internally peer reviewed.
