In 1979 the US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research delivered a set of guidelines for the ethical conduct of research on human subjects.1 In developing these guidelines, subsequently known as The Belmont Report, the Commission was "...directed to consider: (i) the boundaries between biomedical and behavioural research and the accepted and routine practice of medicine" (p1) and to outline a set of ethical principles that would specifically govern research activities. The Report notes that maintaining this distinction is important to ensure that all research activities are subjected to ethical review and, while it acknowledges that distinguishing research from clinical care is less easy in some cases, it suggests that doing so is, for the most part, a relatively simple and straightforward task.
Forty years later, biomedical activities appear more complex: clinical activities are hybridised, trial design is no longer aimed solely at improving the evidence base but at fostering closer integration with clinical activities (London, see page 409), and learning health systems reuse individuals' health data to generate real-time improvements in patient care.2 In short, the conceptual boundaries between research and clinical care do not appear to be as distinct as the Belmont Report implies. Two papers in this issue (Dheensa et al (see page 397) and Ballantyne and Schaefer (see page 392)) address some of the ethical challenges generated by the merging of research and clinical care.
The UK’s 100 000 Genomes Project (100kGP) is an example of a biomedical development in which research and clinical care are no longer understood as distinct activities. Patients in the 100kGP are offered clinical genomic sequencing on the understanding that their health data will be used for research purposes. Dheensa et al note that the 100kGP was designed with the dual purpose of providing patients with a clinical diagnosis or personalised/targeted treatment and providing academic and/or commercial researchers with access to patient data. They suggest that because of its ‘hybridised’ nature, the 100kGP differs from therapeutic research, primarily because it explicitly and deliberately mixes research and clinical goals. They suggest that the 100kGP has a number of characteristics of learning healthcare systems,1 insofar as clinical sequencing data is used to improve our understanding of genomic variation, which will result in future improvements in patient care.
In their paper Dheensa et al outline the findings of an interview study of stakeholder views of the 100kGP. It is clear that many of the individuals interviewed were uncertain, and in some cases uncomfortable, about the hybridised nature of the 100kGP. Among other things, they pointed to: an initial need to frame the 100kGP as either research or clinical practice; the tensions generated by individuals participating without a full and clear understanding of the project and of the reporting of individual results; the need to manage patient-participants' expectations; and the overwhelming value of hybridised activities for patient care. While many of the interviewees acknowledged the need for innovative forms of ethical governance of these hybrid activities, few were keen to speculate on the form these might take.
The authors do not avoid this challenge, and one of the most interesting aspects of the paper is their discussion of where patient-participants stand in relation to hybrid activities, including whether they have an obligation to participate in healthcare-related research. While it is clear the authors think that they might (they discuss at length Faden et al's ethical framework for learning healthcare systems, which suggests that patients do have an obligation to participate in certain types of observational research1), they are clearly reluctant to come out in direct support of grounding research participation on such an obligation. They go on to consider alternative solutions, suggesting that patients involved in hybrid activities could be encouraged to choose the types of research for which their data are used, arguing that this would aid transparency, build trustworthiness and, as a consequence, undermine the notion that hybrid activities involve undue inducement to participate. They note, however, that facilitating individuals' choice in these situations is difficult to achieve, as the goals of the research in these projects are unknown at the outset and will normally change over time. Thus, the choices that can be articulated at the point of consent may be so vague as to be meaningless, which suggests that obtaining broad consent may be the best way to proceed. An alternative solution is discussed, namely delegating responsibility for making these choices to access committees, which would be charged with ensuring that data usage is in the public interest. They note, however, that such committees may not be representative and may struggle to define the public interest and, therefore, what types of research should be allowed. In the end this paper does not offer solutions to the problems generated by hybridised activities such as the 100kGP, but that is not its purpose.
It sets out to articulate and provide examples of some of the challenges generated by the inherent ethical and conceptual ambiguities of hybrid biomedical activities and to highlight the need for new and innovative ethical solutions to these challenges; in this respect it is successful.
While Ballantyne and Schaefer do not explicitly focus on hybridised biomedical activities, their paper, which examines consent waivers for secondary research on clinical health data, addresses issues similar to those raised by Dheensa et al. In a nutshell, Ballantyne and Schaefer claim that individuals have an obligation to participate in health data research and that this obligation provides grounds for a consent waiver for all secondary research uses of identifiable healthcare data. While in some jurisdictions researchers can obtain a consent waiver if it is impracticable to gain individual consent, the authors note that the impracticability test penalises small-scale or 'niche' projects that require smaller, more focused datasets, with the result that certain patient groups can be excluded from research participation. They argue that a consent waiver for all secondary uses of health data should be granted, not on the grounds of practicality, or even inclusivity, but rather because health data created in, and by, a public health system is a public resource that should be used for the public good.
Ballantyne and Schaefer argue that a number of regulatory changes are necessary if unconsented secondary usage is to become the norm, namely: improvements in data security, an obligation on researchers to use deidentified data whenever possible, legal ramifications and financial penalties for researchers involved in privacy breaches, scientific review, and greater transparency about the existence and nature of research in the form of publicised audits of research uses of healthcare data. All of these seem like good ideas, but they are not necessarily new. Perhaps the most interesting part of Ballantyne and Schaefer's paper lies in their recommendation that we should (further) develop a 'public good test' directly relevant to granting a consent waiver. At a minimum this would entail (a) ending restrictive publication practices by ensuring that all research results are made publicly available and (b) prohibiting the commercialisation/patenting of research results. While these 'minimal' requirements may be necessary for researchers to obtain a waiver, it must be noted that they may also have the effect of curtailing commercial involvement in research, which, arguably, could itself harm the public good, insofar as some research with foreseeable public benefit may not be carried out without some form of commercial incentive. More interestingly, Ballantyne and Schaefer stipulate that, in addition to the above requirements, the public good test would also require that health data research have social value, which they argue should be assessed by Research Ethics Committees (RECs) and Institutional Review Boards (IRBs).
This is where the argument becomes more speculative, because it is not really clear how RECs and IRBs could carry out this task, not least because social value, as defined by Ballantyne and Schaefer, is a vague concept incorporating a range of disparate and potentially contradictory criteria, such that research is seen as having social value if it: offers '…significant potential benefit across the whole population by addressing conditions causing high mortality and morbidity', addresses a source of inequity, and promotes inclusivity by explicitly addressing the needs of excluded or vulnerable patient groups. In addition, the social value of any project involving health data would take into account the extent to which research results are made publicly available and the degree to which the public have been involved in its design and execution. While few would dispute that research which seeks to improve the health of the population and ensure the inclusion of heretofore excluded groups is to be encouraged, these criteria arguably require some refinement if they are to help hard-pressed REC and IRB members use the public good test to grant consent waivers.
Technological advances in the early twenty-first century, such as the development of big data methods that enable the analysis and use of large and disparate datasets, have resulted in a push to involve more and more people in health data research. How to involve the majority, or all, of the population in these endeavours presents researchers with a number of ethical, economic and logistical challenges. One way of overcoming these challenges is to blur the categories of research and clinical care and create a hybridised activity that makes access to certain types of care conditional on consenting to secondary uses of one's personal health data. Another is to retain these categorical boundaries and seek consent waivers to access healthcare data for research purposes, on the grounds that healthcare system users have an obligation to participate. Both of these solutions enable researchers to undertake healthcare data research, but both involve some degree of fudging. My reading of Dheensa et al and Ballantyne and Schaefer suggests that perhaps it is time we stopped trying to fit the ethical principles and procedures developed forty years ago into what can now be seen as a shifting landscape of biomedical activities; maybe it is time to develop a new approach in research ethics.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.