
Somewhere between dystopia and utopia
Jesse Wall
Faculty of Law, University of Auckland, Auckland, New Zealand
Correspondence to Dr Jesse Wall, Faculty of Law, University of Auckland, Auckland, New Zealand; jesse.wall@auckland.ac.nz


The Journal of Medical Ethics can sometimes read part Men Like Gods and part Brave New World. At times, we learn how all controversies can be resolved with reference to four (or five) principles. At other times, we learn how “every discovery in pure science is potentially subversive”.1 This issue is no exception. Here, we can read about the utopia of gene editing, manufactured organs, and machine learnt algorithmic decision-making. We can also read about the dystopia of inherited disorders from edited germlines, the testing of xenotransplants on brain-dead patients, and misdiagnosis by artificial intelligence. What we see in the three papers that I will foreground here is the application of medical ethics to avoid the dystopia while preserving the promises of medical science. But, as I seek to explain here, our ability to address the controversies that we anticipate in medical science depends on our ability to refine our understanding of what medical ethics requires of us now.

From the diagnosis of eye diseases from fundus images to predicting the imminent risk of suicide attempts, machine learnt medical decision making has, in some high profile instances, outperformed clinicians in medical diagnosis and treatment recommendations. While clinician and machine learnt diagnoses both involve degrees of uncertainty and fallibility, Thomas Grote2 asks us to consider how much weight a clinician ought to give an algorithmic diagnosis with which she disagrees. Since a clinician is held to account for her decision, ‘instead of enhancing their decision-making capabilities, the deployment of machine learnt algorithms runs the risk of defensive medicine’ among clinicians. Equally, machine learnt algorithms may introduce a paternalistic model of medical decision making, where ‘doctor knows best’ at the expense of patient autonomy.3 Behind this anxiety, Grote identifies three ethical pitfalls to be avoided: responsibility and accountability ‘gaps’ where the machine learnt algorithm gets decisions wrong; the inequitable development of the technology, such that it benefits one population group at the expense of another; and the challenge of normative alignment, where judgements of value are implicit in the ostensibly value-neutral data from which the machine has learnt. Machine learnt algorithmic decision-making therefore requires us to consider how concepts of accountability, equity, and value-neutrality apply to this emerging clinical practice.

Recent successes in the field of germ-line gene therapy have shown how genetic diseases can be treated through changes to the heritable germline. As Bryan Cwik explains,4 should these therapies ‘ever become serious candidates for clinical application, there will be earnest consideration of a number of ethical issues’. Of principal concern is the intergenerational monitoring of subjects and their descendants. Cwik highlights three ethical challenges that confront clinical trials of germ-line gene therapy, if it is to be pursued. First, given how intrusive it is for research subjects, and how burdensome it is for researchers, clinical trials will need to consider ‘exactly what information is required from future subjects’, how many generations will be required, and why the information is required. Second, researchers will need to manage how they communicate any findings of the study to research subjects, even where the subjects have withdrawn from the study. This intergenerational monitoring may therefore involve a ‘limited waiver of privacy’ from research participants on behalf of their descendants. The third challenge is that, since the research generates risks for both the participants and their descendants, a clinical trial will heighten the obligations that researchers owe to subjects, in terms of subjects’ health and continued involvement in the study. Existing standards for clinical trials will therefore require reconsideration if they are to apply to germ-line gene therapy.

Xenotransplantation and tissue engineering offer a promising response to the insufficient number of organs that are available for transplantation. But as Brendan Parent explains,5 ‘their safe development requires a reexamination of the morality of using the recently deceased as subjects in testing and research’. For Parent, this requires any such research and testing to be undertaken in accordance with existing expectations for authorising posthumous research on a deceased person. It requires that any communication with the deceased’s family be ‘transparent, respectful, and sympathetic’. It also requires that existing academic committees provide the institutional oversight of such research, in a way that should parallel the oversight of research on living subjects. This includes establishing the necessity of testing on human subjects, ensuring that participation in the research is free of incentives to the deceased or to those providing the authorisation, and adopting protocols to minimise the invasiveness of the procedures. In contrast to germ-line gene therapy, existing research protocols may be sufficiently robust to ensure the ethical testing of xenotransplantation and tissue engineering on human subjects.

These three papers discuss new frontiers in medical science. Yet, in this issue you can also read about more contemporary controversies (or even some time-honoured controversies) that still confront medical science and practice. Here, you can read about, among other things, the ethical justification for regulatory limits on research risk6 and the obligations of gamete donors to their genetic descendants7; you can (re)consider questions of conscientious objection8 and the moral status of the pre-born.9 But note how our ability to address the ethical controversies in the new frontiers of medical science relies on the solutions to our contemporary puzzles. The ethical use of machine learnt algorithmic diagnoses and treatment decisions requires us to delineate concepts of practitioner responsibility and patient autonomy; it requires us to consider equitable access to new technology and the divide between value-neutral and value-laden analysis. An ethical trial of germ-line gene editing requires the extension of the obligations that researchers owe subjects and a waiver of the right to privacy. The testing and research on xenotransplantation and tissue engineering may require the same procedures and expectations that we find in analogous fields of medical research. These are concepts, principles, and procedures that have been wrestled over by medical ethicists in countless debates and disagreements.

The point is that it is not just the three papers that I have highlighted here that steer us away from the dystopia of Brave New World; it is medical ethics as an enterprise that does so. From the smallest thought experiment to the most detailed research protocols, the sharper our solutions to contemporary puzzles, the less daunting the anticipated controversies will be. In this way, medical ethics is also an unpredictable enterprise. We cannot predict the way in which rigorous ethical analysis will inform future debates and disagreements, in the same way that early proponents of patient autonomy could not have anticipated the dangers of ‘machine knows best’, theorists of privacy could not have anticipated the ethics of a waiver of privacy on behalf of generations of descendants, and authors of research protocols could not have anticipated the testing of manufactured organs on brain-dead humans. Nor do we need to anticipate future controversies. We can leave that to science fiction writers. It is enough to know that, if our analysis and debate is rigorous and informed, our best papers now will be the obligatory first footnote in the future.

Footnotes

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; internally peer reviewed.
