AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction
Andrea Ferrario,1,2 Sophie Gloeckler,3 Nikola Biller-Andorno3

1 Department of Management, Technology and Economics, ETH Zurich, Zurich, Switzerland
2 Mobiliar Lab for Analytics at ETH, ETH Zurich, Zurich, Switzerland
3 Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland

Correspondence to Professor Nikola Biller-Andorno, University of Zurich, Zurich, Switzerland; biller-andorno{at}ibme.uzh.ch


In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’,1 we aimed to ignite a critical discussion on why and how to design artificial intelligence (AI) systems that assist clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about attacks on our humanity stemming from the use of AI and (3) the possibility of designing AI systems for more relevant use cases than the one we consider in our work. We shall focus on these themes, leaving aside some other interesting suggestions, given the limited space available for our response.

Diaz Milian and Bhattacharyya discuss the risks of AI paternalism, highlighting how the use of an AI to predict goal of care preferences may incentivise clinicians and surrogates to shift the burden of decision making to the machine.2 In particular, ‘[i]f the AI-generated response is given priority above the other agents’2 and the AI makes decisions autonomously, then ‘AI paternalism’ would emerge.2 To guard against this possibility, the authors suggest four safeguards to be implemented across the AI life cycle.2 These are procedures that may improve the trustworthiness of the AI and human control over it.3

We agree that it is important …


Footnotes

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
