In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’1, we aimed to ignite a critical discussion of why and how to design artificial intelligence (AI) systems that assist clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about attacks on our humanity stemming from the use of AI and (3) the possibility of designing AI systems for more relevant use cases than the one we consider in our work. Given the limited space available for our response, we shall focus on these themes and leave aside some other interesting suggestions.
Diaz Milian and Bhattacharyya discuss the risks of AI paternalism, highlighting how the use of an AI to predict goal of care preferences may incentivise clinicians and surrogates to shift the burden of decision-making to the machine.2 In particular, ‘[i]f the AI-generated response is given priority above the other agents’2 and the AI makes decisions autonomously, then ‘AI paternalism’ would emerge.2 Against this possibility, the authors suggest four safeguards to be implemented across the AI life cycle.2 These are procedures that may improve the trustworthiness of the AI and human control over it.3
We agree that it is important …
Footnotes
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.