Introduction
In their contribution ‘Ethics of the algorithmic prediction of goal of care preferences’,1 Ferrario et al develop a ‘from theory to practice’ account of artificial intelligence (AI)-based patient preference prediction (PPP) systems. Such systems are intended to help identify the treatment that a patient would have chosen in clinical situations, especially in intensive care or emergency units, where the patient is no longer capable of making that decision herself.
The authors identify several challenges that complicate the effective development, application and evaluation of such systems, and offer solutions to them. One of these issues is the question of who should ultimately use said systems. While it is undisputed that clinicians should use these AI systems in their decision-making process, there is an ongoing debate about whether next of kin should use them as well. The authors advocate that ‘access should be provided to both clinicians and loved ones with due explanations and as desired’. We disagree with this assessment and explain in this commentary why it is important that surrogates provide their own assessments with as little external (AI) influence as possible.
The role of next of kin in patient preference finding
Why do next of kin actually participate in the process of preference finding and treatment decision-making? A key reason is that clinicians usually …
Footnotes
Contributors MT and DS contributed equally to the conceptualisation and writing of the commentary.
Funding This study was funded by Bundesministerium für Bildung und Forschung (grant numbers: 01GP1903A and 01GP2202B).
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.