Commentary
For the sake of multifacetedness. Why artificial intelligence patient preference prediction systems shouldn’t be for next of kin
Footnotes
Contributors MT and DS contributed equally to the conceptualisation and writing of the commentary.
Funding This study was funded by Bundesministerium für Bildung und Forschung (grant numbers: 01GP1903A and 01GP2202B).
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.