Tensions between respect for autonomy and paternalism loom large in Ferrario et al's discussion of artificial intelligence (AI)-based preference predictors.1 To be sure, their analysis (rightly) brings out the moral importance of respecting patient preferences. My point here, however, is that their consideration of AI-based preference predictors in the treatment of incapacitated patients opens more fundamental moral questions about the desirability of overruling considered patient preferences, not only when these are disclosed by surrogates, but possibly also in treating competent patients.
I do not advocate such an evolution. Its moral desirability calls for a much broader debate, one in which the meaning of 'doing good' in medicine, and how this intersects with normative views on 'the goal(s) of medicine', would be central elements. My aim in this piece is more modest: I approach that debate sideways, by indicating how the contribution by Ferrario et al reopens discussion about paternalism and the normativity of preferences in medicine.
This follows from the reason these authors give for endorsing this technology in the care of incapacitated patients: their argument for employing such tools in cases of incapacitation rests on the premise that the advice of surrogate decision-makers about care preferences is suboptimal because of their biased perception …
Contributors SS is the sole author.
Funding This study was funded by H2020 European Research Council (Grant number: 949841; European Research Council (ERC)).
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.