In response to Ferrario et al's1 work entitled 'Ethics of the algorithmic prediction of goal of care preferences: from theory to practice', we would like to point out an area of concern: the risk of artificial intelligence (AI) paternalism in their proposed framework. Accordingly, in this commentary, we underscore the importance of implementing safeguards for AI algorithms before they are deployed in clinical practice.
The goal of documenting a living will and advance directives is to convey personal preferences regarding the acceptance of therapies, including life support, for future use in the event that one loses decision-making capacity. This is standard practice in the care of incapacitated critically ill patients, as it is considered to extend the individual's autonomy. Notably, most of the documents that intensivists encounter in clinical practice are written in a generic fashion and lack context. This shortcoming usually leads to reliance on family members or friends to act as surrogate decision-makers. Surrogates should aid decision-making by relaying the patient's wishes, drawing on their understanding of the patient's preferences from prior conversations or shared experiences. Nevertheless, surrogates often lack that knowledge, express their own preferences or choose to prolong life …
Footnotes
Twitter @anirbanb_007
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.