Abstract
Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around cardiopulmonary resuscitation and the determination of a patient’s Do Not Attempt to Resuscitate status (also known as code status). The COVID-19 pandemic has made us keenly aware of the difficulties physicians encounter when they have to act quickly in stressful situations without knowing what their patient would have wanted. We discuss the results of an interview study conducted with healthcare professionals in a university hospital aimed at understanding the status quo of resuscitation decision processes while exploring a potential role for AI systems in decision-making around code status. Our data suggest that (1) current practices are fraught with challenges such as insufficient knowledge regarding patient preferences, time pressure and personal bias guiding care considerations and (2) there is considerable openness among clinicians to consider the use of AI-based decision support. We suggest a model for how AI can contribute to improving decision-making around resuscitation and propose a set of ethically relevant preconditions—conceptual, methodological and procedural—that need to be considered in further development and implementation efforts.
- clinical ethics
- decision-making
- emergency medicine
- end-of-life
- patient perspective
- artificial intelligence
Data availability statement
Data are available upon request.
Footnotes
NB-A and AF are joint first authors.
Contributors NB-A conceived the study and acquired the funding. NB-A and AF designed the study and led the development of the study materials, with input and consensus from all authors. FM and PB collected the empirical data. All authors contributed to the analysis and interpretation of the empirical data. NB-A and AF drafted the manuscript and all authors contributed to and approved the final version.
Funding This study formed part of a larger fellowship project on ‘Digital support of decision-making in health care’ (led by Nikola Biller-Andorno) at the Collegium Helveticum, an Institute of Advanced Studies jointly supported by the University of Zurich, the Swiss Federal Institute of Technology (ETH) Zurich and the Zurich University of the Arts (ZHdK).
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.