Abstract
Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around cardiopulmonary resuscitation and the determination of a patient’s Do Not Attempt to Resuscitate status (also known as code status). The COVID-19 pandemic has made us keenly aware of the difficulties physicians encounter when they have to act quickly in stressful situations without knowing what their patient would have wanted. We discuss the results of an interview study conducted with healthcare professionals in a university hospital, aimed at understanding the status quo of resuscitation decision processes while exploring a potential role for AI systems in decision-making around code status. Our data suggest that (1) current practices are fraught with challenges, such as insufficient knowledge regarding patient preferences, time pressure and personal bias guiding care considerations, and (2) there is considerable openness among clinicians to consider the use of AI-based decision support. We suggest a model for how AI can contribute to improving decision-making around resuscitation and propose a set of ethically relevant preconditions—conceptual, methodological and procedural—that need to be considered in further development and implementation efforts.
- clinical ethics
- decision-making
- emergency medicine
- end-of-life
- patient perspective
- artificial intelligence
Data availability statement
Data are available upon request.