Article Text

AI support for ethical decision-making around resuscitation: proceed with care
  1. Nikola Biller-Andorno1,2,
  2. Andrea Ferrario3,
  3. Susanne Joebges1,
  4. Tanja Krones1,4,
  5. Federico Massini1,2,
  6. Phyllis Barth1,2,
  7. Georgios Arampatzis2,5,
  8. Michael Krauthammer6
  1. Institute of Biomedical Ethics and History of Medicine, Universität Zürich, Zurich, Switzerland
  2. Collegium Helveticum, Zurich, Switzerland
  3. Department of Management, Technology, and Economics, Eidgenössische Technische Hochschule Zürich, Zurich, Switzerland
  4. Clinical Ethics, Universitätsspital Zürich, Zurich, Switzerland
  5. Computational Science and Engineering Laboratory, Eidgenössische Technische Hochschule Zürich, Zurich, Switzerland
  6. Department of Quantitative Biomedicine, Chair of Medical Informatics, Universität Zürich, Zurich, Switzerland

  Correspondence to Professor Nikola Biller-Andorno, Institute of Biomedical Ethics and History of Medicine, Universität Zürich, Zurich 8006, Switzerland; biller-andorno@ibme.uzh.ch

Abstract

Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around cardiopulmonary resuscitation and the determination of a patient’s Do Not Attempt to Resuscitate status (also known as code status). The COVID-19 pandemic has made us keenly aware of the difficulties physicians encounter when they have to act quickly in stressful situations without knowing what their patient would have wanted. We discuss the results of an interview study conducted with healthcare professionals in a university hospital, aimed at understanding the status quo of resuscitation decision processes while exploring a potential role for AI systems in decision-making around code status. Our data suggest that (1) current practices are fraught with challenges such as insufficient knowledge regarding patient preferences, time pressure and personal bias guiding care considerations and (2) there is considerable openness among clinicians to consider the use of AI-based decision support. We suggest a model for how AI can contribute to improving decision-making around resuscitation and propose a set of ethically relevant preconditions—conceptual, methodological and procedural—that need to be considered in further development and implementation efforts.

  • clinical ethics
  • decision-making
  • emergency medicine
  • end-of-life
  • patient perspective
  • artificial intelligence

Data availability statement

Data are available upon request.


AI-based decision support in healthcare

Artificial intelligence (AI) refers to computer systems that perform tasks normally requiring human-level intelligence, such as identifying patterns in data and generating predictions to assist decision-making processes. At the core of modern AI lies machine learning, which ‘transforms the inputs of an algorithm into outputs using statistical, data-driven rules that are automatically derived from a large set of examples, rather than being explicitly specified by humans’.1 Deep learning is a form of machine learning that automatically learns multiple layers of data representations, with minimal data preprocessing by humans.2 Thanks to the increase in available data and computational power, machine and deep learning models can be trained on massive data sets to deliver high performance in tasks from multiple domains, including healthcare.

It has been recognised for some time that machine learning algorithms are not perfect. They can be subject to biases—caused, for instance, by incomplete training data sets or misclassification errors—which may have serious real-life implications, such as amplifying socioeconomic disparities in healthcare.3 In spite of these challenges, a growing body of basic research on AI in healthcare applications is emerging, from diagnostics to prevention, drug discovery, treatment recommendations and operational excellence. Some applications have been translated into approved medical devices, and a few are already used for healthcare purposes.4 For example, AI systems using deep learning have been successful in multiple medical imaging use cases5–9; they have been shown to reliably predict the risk of imminent suicide attempts,10 and they have been used to assess the probability of patients developing serious conditions or being transferred to palliative care.11

An increasing number of scientific contributions compare the performance of AI systems with that of human experts in the same healthcare domain: it has been shown12 that AI systems using deep learning can match the diagnostic performance of healthcare professionals (HCPs). In some medical domains, the use of AI systems will replace a considerable part of the work of human experts.13 However, performance comparisons between AI systems and human experts suffer from the difficulty of reproducing and comparing results due to the lack of a unified approach.12 14 15 Yet it is reasonable to assume that AI systems will increasingly gain in epistemic authority, even though the assistive role of AI is frequently emphasised.16 This makes the definition of ethical framework conditions for the use of AI-based decision support in healthcare all the more important and timely.17 We have chosen to approach the issue through the study of a specific potential application: an AI-based support system for decisions around resuscitation.

Resuscitation decisions as an advanced test case for AI-based decision support?

Critical healthcare issues such as the cardiopulmonary resuscitation of patients require complex decisions with significant ethical implications.18 Moreover, such decisions frequently need to be taken in relatively short time frames and under stressful conditions. The current pandemic, with its high number of people falling severely ill very quickly, has once more made us keenly aware of the difficulties physicians encounter when they have to act without having a chance to know what the patient would have wanted. Humans making such decisions must integrate multiple sources of information and calibrate them with personal, social and ethical standards. With AI-based systems becoming mature for clinical decision support, such information processing and decision-making can be assisted computationally. Algorithmic suggestions can be consistent with patient preferences and likely outcomes without being compromised by the stress, time pressure, personal bias, conflicts of interest and fear of legal consequences that may influence providers’ perspectives and their end-of-life conversations with patients.

The status quo is less than perfect: a national audit in England showed that almost 40 000 patients every year receive Do Not Attempt to Resuscitate (DNAR) orders without their consent or the knowledge of their families.1 Physicians seem to make these decisions largely by relying on their own judgement, based on medical parameters, experience and personal values.19 Often no advance directive and no surrogate decision-maker are available, so physicians or a legal representative unfamiliar with the patient need to step in.20 Patients often have insufficient access to professional advance care planning (ACP) and a limited understanding of a life-threatening resuscitation situation, so it is difficult for them to assess the situation and define their resuscitation status ahead of time (eg, in an advance directive). This means that a significant proportion of patients arrive in the hospital without having given any indication at all of their will regarding resuscitation. Others may base their decision on inaccurate assumptions. For instance, cardiopulmonary resuscitation is not as successful as it is often perceived to be,21 so patients may overestimate its benefits and disregard its potential harms when taking their decision. Also, it has been shown that relatives acting as surrogate decision-makers on life-sustaining treatment frequently feel overburdened22 and make choices that often do not resonate with the patient’s preferences.22 23

A personalised, AI-based decision support system that is readily available to patients, relatives and HCPs could make a significant contribution towards improving the status quo, in addition to other initiatives like ACP that aim to improve decision-making by providing relevant information and a structured decision-making process to the patient.24 Decision sciences are currently focusing on a variety of value clarification methods,25 and algorithmic support would be a novel approach helping people understand how others with whom they share certain features and values have decided in a similar situation.26 In this sense, AI would act as a DNAR status decision support for (potential) patients (cf. the patient-centric application in figure 1). AI-based decision support might also be helpful for legal representatives having to decide on behalf of incapacitated patients, ideally in an ACP-by-proxy process, without having a clear sense of what the patient would have wanted. It might also support HCPs who need to make resuscitation decisions without the possibility to consult an advance directive or legal representatives, by helping clarify which option most likely corresponds to what the patient would have chosen (cf. the proxy/provider-centric application in figure 1). Given the limitations of algorithmic decision-making (with a view to the risk of bias and the dependency on suitable training data), the highly personal nature of the decision and its far-reaching consequences, the AI system would be conceived to play a consultative rather than a prescriptive role: it would not replace but support human decision-making.

Figure 1

Potential role for AI-based resuscitation decision support for hospitalised patients. AI, artificial intelligence; CPR, cardiopulmonary resuscitation; DNAR, Do Not Attempt to Resuscitate.

Considering the use of AI to support decision-making around resuscitation quickly raises a large set of questions about feasibility, appropriateness and potential impact: Are suitable data sets available for an algorithm to train on, given that many human DNAR decisions today are taken in non-ideal circumstances? Should an AI system be modelled after patient preferences or outcomes, or both? How would users perceive such a system? For successfully implementing decision support, context factors matter greatly27: Would an AI system be a good fit with clinical routines? Would users consider its outputs interpretable and trustworthy? And how can we evaluate if an AI system does more good than harm?

Resuscitation decision-making in the hospital and the use of AI systems: views of HCPs

We conducted a qualitative pilot study inviting HCPs’ perspectives to probe the potential and limitations of AI-based decision support around resuscitation. Although resuscitation decisions need to be taken across the spectrum of medical care, we chose to focus on a hospital setting, assuming that the majority of resuscitation attempts occur in the inpatient sector, which renders resuscitation status particularly important there, and that digital documentation is most advanced in hospitals. We chose a university hospital in the region for our exploration, for various reasons: (1) code status is systematically collected from all patients and documented in the electronic hospital information system (since 2013); (2) the hospital has a large number of documented resuscitation decisions (120 000 in total), spread across the different departments; (3) unlike many other healthcare institutions, the hospital offers ACP for patients by trained professionals, including so-called physician orders for life-sustaining treatment (POLST, ‘an approach to end-of-life planning based on conversations between patients, loved ones and healthcare professionals’2),28 so that the decisions resulting from this process can be considered fairly well informed and well considered.29 A significant proportion of healthcare providers also receive obligatory training on the ethical and procedural rules governing resuscitation,30 which are defined by national guidelines.

We therefore assumed that resuscitation or code status (we use the terms interchangeably) was a topic that HCPs were clearly aware of, and that it was part of everyday routine for many of them, so that interview partners could provide us with substantive answers. We were also particularly interested in this setting as a candidate for a potential later training site of an AI system. Several other studies have already probed factors influencing (human) resuscitation decisions in other hospitals in Switzerland,31–33 which provided a valuable general background for our work.

Methods

In summer 2019, we approached physicians and nurses working in a university hospital in Switzerland with a request for an interview for a study on resuscitation practices. The sample was designed for maximum variation. We selected individuals to represent those clinical disciplines that we assumed would most frequently be confronted with resuscitation decisions and that at the same time covered a good part of the in-hospital pathway (emergency care, internal medicine, intensive care, surgery, palliative care). All study participants had clinical experience but differed in level of seniority.

Participation in the study was voluntary; all participants were contacted by email, and interviews took place at the hospital. Ethics approval was obtained from the relevant IRB in accordance with legal provisions. All participants gave informed consent. We conducted 11 semistructured interviews (6 women, 5 men; 7 physicians, 4 nurses) of 25–45 min each, held in German at the university hospital. They were audio-recorded, transcribed and analysed following standards for qualitative expert interview analysis.34

The interview guide was based on the general methodological standards of episodic interviewing35 in the context of expert interviews.34–36 Episodic interviews combine narratives on a specific topic (eg, general views/experiences with/thoughts about a technology) with episodic triggers (eg, please tell me when and how you used the technology during the last week/the last time) in order to reconstruct specific social realities. We developed the questions based on an international literature review of papers on the handling of resuscitation status decisions, the historical development of patient involvement regarding resuscitation status, case studies about controversial handling of a patient’s resuscitation status and ethical guidelines regarding DNAR orders. We complemented our search with a targeted search for information specific to the university hospital (patient information brochure regarding resuscitation status, e-learning programme about discussing resuscitation status with the patient). Questions covered current practices regarding the determination and documentation of the decision to attempt resuscitation or not (DNAR), the decision-making process, perceived challenges and reactions to the possibility of AI-based support. The interview guide3 was constructed (1) to highlight specificities and procedural criticalities in the existing processes around resuscitation for different units at the hospital, with a focus on the updating of resuscitation status entries, and (2) to discuss with HCPs the possibility of using AI systems to support their decision-making on resuscitation.

We collected the interviews over a period of 8 weeks and analysed them using a combination of thematic coding35 36 and qualitative content analysis,37 which has been proposed as a useful coding and analysis strategy for reconstructing expert knowledge on specific topics, especially with regard to new technological developments.34 38 We first summarised each interview case in a short summary36 and sorted episodic depictions and semantic concepts within a case. In a second step, we compared categories between cases to (a) reconstruct current processes for dealing with resuscitation status and (b) understand experts’ expectations, hopes and fears about using AI technology within the overall process37 (cf. figure 2).

Figure 2

Knowledge construction via episodic interviewing (after Flick 2002, p. 159).

Results

Our study aimed to provide insights into resuscitation status decision-making in a large university hospital of international renown that might serve as a future site for training and piloting a prototype of an AI-based decision support system. We wanted to understand where the issue of resuscitation status became relevant over the course of patient pathways and what the challenges were.

The aspirational goal for the studied institution is as clear as it is ambitious: the resuscitation status of all patients has to be established based on patient preferences—determined either in conversation with the patient, through an advance directive or via a legal representative—and needs to be respected unless there are overriding considerations, such as clear cases of inappropriate or even futile care. An exception concerns newly admitted patients whose condition and urgent medical needs do not allow for a determination of their resuscitation preferences.39 Code status has to be continuously updated and documented in a way that allows easy access by the respective treatment team.

Based on the insights gleaned from the interviews, figure 3 shows where resuscitation status becomes relevant throughout patient pathways, highlighting and situating key challenges.

Figure 3

Schematic presentation of patient pathways and implications for resuscitation status decision-making. Selected element descriptions: (1) The resuscitation status must be decided during the admission process: ‘Generally, this is done by the admitting physician […]’ (Interview 11). (2) If the patient is admitted via the emergency unit, the status is often set directly by the admitting physician due to urgency: ‘When he is an emergency and unconscious and you have to see that he survives, there is simply no time […]’ (Interview 3). (3) Some interviewees mentioned that they would find it appropriate for the GP to regularly discuss a valid resuscitation status with their patients: ‘I also believe that this discussion [about one's own desired resuscitation status] should take place at home or at the family doctor […]’ (Interview 5). (4) The resuscitation status is often not discussed again with the patient if s/he is being referred from another hospital or medical institution: ‘[…] I think in everyday life the assistant and the senior doctor would look at the existing note and say, “it is there already”.’ (Interview 1). (5) Since the risk of cardiovascular complications due to anaesthesia or the invasive/surgical procedure is higher, a positive resuscitation status is presumed for patients undergoing invasive interventions. However, the question is often not addressed during the planning of the intervention; interview partners invoked cases of iatrogenic cardiac complications in patients with DNAR status that were overcome by resuscitation: ‘[…] but it's different on the catheter table.’ (Interview 1); ‘[…] I entered DNAR, I remember that. And then the surgeon said: “That is not possible” […]’ (Interview 11). (6) In the ICU, the resuscitation status of a patient is assessed every day, based not only on patient preferences but also on medical indication/futility considerations: ‘This is discussed daily in all rounds. Reanimation yes, no, where are the limits.’ (Interview 4). (7) Communication between the ICU and the rest of the hospital is complicated by the use of different clinical information systems: ‘Yes, we still have two [information] systems!’ (Interview 5); ‘Between the ICU and Intermediate Care it [accessing DNAR status information] is sometimes difficult because it does not appear in the form.’ (Interview 11). (8) Every inpatient must have a defined resuscitation status. A unilateral decision by the physician should remain an exception due to urgency. Re-evaluation of a patient’s resuscitation status is needed under several circumstances (postoperatively, after a resuscitation, on disease progression…). Especially for patients with chronic diseases or a poor prognosis, a discussion about advance care planning (ACP) may be appropriate: ‘We always try to first have a conversation with the patient [about resuscitation preferences].’ (Interview 7). (9) A negative resuscitation status is not mandatory for progression to the palliative unit. Here the ACP system is slowly being established: ‘There are patients who want to be resuscitated, who say: “I want everything”.’ (Interview 9). (10) Throughout the interviews, intervention-oriented specialties were depicted as still somewhat reluctant to discuss resuscitation status with patients: ‘in surgical disciplines, when patients come for elective procedures […], DNAR status is likely not determined […]. With tumour patients, our experience is that this topic is often left until the very end, when it’s already quite clear that no long-term survival is expected; often DNAR conversations have not occurred and no decisions seem to have been taken […].’ (Interview 10).

From our interviews with HCPs, we identified several core challenges regarding the institutional goal of documenting up-to-date, patient preference-based resuscitation status information for all inpatients (cf. box 1). Representative quotes from the interviews are used for illustration.

Box 1

Challenges of resuscitation status determination and documentation as seen by HCPs

  1. Code status data are not generated, for example, due to time constraints resulting from the high workload of the team, or because the patient’s health status does not (yet) allow a conversation and neither an advance directive nor legal proxies are available.

    • ‘Being busy is always the issue, a lot to do, I think that is the main obstacle…’ (Interview 1)

    • ‘You have to spend a lot of time on that…’ (Interview 8)

    • ‘Where life-saving measures have to be taken immediately, so to speak, and there’s no time to phone the family doctors and study any papers or anything.’ (Interview 10)

  2. Data are not based on patient preferences: rather, the status is set by physicians either because of urgency (cf. figure 3, Item 2) or because the conversation with the patient is considered too awkward or too overwhelming, so a default is entered with the idea that the conversation might happen at a later stage.

    • ‘But those (assistant physicians) who have ‘I have to do it’ in their heads but are very busy, they decide for themselves and not necessarily in consultation with the senior physician […]’ (Interview 1)

    • ‘There is often a note (in the chart) saying that the issue will be raised later with the patient but it does not happen […]’ (Interview 3)

  3. Data are biased, for example, because patients or relatives may overestimate the potential benefit of resuscitation measures, so that their declared status choice does not reflect their preferences. Providers’ interests, such as survival after major surgery (cf. figure 3, Item 5) or offering therapies for patients with advanced cancer (cf. figure 3, Item 10), may also influence status decisions.

    • ‘It is difficult for people (patients or relatives) who have never dealt with these issues. For us, it is our daily bread, and we discuss these topics frequently, but there are people who have never, into old age, talked about such a topic. And then it is really a process, […]’ (Interview 3)

    • ‘[…] it is just as problematic that a patient may not understand, or may not want to understand, or may not be able to understand that there is nothing to be done, the same for relatives. Of course, there are also religious reasons.’ (Interview 8)

    • ‘[…] also professional groups, who then of course do not want anyone to die after their operation, do they? This does not look good in the statistics.’ (Interview 11)

  4. The documented resuscitation status may be contested, for example, by nursing staff who may have a different perception of what the status should be, which can lead to moral distress and conflicts in the team.

    • ‘[…] if you have the feeling that the patient’s will has not been taken into account, that he wants something else […]’ (Interview 5)

    • ‘Nursing is generally a little more sensitive to this question, so nurses are more often, let’s say, more often struggling with certain resuscitation decisions by doctors.’ (Interview 6)

  5. Data are inaccessible, for example, because there are compatibility issues between multiple hospital documentation systems (cf. figure 3, Item 7) or the hospital system and external data sources.

    • ‘Yes, we still have two systems! The other department documents it in XXX (name of the system) and we document it on the prescription sheet. For us in the intensive care unit, only what is on the prescription sheet counts, not what is in XXX.’ (Interview 5)

  6. Data are incorrect: errors when entering data or transferring data between systems.

    • ‘In my opinion, transmission errors do occur time and again.’ (Interview 5)

Interview partners demonstrated clear awareness of the importance of code status documentation and its challenges. When the issue of AI came up in the interviews, there was a general openness towards considering AI-based decision support to help improve the status quo. At the same time, interviewees volunteered considerations—very much in the spirit of brainstorming, as this potential application of AI is still a highly innovative idea—regarding the role, strengths and limitations of such a system if used in routine hospital care (cf. box 2). Again, representative quotes from the interviews are used for illustration.

Box 2

Roles, strengths and limitations of AI-based support systems for resuscitation decisions—views of HCPs

Roles

  • An AI-based support system should assist, not replace, human judgement. It should make suggestions, not decisions. ‘Then I would say: “Okay, now that’s a good hint […]” and then I would feel obliged to somehow verify it myself.’ (Interview 8)

  • It could prompt conversations within the team and support the discussion with the patient. ‘Sometimes colleagues feel a bit offended when you question their judgement. […] With the tool, I could say: ‘Hey, I consulted this tool, it favors a different decision than you do, let’s discuss this together.’’ (Interview 9)

Strengths

  • AI systems are innovative and have immediate appeal (‘exciting stuff’). ‘I think such tools, that’s actually very exciting stuff, I must say.’ (Interview 9)

  • An AI-based support system generates predictions based on evidence, rather than on opinions. ‘I also think that such tools are based more on facts than on opinions. At least that’s what I imagine, as emotions are probably not involved.’ (Interview 9)

  • It provides an easily accessible ‘digital second opinion’, which is particularly helpful in difficult cases. ‘Something like a ‘digital second opinion’ or so I think is actually a good thing. Even if I haven’t thought of something, the tool will have thought of it.’ (Interview 9)

Limitations and open questions

  • Even if an AI-based system were fast to access and quick to deliver the requested prediction, it would still not be useful in emergencies, where every minute has a significant impact on resuscitation outcomes. ‘You’d have to run to the nurses’ station, open the system and see what it says. And if someone is ‘CPR yes’ you have lost a minute. 10% less survival.’ (Interview 7)

  • It seems counterintuitive that an AI can in fact provide relevant input on a highly personal, individual decision (such as for or against resuscitation) by training on data from others. Is such a system not necessarily going to be reductionist? How good will its performance be, and how good is good enough? ‘Yes, it is certainly helpful in some cases, but I believe that a resuscitation decision is also something personal. Particularly patients who are ill but want to live up to a certain event.’ (Interview 11)

  • Understandability/explainability is a key issue for perceived reliability and trust. Knowing what the AI does helps figure out an appropriate distribution of tasks between humans and the machine. ‘It depends on how well I could understand how the tool works. I think it would be a good support. I would use the decision, or rather the hint, from the tool as a prompter to read the chart or to have a conversation, whatever, to verify. If I know exactly “Ah, the thing does this and that”, I could tell myself, ok, the tool has done this for me and this is what I still need to do […].’ (Interview 8)

  • Ultimately, the decision should remain with humans who can choose if they want to be supported by an AI or not—but what if evidence starts showing that AI-based predictions of preferences are more accurate than what a human would have predicted? ‘At the end of the day, I think that people should still make the decision, at least for the time being. Although there are clues that AI may decide better than humans, I think this has been seen in diagnostics.’ (Interview 9)

AI support for DNAR decision-making in the hospital: towards a framework of preconditions for ethical use

Our study shows that in a highly functional university hospital setting there is ample room for improving the current system of determining and documenting patients’ code status (box 1). HCPs are open towards considering AI-based support but rightly point to the need to clarify issues such as the role of AI in relation to human decision-making (box 2). In addition, legal issues regarding, for instance, liability in case of errors will have to be addressed. We do think AI carries high potential for improving resuscitation-related decision-making, given the status quo. However, we are aware that the development of such a system is ethically demanding and requires attention to a set of preconditions, which we outline below as a first input into what we hope will grow into a broader debate.

Conceptual preconditions

The AI-based system provides assistive advice, to be used either by patients reflecting on the choice of their DNAR status or by proxies and/or physicians deliberating whether resuscitation would reflect an incapacitated patient’s will (cf. figure 1). It does not have decisional authority and should never replace conversations with the patient, legal representatives or within the treatment team. The system can act as a conversation prompter, tie breaker or second opinion; it may invite self-critical reflection on the part of the physician in charge or possibly of relatives and even patients. It may act as a support tool in case no information about a patient’s will is available.

The AI-based system predicts which resuscitation status a patient would have chosen after well-informed consideration of the benefits and harms (as presented in state-of-the-art, evidence-based decision aids). AI-based algorithms, given appropriate training data, could also predict under what conditions (if any) this choice would change, for instance when the likelihood of survival drops below a certain value. In this scenario, the outcome-based preference predictions would then be compared with the likely outcomes (possibly also predicted by an algorithm) for an individual patient, and the code status adjusted accordingly.

When training algorithms, the use of more easily accessible but unsuitable surrogate parameters for well-considered patient choice—such as whether patients received cardiopulmonary resuscitation (CPR) in a hospital or were attributed a certain code status—should be avoided, as these parameters may not reflect patients’ wishes. They may perpetuate or even reinforce the flaws of the current status quo of DNAR decision-making at the site(s) from which training data are collected.

Methodological preconditions

Data quality needs to be assessed to reduce the risk of ‘garbage in, garbage out’. Patient resuscitation preferences used for training the AI system (the ‘ground truth’) need to be elicited in a state-of-the-art way, for example, through ACP conversations. A simple model would conceive of patient preferences as categorical (eg, no CPR under any circumstances). The machine-learning task is therefore a prediction of a dichotomous outcome (DNAR yes/no), given available patient features, which are recorded in the electronic health record. The decision to opt for or against CPR in the case of cardiac arrest can be assumed to depend on a number of factors, among them a patient’s health status, life expectancy, current quality of life, perceived social obligations towards others, religious beliefs or deeply held secular values. Good training data will include such factors, which need to be encoded in health records, through discussions with the patient, relatives or other means. The algorithm will then be designed to predict the DNAR status based on these features, for example, by applying a similarity metric to determine what ‘similar patients’ would decide in the same circumstances. Ideally, decisions taken on the basis of these preferences can be validated ex-post by physicians, relatives or even patients.
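To make the prediction task concrete, the following is a minimal sketch of the ‘similar patients’ idea using a k-nearest-neighbours classifier. The feature set, its encoding and the choice of k are illustrative assumptions on our part, not a validated clinical model:

```python
# Minimal, illustrative sketch only: the features, their encodings and
# the value of k are invented assumptions, not a validated model.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: each row encodes a patient whose DNAR
# preference (the 'ground truth') was elicited through a state-of-the-art
# ACP conversation. Assumed columns: age, comorbidity score, self-rated
# quality of life.
X_train = np.array([
    [82, 3, 2],
    [45, 0, 8],
    [77, 2, 5],
    [60, 1, 7],
    [88, 4, 3],
])
y_train = np.array([1, 0, 1, 0, 1])  # 1 = "CPR no" (DNAR), 0 = "CPR yes"

# k-nearest neighbours implements the similarity-metric idea directly:
# the prediction aggregates the documented choices of the k patients
# most similar to the index patient (Euclidean distance by default).
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

new_patient = np.array([[79, 2, 4]])
p_dnar = model.predict_proba(new_patient)[0][1]
print(f"Share of similar patients who chose DNAR: {p_dnar:.0%}")
```

A nearest-neighbours formulation is shown here because it mirrors the similarity metric described above and keeps the ‘similar patients’ behind any given prediction inspectable, which speaks to the explainability concerns discussed below.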

It is theoretically possible to conceive of an outcome-sensitive code status prediction algorithm. Here, ACP conversations would result in several resuscitation status/outcome tuples. Each tuple reflects the DNAR choice given an expectation about the possible patient outcome (after CPR), such as the chance of survival to discharge not dropping below a certain value or the risk of severe cognitive deficits not exceeding a certain percentage. The AI prediction would then first determine the likely outcome for the patient (after CPR) and would then proceed as above to determine the DNAR status given that outcome.
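As a hedged sketch, this two-stage logic might look as follows, assuming outcomes are summarised as two probabilities (survival to discharge, severe cognitive deficit) and that the ACP conversation yields threshold-style tuples; the outcome-model stub and all numeric values are invented for illustration:

```python
# Illustrative two-stage sketch; the outcome model stub and all
# thresholds are assumptions, not clinically validated values.
from dataclasses import dataclass

@dataclass
class StatusOutcomeTuple:
    """One resuscitation status/outcome tuple from an ACP conversation:
    'CPR yes' holds only while the expected outcome stays in bounds."""
    min_survival: float        # e.g. 0.30 = survival to discharge >= 30%
    max_severe_deficit: float  # e.g. 0.20 = risk of severe deficits <= 20%

def predict_cpr_outcome(record: dict) -> tuple[float, float]:
    """Stage 1 (stub): an outcome model, e.g. trained on resuscitation
    registry data, estimating (survival, severe-deficit) probabilities.
    Fixed values stand in for a real model here."""
    return 0.25, 0.35

def predicted_code_status(prefs: StatusOutcomeTuple, record: dict) -> str:
    """Stage 2: apply the declared (or predicted) preference tuple to
    the predicted outcome for this individual patient."""
    survival, deficit = predict_cpr_outcome(record)
    if survival >= prefs.min_survival and deficit <= prefs.max_severe_deficit:
        return "CPR yes"
    return "CPR no"

prefs = StatusOutcomeTuple(min_survival=0.30, max_severe_deficit=0.20)
print(predicted_code_status(prefs, {"age": 79}))  # -> "CPR no"
```

In practice, stage 1 would be a model trained on resuscitation outcome data, and the thresholds would come from the individual patient’s ACP conversation rather than being fixed in code.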

The development of an AI-based resuscitation decision assistant should make efforts towards explainability40 41 and include elements of explanations from the social sciences.42 An opaque (ie, ‘black box’) system, even if performing well, may not find acceptance or be trusted to advise on matters of life or death; issues of users’ trust, accountability, liability and data protection have to be addressed in accordance with existing and emerging standards.43–45 The system has to be monitored for bias to ensure the quality and fairness of its advice.

Procedural preconditions

The embedding of the AI system in healthcare processes will be decisive for its performance and user acceptance. Compatibility with the hospital information system and access management are relevant design features. User training is key: it has to cover the technical aspects of the AI system and provide a view on how to integrate its predictions into clinical decision-making and into the communication with patients, relatives and staff, in accordance with best practice principles of shared decision-making.46 47 It should also aim at improving the communication skills of the AI system’s users and raise awareness of the risks and limitations of AI support in decision-making.48 This should allow countering any tendencies to avoid difficult conversations with patients or family members about DNAR status by resorting to algorithmic predictions, as well as minimising the likelihood of outsourcing a core clinical task—establishing a patient’s DNAR status—and its potential implications for the patient–provider relationship to an AI system. We sketch examples for potential use cases in box 3.

Box 3

A smart resuscitation decision assistant: towards possible use case scenarios

Scenario 1 (patient-centric):

  • A patient wants to define his or her DNAR status with a view to a pending hospital admission for surgery, but is unsure about which DNAR status choice best resonates with his or her preferences and consults the AI-based DNAR status decision support tool. The tool compares available personal data (regarding age, health status, general values and so on) against a large pool of patients with known DNAR status preferences, providing a statement along the lines of ‘Individuals who are similar to you with regard to characteristics A, B, C would choose “CPR yes” in x% of cases as long as the likelihood for survival-to-discharge was over 30%’. The AI could also show a diagram of how the DNAR status choice changes with the likely outcome. The patient can now, if s/he so chooses, discuss this input with close contacts or his or her healthcare providers.

Scenario 2 (proxy/provider-centric):

  • If patient preferences are unknown, the user interface of the hospital documentation system could show the message: ‘The resuscitation status of patient XY is missing’. The user has to select one of the following options: (a) the patient is currently not incapacitated with regard to making a DNAR status choice: ‘If the patient is not incapacitated, please discuss the issue with him/her, and enter the resuscitation status chosen by the patient here’; (b) the patient is currently incapacitated with regard to making a DNAR status choice: ‘If the patient does not currently have decision-making capacity, please try to establish the declared (advance directive) or presumed (legal proxy) patient will and enter this information. If no such information is reasonably available, and if you are interested in an automated preference prediction, please click here.’ The AI will calculate the likely outcome of CPR for this patient based on the available clinical data and will then make a prediction regarding the likely patient preferences for this outcome, for example, ‘X% of patients with the same outcome have set the DNAR status to “CPR no”. If the patient regains decision-making capacity, please make sure to consult the patient regarding his or her DNAR status choice.’ A schematic sketch of this workflow follows the box.
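To make the decision flow of scenario 2 explicit, the sketch below encodes the triage between patient conversation, declared or presumed will, and automated prediction. The Patient fields, the prompt texts and the retained ‘X%’ placeholder are hypothetical stand-ins, not an actual interface specification:

```python
# Hypothetical workflow sketch for scenario 2; field names and messages
# are invented, and 'X%' is kept as a placeholder from the scenario.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    has_decision_capacity: bool
    advance_directive: Optional[str] = None
    legal_proxy_statement: Optional[str] = None

def resolve_missing_code_status(patient: Patient,
                                wants_prediction: bool) -> str:
    """Triage a missing resuscitation status; returns the next prompt
    the system should show, never a binding decision."""
    if patient.has_decision_capacity:
        # Option (a): the status must come from the patient directly.
        return ("Please discuss the issue with the patient and enter "
                "the resuscitation status chosen by the patient.")
    if patient.advance_directive or patient.legal_proxy_statement:
        # Declared or presumed will takes precedence over any prediction.
        return ("Please establish the declared (advance directive) or "
                "presumed (legal proxy) patient will and enter it.")
    if wants_prediction:
        # Only now is a consultative AI prediction offered, e.g. via the
        # two-stage outcome/preference model sketched earlier.
        return ("X% of patients with the same predicted outcome set the "
                "DNAR status to 'CPR no'. Re-consult the patient if "
                "decision-making capacity returns.")
    return "Resuscitation status remains to be clarified."

print(resolve_missing_code_status(Patient(has_decision_capacity=False),
                                  wants_prediction=True))
```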

The examples given in box 3 hint at the many questions that the implementation of such a system would raise: What information can or should the system reveal about how it calculated the preferences (transparency, explainability)? How do we know an AI system is good enough for use in clinical routine? What if the AI system yields a result the physician considers highly implausible or inappropriate? And so on.

Deploying AI-based support for resuscitation decisions is a high-risk, high-reward undertaking. Whereas the potential to improve decision-making and alleviate the burden on treatment teams is considerable, particularly when little is known about a patient and when various courses of action are justifiable, a poorly designed system can reinforce current flaws in decision-making and introduce new ones. Our qualitative pilot provided just a first exploration of the views of health professionals as one potential user group. Critical scrutiny from many perspectives—including, importantly, those of patients and legal representatives—will be key to increasing the chances for success, not only with a view to user acceptance but also to the responsible use of AI.

Conclusion and outlook

Leveraging AI for healthcare decision support raises critical questions at the interface of medicine, ethics and computer science. These questions include issues of bias and fairness as well as of autonomy and accountability.49–51 The exploration of an example of a specific decision support system—in our case, AI support for decision-making with regard to a patient’s resuscitation status—provides an opportunity to discuss and address ethical and policy concerns in a very concrete, hands-on way.45 Given the sensitivity around DNAR as a life-or-death decision and the widespread notion that CPR decisions are highly personal and cannot be predicted by ‘machines’, we deliberately chose an application whose ethical complexity goes beyond that of, for instance, AI-based diagnostic devices.

Whenever we think about the potential risks and benefits of using AI support in a given area, it is important to acknowledge the ground truth. Our data suggest that current practices of reaching and documenting CPR decisions are fraught with challenges such as insufficient knowledge regarding patient preferences, time pressure and personal bias guiding care considerations. At the same time, we found considerable openness among clinicians to consider the use of AI-based decision support, although critical thoughts were offered regarding the relationship of such ‘algorithmic advice’ and human decision-making.

A carefully reflected, well-designed AI-based system can have an immediate, significant and practical impact on personalised healthcare by contributing to better outcomes of critical healthcare decisions. However, it is of prime importance that framework conditions are defined such that they justify citizens’ and health professionals’ trust in the AI system. We have suggested a model for how AI can contribute to improving decision-making around resuscitation and have proposed a set of ethically relevant preconditions—conceptual, methodological and procedural—that need to be considered in further development and implementation efforts. Evaluation standards for the performance of AI-based decision support systems will be urgently needed to make sure that unsuitable systems can be improved or discarded.

Once AI systems are unequivocally recognised as being more accurate and reliable than human practitioners in generating predictions or suggesting treatments and diagnoses, a shift in epistemic authority may occur.14 This shift will raise questions about a potential obligation to rely on these systems when engaged in medical decision-making processes52–54—an important debate that is, however, well beyond the scope of this paper.


Ethics statements

Patient consent for publication

Ethics approval

This study was reviewed and approved by the CEBES Institutional Review Board of the Institute of Biomedical Ethics and History of Medicine, University of Zurich. In accordance with the Swiss Federal Act on Research involving Human Beings, the study is exempt from review by the Cantonal Ethics Review Committee.

Acknowledgments

We gratefully acknowledge our interview partners who generously supported our study with their time and insights. We would also like to thank Professors Dr Petros Koumoutsakos, Dr David Blum and Dr Settimio Monteverde for valuable comments and advice.


Footnotes

  • NB-A and AF are joint first authors.

  • Contributors NB-A conceived the study and acquired the funding. NB-A and AF designed the study and led the development of the study materials, with input and consensus from all authors. FM and PB collected the empirical data. All authors contributed to the analysis and interpretation of the empirical data. NB-A and AF drafted the manuscript and all authors contributed to and approved the final version.

  • Funding This study formed part of a larger fellowship project on ‘Digital support of decision-making in health care’ (led by Nikola Biller-Andorno) at the Collegium Helveticum, an Institute of Advanced Studies carried by the University of Zurich, the Swiss Federal Institute of Technology (ETH) Zurich and the Zurich University of the Arts (ZHdK).

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.


  1. https://www.telegraph.co.uk/news/2016/05/01/unforgivable-failings-in-end-of-life-care-revealed-40000-dying-p/

  2. www.polst.org, last accessed on 25 July 2020.

  3. The interview guide has been translated into English and is available from the authors on request. We also produced English interview summaries and translations of key interview quotes.