
Limits of trust in medical AI
  1. Joshua James Hatherley
  1. School of Historical, Philosophical, and International Studies, Monash University, Clayton, VIC 3194, Australia
  1. Correspondence to Joshua James Hatherley, School of Historical, Philosophical, and International Studies, Monash University, Clayton, VIC 3194, Australia; joshua.hatherley{at}monash.edu

Abstract

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks, such as detecting diabetic retinopathy from images, predicting hospital readmissions and aiding in the discovery of new drugs. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.

  • ethics
  • information technology
  • quality of health care


Introduction

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks; for instance, detecting diabetic retinopathy from images,1 predicting hospital readmissions2 and aiding in the discovery of new drugs.3 It has been suggested that AI will facilitate a variety of improvements in medical practice, ranging from economic savings to the improvement of empathetic communication between doctors and patients, from increased productivity to greater professional satisfaction and from improved health outcomes to an amplified rate of discovery in medical research.4 AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust, particularly between doctors and patients.5 6 In this paper, I will argue that there is merit to these concerns, since AI systems are not the appropriate objects of trust under any familiar philosophical account of trust. This is critical since, as I will argue in section 3, AI systems are likely to displace the epistemic authority of human clinicians if they come to exceed them in performance. As such, I will argue that insofar as patients are required to rely on AI systems for their medical decision-making, AI threatens to produce a deficit in trusting clinical relationships between doctors and patients.

Trust in medicine

Trust has both intrinsic and instrumental significance in medicine.i Intrinsically, trust is what imbues the doctor-patient relationship with its uniqueness and importance. A patient comes to a physician in a state of sickness and vulnerability, and is thereby forced to place their trust in another person to treat them with competence and, ideally, empathy and care. This vulnerability of the patient is what imbues the relationship with inherent value, since ‘trust is inseparable from vulnerability, in that there is no need for trust in the absence of vulnerability’.7 The vulnerability of the patient, and the resulting power of the physician, place the physician under a fiduciary obligation to behave in a morally upright and appropriate manner, and to use their authority in the service of the patient rather than themselves or some other end.

Trust also has instrumental value in medicine. First, patients are more likely to accept and act in accordance with their physician’s judgement if they have a trusting relationship with them. They are more likely to demonstrate ‘willingness to seek care, reveal sensitive information, submit to treatment, participate in research, adhere to treatment regimens, remain with a physician and recommend physicians to others’.7 Second, it is speculated that trusting doctor-patient relationships have a number of therapeutically valuable effects on patients—improved patient outcomes and placebo effects, for example. Finally, a good physician is one who can demonstrate care for their patients, and patients are more likely to feel that they have been adequately cared for when they trust the person caring for them.

AI in medicine

AI’s effect on relations of trust between doctors and patients is bound up with the precise role that AI may come to occupy in medical practice and the epistemic authority that it comes to hold in clinical decision-making procedures. If AI systems are eventually adopted as merely another tool at the clinician’s disposal—akin to a stethoscope, thermometer or blood pressure monitor—the effect of these systems on trust would likely be minimal. Patients, of course, would rely on the accuracy of these tools, but their trust would be staked in the judgement of the human physician who interprets their outputs and incorporates them into their own clinical judgements. However, recent developments in areas such as deep learning suggest that the epistemic authority of human clinicians in clinical decision-making will be challenged by the use of AI in medicine.

Researchers in AI are working busily to develop AI systems that can surpass the performance of human clinicians in diagnosis, prognosis and treatment selection4—three of the four fundamental tasks of the clinician, according to Eric Cassell.8,ii Indeed, a recent systematic review and meta-analysis comparing the performance of deep learning AI systems to human clinicians found that deep learning AI systems already match the accuracy of human clinicians in the performance of certain diagnostic tasks.9,iii If AI succeeds in surpassing the performance of human clinicians in such principal medical tasks, how might this affect the epistemic authority of human clinicians in clinical practice?

The prospect gestures at an important problem currently faced in the sciences, which Paul Humphreys has called our ‘anthropocentric predicament’. Humphreys argues that advanced technologies have produced a situation in which ‘an exclusively anthropocentric epistemology is no longer appropriate because there now exist superior, non-human, epistemic authorities. So we are now faced with a problem, which we can call the anthropocentric predicament, of how we, as humans, can understand and evaluate computationally based scientific methods that transcend our own abilities’.10

There have been two principal kinds of response to medicine’s anthropocentric predicament in the wake of medical AI, which I will refer to as substitutionism and extensionism. Substitutionists argue that advanced AI will eventually make doctors obsolete by surpassing them in the performance of key clinical tasks and roles.11 Extensionists, in contrast, argue that AI will simply extend and improve on the capabilities and competencies of human clinicians without replacing them outright. In particular, this is because AI systems lack emotional intelligence and empathy, abilities that are essential in the delivery of healthcare, meaning that a human presence will still be necessary.12 Yet in both camps, the likely disruptive impact—what Liu and colleagues have labelled a ‘seismic shift’13—that AI will have on medicine is largely undisputed. Although extensionists rail against the substitution of clinicians, the likelihood of their displacement in key clinical roles is often acknowledged. For instance, Eric Topol, a principal physician advocate for the use of AI in medicine, and his colleague Saurabh Jha claim that ‘(j)obs are not lost; rather, roles are redefined; humans are displaced to tasks needing a human element’.14

This displacement of the roles of human clinicians in the wake of advanced medical AI reflects a displacement of their epistemic authority. Indeed, if AI surpasses the performance of clinicians in key clinical tasks, doctors will have an epistemic obligation to defer to the judgements of the machine or align their judgements with the AI in their clinical decision-making.15 As Bjerring and Busch have argued, ‘if a practitioner knows of an epistemic source that is more knowledgeable, more accurate and more reliable in decision-making, she should treat it as an expert and align her verdicts with those of the source’.16 This displacement of the epistemic authority of clinicians would be necessary to realise some of the goals of the introduction of AI in medicine. Aside from the possible reduction of burdensome administrative tasks and the improvement of cost-efficiency in medicine, a primary motivation for research into medical AI is the potential to reduce the alarming prevalence of wastefulness and human error in medical practice.4 17 In order to achieve this, it would be necessary in most instances for human clinicians to give greater weight to the outputs of a supremely reliable AI system than to their own clinical intuitions and judgements.

The displacement of clinicians from a position of epistemic authority in clinical decision-making has important implications for relations of trust between patients and doctors, since it implies a displacement of patient trust from human clinicians to AI systems. In the next section, I will argue that this displacement of trust from humans to machines could lead to shallow relations of trust in clinical practice that are lacking in important respects.

Trust in AI

Trust has been a central topic of concern in the debate over AI and its many applications, with some private corporations and research organisations releasing guidelines for the development of trust and trustworthiness in AI.18 19 Concerns over the ‘black box’ nature of some AI systems—particularly deep learning—along with the threat of algorithmic bias have pushed the issue of trust to the forefront of debate.20 But what does it mean to say that one trusts an AI, or that an AI is trustworthy? A key response to this question has been to emphasise the centrality of reliability in trust. Alex John London claims that ‘(i)f the goal is to secure trust among stakeholders, then the accuracy of a system relative to viable alternatives must be a central concern’.21 Similarly, Zachary Lipton claims that if trust is ‘simply confidence that a model will perform well (… then) a sufficiently accurate model should be demonstrably trustworthy’.22

But is confidence in someone or something’s accuracy or reliability sufficient for trust? On many accounts of interpersonal trust that have been proposed in the philosophical literature, the answer to this question is no. According to these accounts, trusting someone to do x is more than merely relying on them to do x. Consider the following two scenarios:

  1. Stan, a thief, is planning a burglary. He has observed a wealthy homeowner, Jane, leaving her home at 09:00 and returning at 19:00 every Monday for the past month. Stan is hoping to go through with his planned burglary next Monday, and is relying on Jane to continue her pattern in order for his burglary to be successful.

  2. Brendan has a chronic illness that causes him significant pain and suffering. His illness is managed by his regular general practitioner, Dr Smith. Dr Smith has supported Brendan through his illness for 15 years. Brendan has recently been experiencing significantly more pain than usual, which is causing him extreme discomfort. He makes an appointment with Dr Smith, confident that she will be able to help him relieve this pain in some way.

In scenario (1), although the thief relies on Jane to leave her house at 09:00, it seems inappropriate to say that the thief trusts Jane to do so in the same way that Brendan trusts Dr Smith to successfully treat his illness in scenario (2), despite the fact that Brendan also relies on Dr Smith. How do we explain this intuition? What makes trusting someone more than merely relying on them?

Russell Hardin argues that reliance is insufficient for trust because trusting someone also requires a belief that one’s interests are encapsulated in the interests of the trusted person.23 ‘What matters’, claims Hardin, ‘(…) is not merely my expectation that you will act in certain ways but also my belief that you have the relevant motivations to act in those ways, that you deliberately take my interests into account because they are mine’.23 For Hardin, trust requires not only a predictive expectation on the part of the truster, but also a belief that one’s interests are encapsulated in the interests of the trusted person and that the trusted person has the right motivations for action. Indeed, Hardin claims that ‘I would not, in our usual sense, trust a fully programmed automaton, even if it were programmed to discover and attempt to serve my interests—although I might come to rely heavily on it’.23

Having the right kind of motivations for action is an important part of many other influential accounts of trust. Annette Baier, for instance, argues that reliance underdetermines trust because trust ‘seems to be reliance on (the trusted person’s) good will toward one, as distinct from their dependable habits, or only on their dependably exhibited fear, anger or other motives compatible with ill will toward one, or on motives not directed to one at all’.24 This emphasis on the good will of the trusted person is also central to Karen Jones’ account, wherein she claims that ‘to trust someone is to have an attitude of optimism about her goodwill and to have the confident expectation that, when the need arises, the one trusted will be directly and favourably moved by the thought that you are counting on her’.25 If the right kind of motivation is necessary for the kind of trust that we would usually recognise as interpersonal trust, then AI systems would not appear to be the appropriate objects of this kind of trust. Unlike a human clinician, AI systems have no goodwill towards us, nor any motivation to act in our interests. This may be at least part of the reason that some people are uncomfortable with the idea of placing their trust in an AI for important medical decisions or tasks.

Additionally, other philosophical accounts of trust distinguish between trust and reliance on the basis of normative and descriptive expectations: I rely on you when I predict that you will behave in a certain way, whereas I trust you when I judge that you ought to behave in a certain way.26 Trusting someone, that is, generates an obligation on the part of the trusted person to (at least genuinely attempt to) do what one is trusting them to do. There are some important limitations to this claim, for example, in circumstances where the trust that one has in another is misguided or unwelcome. Suppose, for instance, that one were to place their trust in a friend who is a dermatologist to remove their wisdom teeth. Trusting the dermatologist for this procedure would appear quite mistaken, given that the dermatologist does not have the expertise or competency to perform this task. Nor, presumably, would the dermatologist welcome this trust in any way.

But outside of this and other somewhat fanciful scenarios, clinicians do in fact have an obligation to perform those tasks that have been entrusted to them, provided of course that this trust has been communicated to them. This is precisely the nature of fiduciary obligations in medicine. If this is true, another limitation of trusting AI would also be demonstrated, since AI systems are not the appropriate objects of moral responsibility. In order for an agent to be morally responsible for an action, they must be blameworthy when they fail to come through on that action. But if an AI system were to incorrectly diagnose a patient, leading to their avoidable death, it would appear misguided or inappropriate to blame the AI for its error. Rather, one would generally look to the designers, the supervising clinician, the hospital and so on in order to apportion blame. Trusting a clinician generates a moral responsibility on the part of the clinician, while trusting an AI system generates a moral responsibility on the part of seemingly anyone but the AI system.

These considerations highlight two important deficits in relations between patients and medical AI systems, each of which stems from a lack of agency on the part of the AI. First, AI systems lack the right kind of motivation for trust—either in the form of encapsulated interest or a sense of good will—since they lack motivation entirely. Second, relations with AI systems cannot be said to be trusting relations, as one might have with a human clinician, since trust generates normative obligations that cannot be borne by an AI. To say that one can trust an AI is thus akin to saying that one can trust a naturally occurring phenomenon. Although I am supremely confident that tomorrow the sun will rise in the east and set in the west, there is no familiar sense in which I could reasonably be said to trust the sun to do so. Trusting relations, in other words, are exclusive to beings with agency, meaning that the displacement of human clinicians from a position of epistemic authority and privilege in the clinical encounter threatens to lead to relations of trust within medical practice that are shallow or deficient in important respects.

Conclusion

To say that one can trust an AI system, or that the AI is trustworthy, is merely to say that one can rely on the AI system, or that the system is reliable. Yet as we have seen, reliability is insufficient to generate a relation of trust on any familiar philosophical account of trust, all of which require characteristics essential and exclusive to beings with a form of agency. What does this mean for the pursuit of ‘Trustworthy AI’ initiated by the European Union’s High Level Expert Group on Artificial Intelligence (HLEG AI)?18 Although valuable, the pursuit of trustworthy AI rests on a notable conceptual misunderstanding, since AI systems are not the appropriate objects of trust or trustworthiness. Interestingly, this has also been suggested by a key member of the HLEG AI, Thomas Metzinger.27 Rather than trustworthy AI, this pursuit may be better served by being reframed in terms of reliable AI, reserving the label of ‘trust’ for reciprocal relations between beings with agency.

In contrast to AI, therefore, human clinicians can offer their patients the kind of rich interpersonal trust that imbues the doctor-patient relationship with its uniqueness and significance. Insofar as patients come to rely on AI systems, rather than human clinicians, for important medical assessments and decisions, they may be sacrificing opportunities for trusting relationships in medicine. More thoughtful engagement with the potential effects of AI on medical practice is needed to further understand the implications of this technology, so that it can be deployed in such a way as to reap its potential benefits while retaining those aspects of medicine—such as trust—that are particularly valuable for its functioning.

Acknowledgments

Thanks are due to Rob Sparrow and an anonymous reviewer from the Journal of Medical Ethics for their helpful comments and insight, which assisted me greatly in the preparation of this paper.

References

Footnotes

  • Contributors JJH is the sole author.

  • Funding Research for this paper was funded through an Australian Government Research Training Program Scholarship.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Two kinds of trust are discussed in relation to medicine and clinical practice: interpersonal and social.28 Interpersonal trust concerns trust between persons (eg, between doctors and patients), while social trust is more general and abstract, directed towards groups and institutions as opposed to individuals (eg, between patients and a particular hospital or the medical institution more generally). In this paper, I leave the issue of social trust to one side in order to focus on medical AI and interpersonal trust. All references to trust will henceforth refer exclusively to interpersonal trust.

  • The fourth is the identification of causes. Given that AI systems based on neural networks learn from correlations alone, their capacity to illuminate underlying causes of illness is limited.29

  • Importantly, the study identified a number of troubling methodological limitations in the broader literature comparing the performance of human clinicians to deep learning AI systems, so this finding ought to be taken with a grain of salt. Most alarmingly, of the 31 587 scholarly articles returned on a search for articles comparing the performance of deep learning systems and human clinicians, only 14 compared performance between the two groups on the same test data set.
