
Philosophical evaluation of the conceptualisation of trust in the NHS’ Code of Conduct for artificial intelligence-driven technology
Soogeun Samuel Lee
School of Medicine, Cardiff University, Cardiff, UK
Correspondence to Soogeun Samuel Lee, School of Medicine, Cardiff University, Cardiff, UK; soogeunlee98@gmail.com

Abstract

The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence (AI)-driven technologies, comprises 10 principles that outline a gold standard of ethical conduct for AI developers and implementers within the National Health Service. Considering the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of rationally justified trust, which posits that trust should be justified on sound epistemological bases, and a principle of value-based trust, which views trust in an all-things-considered manner. I argue rationally justified trust is largely infeasible in trusting AI due to AI’s complexity and inexplicability. Conversely, I show how value-based trust is more feasible as it is intuitively used by individuals. Furthermore, it better complies with Principle 1. I therefore conclude this essay by suggesting that the Code of Conduct hold the principle of value-based trust more explicitly.

  • information technology
  • ethics
  • philosophy of medicine

Data availability statement

No data are available.


Introduction

In 2018, the UK government published a Code of Conduct, hereafter the Code, for developing and implementing data-driven technology in the National Health Service (NHS). It outlines the government’s expectations of ‘suppliers and users of data-driven technologies’, particularly artificial intelligence (AI) technologies, when working in conjunction with NHS services.1 In this essay, I aim to evaluate the conceptualisation of trust in the Code.

I begin with a brief overview of AI and the black box problem of AI. I then provide a classical definition of trust and establish a framework more relevant to AI, based on Coeckelbergh’s phenomenological view of trust and Dennett’s theory of intentionality.2 3 Next, I examine the Code, specifically Principle 7, wherein trust is primarily addressed. I show that the Code presupposes two principles of trust to varying degrees of strength: an explicit principle that trust should be rationally justified on sound epistemological bases and an implicit principle that trust should be value-based, meaning an all-things-considered judgement of trustworthiness is used. I critique the explicit principle as impractical for lay-users (ie, non-medical experts and non-AI experts) due to AI’s esoteric technicality. I then illustrate how Principle 7 would in practice lead to value-based trust regardless of the explicit presupposition. Finally, I argue value-based trust is more appropriate with respect to AI’s complexity as it can address Principle 1—to accommodate specific user needs—while rationally justified trust cannot. I emphasise the importance of this notion by highlighting cultural differences around trusting within socio-technical environments. This leads to my conclusion, which recommends that the Code support the fostering of value-based trust more strongly.

At the time of writing, there has only been one direct evaluation of the Code published.4 My aim is therefore to also add to the participatory discourse with a philosophical evaluation of the Code.

Artificial intelligence

AI broadly refers to technologies that emulate processes normally associated with human intelligence, such as learning, knowledge-based reasoning, novel problem-solving and environmental sensory processing and interaction.5

In healthcare, AI has gained mainstream attention for algorithms achieving, and in some cases surpassing, consultant-level standards of diagnostic accuracy.6 Other heralded benefits of AI technologies include time savings capable of freeing healthcare staff from repetitive administrative work. Accordingly, in 2018, the UK Government dedicated £300 million of a £1 billion deal with the AI sector towards AI research in healthcare. Current partnerships exist between the NHS and AI developers including IBM, Google’s DeepMind, Babylon Health and Ultromics.7

Presently, the most effective AI are created through deep learning.8 Deep learning is a technical subject and its details are not necessary for the purposes of this essay.1 It will suffice to understand that deep learning simulates human learning processes by automating the analysis of massive amounts of data to create complex algorithms that ‘learn’ and identify real-world patterns to a standard equal or superior to human capabilities. As seen in figure 1, these algorithms mimic the brain using an artificial neural network composed of nodes that can be thought of as ‘neurons’, where information signals aggregate, are modified and are then propagated. Iterative analysis of large datasets is used to configure and fine-tune the nodes and their connections.8

Figure 1

A simplified diagram of how deep learning occurs. The connections between the nodes of an artificial neural network are too complex for a human to understand.
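
To make concrete how signals aggregate at nodes, are modified and are propagated, the following is a minimal illustrative sketch in Python (using NumPy). It is not the architecture of any clinical system discussed in this essay; the network size and weights are arbitrary placeholders rather than values learnt from data.

```python
# Toy forward pass through a tiny artificial neural network.
# Illustrative only: weights are random placeholders, not trained values.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The non-linear 'modification' applied to aggregated signals at each node.
    return np.maximum(0.0, x)

# 4 input features -> 8 hidden nodes -> 1 output.
# In deep learning these weights would be fine-tuned by iterative analysis
# of large datasets; here they are simply sampled at random.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(features):
    # Signals aggregate (weighted sums), are modified (relu) and are propagated.
    hidden = relu(features @ W1 + b1)
    return hidden @ W2 + b2

print(forward(np.array([0.2, 1.3, 0.0, 0.7])))  # a single, uninterpretable output
```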

The black box problem

Artificial neural networks (ANNs) are, simply put, too complex for humans to understand and so cannot be directly programmed by them. Thus, a key problem in AI research is the inability to explain how exactly an algorithm reaches an output. This is referred to as the ‘black box’ problem.8 Explanations of how algorithms work are therefore not strict explanations—first-order reasons explaining AI behaviour are unobtainable—but interpretations; understanding is based on assumptions gleaned from the relation between outputs and human interpretation of the training data. There is therefore an inherent inexplicability in AI.
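
As an illustration of the difference between explanation and interpretation, the sketch below probes a stand-in black box by perturbing one input at a time and observing how the output changes. The model, features and numbers are hypothetical; the point is only that any ‘importances’ recovered this way are inferred from input-output behaviour rather than read off the network’s internal reasons.

```python
# Post hoc interpretation of a stand-in black box model.
# Hypothetical example: the 'model' is random, not a real clinical algorithm.
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def black_box(x):
    # A trained model whose weights carry no human-readable meaning.
    return (np.maximum(0.0, x @ W1) @ W2).item()

baseline = np.array([0.2, 1.3, 0.0, 0.7])
for i in range(len(baseline)):
    perturbed = baseline.copy()
    perturbed[i] += 0.1  # nudge one input feature
    effect = black_box(perturbed) - black_box(baseline)
    print(f"feature {i}: output changed by {effect:+.3f}")

# The printed 'importances' are an interpretation gleaned from the relation
# between inputs and outputs, not a first-order explanation of the algorithm.
```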

Trust

According to Wolfensberger and Wrigley, trust is an attitude we hold towards others that allows us to rely on them. We constantly rely on others; we trust our parents, friends, taxi-drivers, doctors and so on. Trust is therefore essential to navigating our day-to-day lives. However, trust is not mere reliance. When we trust, we hold a normative expectation that the trustee will be competent in doing something. This places us in a vulnerable position and introduces the possibility for betrayal. This possibility is what distinguishes trust from other attitudes such as reliance and confidence.9

The significance of trust becomes more salient in medicine. Doctor-patient relations are characterised by a power imbalance: patients are generally epistemically poor in medical knowledge whereas doctors are privileged. Thus, patients tend to rely on the expertise of doctors to make often life-altering decisions. Implicitly, then, doctors are the ultimate decision makers in these cases.10 Furthermore, trust in decision makers is central to patient-centred care and directly related to positive clinical outcomes.11 Thus, if clinical decisions such as diagnosing and formulating management plans are to be AI-driven, patients must also be able to trust AI in a similar way to how they trust healthcare professionals.

However, traditional theories generally do not account for trust in AI. Moreover, most philosophical accounts of human-AI relations are typically conceptualised in terms of consciousness and ontological agency.12 13 As the aim of this essay is not to debate the ontological nature of AI, I will assume a Heideggerian-derived framework of trust in AI established by Coeckelbergh.2 He argues that our embedded existence within a socio-technical environment allows us to perceive technology as more than an ‘it’; our innate ability and tendency to anthropomorphise goal-directed entities permits us to perceive and treat them as ‘a type of entity we can relate to as social beings’ or a ‘quasi-other’. In doing so, we can trust AI in the same way we trust other humans.2 How we assess competence in quasi-others is best described by Dennett’s intentional stance, wherein we ascribe beliefs and desires to non-human entities to predict behaviour.3 I will therefore adopt the language of Dennett’s intentional systems in describing how we assess the competence of others and quasi-others.

It should be noted that some scholars, such as Hawley, reject the idea of trust in technological artefacts such as AI.14 However, that discussion falls beyond the scope of this essay. I hope supporters of such a stance may still find merit in this essay by replacing uses of ‘trust’ with ‘reliance’.

The Code of Conduct

The aim of the Code is to provide ethical guidance to AI developers and implementers within the NHS to ensure the public will ‘understand how and when data about them is shared’ and also be reassured the data is used only for ‘public good, fairly and equitably’, where AI is stated to be the most impactful usage of data.1 Ten principles are provided for ethical guidance. The Code addresses trust in Principle 7 (‘Show what type of algorithm is being developed or deployed, the ethical examination of how the data are used, how its performance will be validated and how it will be integrated into health and care provision’). It proposes that to ‘build trust in incorporating machine-led decision-making into clinical care’, AI algorithms must have a ‘clear and transparent’ specification requiring an explanation of six conditions:

  1. The functionality of the algorithm.

  2. The strengths and limitations of the algorithm (as far as they are known).

  3. Its learning methodology.

  4. Whether it is ready for deployment or training.

  5. How the decision has been made on the acceptable use of the algorithm in the context it is being used (eg, is there a committee, evidence or equivalent that has contributed to this decision?)

  6. The potential resource implications.

It can be seen that requirements (1)–(5) suppose that, given enough information about function, evidence base, strengths and limitations, individuals will be able to trust in AI. Function here means what clinical problem(s) the AI intends to solve and how it achieves this. Strengths and limitations encompass two factors: first, the efficacy of the AI within its domain of function, which can be assessed in quantifiable terms such as diagnostic accuracy (a worked example follows figure 2); second, the range of domains in which the AI is able to function. Limitations could be introduced as a matter of safeguarding—a diagnostic chatbot may immediately refer a patient to a human if red flag symptoms are raised rather than continue to talk to the patient. There may also be undesired limitations such as narrow domain usage (ie, the AI only works in populations very similar to the training data), bias, discrimination and poor specificity and sensitivity. Figure 2 summarises where these may occur.

Figure 2

Where bias can arise in the process of selecting, labelling and using datasets in training AI algorithms. Adapted from Joler and Pasquinelli.24 AI, artificial intelligence; BME, black and minority ethnic.
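
As a worked illustration of the quantifiable performance evidence that requirement (2) envisages, the short sketch below computes sensitivity, specificity and diagnostic accuracy from a hypothetical confusion matrix. The counts are invented for illustration and do not describe any real AI tool.

```python
# Hypothetical confusion matrix for a diagnostic AI evaluated on 1000 patients.
true_positives = 90    # diseased patients the AI correctly flags
false_negatives = 10   # diseased patients the AI misses
true_negatives = 850   # healthy patients the AI correctly clears
false_positives = 50   # healthy patients the AI incorrectly flags

sensitivity = true_positives / (true_positives + false_negatives)  # 0.90
specificity = true_negatives / (true_negatives + false_positives)  # ~0.94
accuracy = (true_positives + true_negatives) / 1000                # 0.94

print(f"sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, accuracy={accuracy:.2f}")

# Such figures are only as trustworthy as the dataset they were measured on;
# the biases summarised in figure 2 can inflate all three.
```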

Consequently, the Code assumes users are able to, and will, justify trust by weighing up risk and competence, where risk is the probability of an AI being incompetent at the function it is specified to fulfil, based on performative information (ie, quantitative empirical data). As such, Principle 4—‘Be fair, transparent and accountable about what data is being used’—ensures the data used to train AI is available to all users. This conceptualisation of trust could be summarised by a rationalist principle: when deciding to trust, it is preferable to estimate the trustworthiness of the potential trustee and the risk of harm one places oneself in by trusting them. It is best to estimate on sound epistemological grounds, that is, evidence that reflects the true purpose and competence of the trustee. To do so, access to external reasons, that is, empirical observations instead of preconceived beliefs, must be available.15 I call this principle rationally justified trust.

Another principle of justifying trust can be found in requirements (5) and (6). These requirements ask a meta-level question of trust: who has evaluated the evidence and decided an AI is trustworthy? The responsibility can lie at the level of the developers or the governing bodies responsible for introducing chatbot usage, for example, NHS Trusts. Justifications for trust can also stand at this level; an individual’s perceived competence of an AI can be derived from their perception of whether decision makers have their best interests in mind. Thus, this principle can be similarly summarised: given the difficulty of understanding AI, users should be able to justify trust using knowledge that their values are respected and coincide with those of whoever developed and allowed use of said AI. Therefore, to foster trust, developers and decision makers should provide information on how they encapsulate the interests of users; they should show their values are aligned with those of the users.15 I call this value-based trust.

Value-based trust is arguably implicit within the Code as the values of developers and decision makers are not explicitly demanded but can be inferred from the information provided by (5) and (6). For example, if a specification fulfilling (5) directs users to Google DeepMind’s research for evidence, users knowledgeable of DeepMind’s problematic conduct in data privacy may be less inclined to trust DeepMind’s AI.16 On interacting with DeepMind’s AI, a knowledgeable user may ascribe an intention to harvest data to the AI and distrust it. This could be problematic if it is a diagnostic AI that relies on a truthful and comprehensive medical history. Therefore, (5) and (6) support a justification of trust using the user’s perception of the values of the trustee from an all-things-considered point of view.

Evaluating the Code’s conceptualisation of trust

I will now look at the Code in the context of the identified presuppositions and argue for two positions. First, I argue rationally justified trust is infeasible for lay-users; second, I argue value-based trust is in practice how lay-users conceptualise trust in AI and is therefore more practical in fostering trust. I also illustrate how value-based trust accounts for individual user interests outlined in Principle 1 while rationally justified trust does not. I conclude by recommending adjustments to the Code to conceptualise trust using a principle of value-based trust.

One could argue that a sophisticated a posteriori interpretation of an algorithm’s outputs and knowledge of the developer’s intended function for the AI is sufficient to form a sound epistemological basis for rationally justifying trust. However, one could counter that the technical nature of AI and machine learning means an explanation of function that is clear to a lay-user will likely not provide enough information to form a sound epistemological basis for rationally justified trust. When complex information is translated into lay-explanations, the grounds on which trust is justified become less true to the AI’s designed purpose and instead become more perceptual, that is, they describe what function the AI appears to hold. For example, Babylon Health’s information page for their diagnostic AI never actually defines AI, instead opting to describe its function in four parts: ‘perceive, reason, simulate, learn’. Signposts to further information lead to external websites hosting technical papers.17 As seen in figure 3, complex AI such as diagnostic chatbots comprise subcomponents, each with its own functions. The technicality of these components renders a lay-explanation, such as Babylon Health’s, inadequate in grasping the various functions of complex AI. Thus, a ‘clear and transparent’ explanation that is understandable to a lay-user is likely unable to retain the fidelity of the developer’s intended function. If so, a sound epistemological basis—evidence that reflects the true purpose and competence of the trustee—cannot be reached.

Figure 3

The hierarchy of functions and complexity scale of a diagnostic chatbot. Potential clinical risks of components are shown. Adapted from Ream et al.25 AI, artificial intelligence; BME, black and minority ethnic.

The possibility of sound epistemological grounds is further weakened as the black box problem necessarily means that even the developers themselves cannot wholly understand the function of their AI. Importantly, there is no a priori process through which to validate whether a developer’s given description wholly encompasses the actual functions of an AI.

A valid counterargument to this position may be to propose that knowing the performance of an AI algorithm, for instance, specificity and sensitivity, may be sufficient to form a sound epistemological basis for rationally justified trust. For example, patients do not need to know the exact procedures of surgeries and can instead make their decisions using outcome and side effect rates. It may thus be reasonable to believe AI experts, healthcare professionals and well-informed lay-persons are able to hold this stance.

In response, I raise two objections. First, a rigid application of this position may discriminate against patients from low-literacy backgrounds who are less used to interpreting statistical risks; second, in practice patients do not make their decisions solely on the statistical probabilities of associated risks but rather in conjunction with the way a healthcare professional communicates risks, anecdotal evidence and their personal beliefs and values.18 Put differently, my objection is that the principle of rationally justified trust cannot account for the influence of extraneous, subjective factors.

For instance, trust in Babylon Health’s chatbot could be partially a result of the endorsement of the chatbot by the Secretary of Health—the decision maker responsible for introducing the Code.19 The Secretary’s endorsement can only function on a principle of value-based trust because his endorsement provides no extra functional information, say specificity or sensitivity, of the chatbot not already provided by Babylon Health. Similarly, when we learn of a company’s unethical behaviour, we immediately tend to distrust them and their products regardless of the product’s actual efficacy. The principle of value-based trust argues trust is able to account for these factors. It supposes trust can be fostered outside of providing functional information of AI; trust can be justified on an all-things-considered basis that we intuitively use by virtue of our embedded existence in a socio-technical environment.2

Evaluating value-based trust

I have argued the principle of value-based trust is tacit in (5) and (6) and intuitively used by lay-users. I will now discuss whether the Code should hold a stronger position of value-based trust. As mentioned, an advantage of value-based trust over rationally justified trust (assuming a situation where both are possible) is that trust founded on an all-things-considered level can immediately account for new information. On the other hand, re-evaluating rationally justified trust requires time-consuming appraisal of evidence and risks which are often hidden in technical papers.17 Sceptics of this position may argue that the potential for inauthentic or superficial virtue signalling is problematic. I argue the reputational stake, loss of trust and the asymmetric difficulty of gaining trust serve as a safeguarding impetus for authentic virtue signalling by developers and decision makers.9

A further advantage of explicitly advocating a principle of value-based trust is with reference to Principle 1 of the Code, ‘(to) Understand users, their needs and the context’. If Principle 7 is followed using a principle of rationally justified trust, a one-size-fits-all provision of functional and performative data is adequate. Even if information is distilled to different levels of knowledge, for example, an explanation for lay-users, medical professionals and AI experts, each explanation would be inflexible in meeting specific user needs to build trust. Conversely, value-based trust would necessitate the accommodation of specific user needs because the principle conceptualises trust as a normative endeavour embedded in, and thus influenced by, the socio-technical environment.2 15

Consider the hypothetical case of an elderly black and minority ethnic (BME) individual. Within BME communities, perceived barriers, that is, social factors rather than institutional regulations, exist in accessing health services. These can include a cultural distrust of physicians; clinicians’ lack of cultural awareness and naive understanding of racism; and inflexible clinicians incapable of providing culturally individualised support and care.20 Fischer et al found that elderly individuals were more reluctant to use healthcare technology due to technical illiteracy and a lack of ‘trust in the ability of healthcare technology to assist in decision-making’.21

Taken together, an elderly BME individual will have a complex, multifactorial attitude to trusting medical AI such as Babylon Health’s chatbot. The qualities of the trust relation formed are difficult to generalise theoretically. A history of negative experiences with medical professionals could mean BME individuals ascribe similar negative beliefs to a chatbot. Conversely, they may be more inclined to attribute neutral beliefs to a computer and deem a chatbot more trustworthy than a doctor. These attitudes cannot be explained solely by the epistemological, performative quality of AI functions. A principle of value-based trust acknowledges users’ socio-technical contexts, wherein different cultures have different norms and values. Value-based trust suggests presenting evidence that persuades users, in the sense that an emotional and normative rationale is evoked, that their positive values, for example, inclusion, confidentiality and goodwill, are encompassed by developers and governing bodies. This would in practice be achieved through the elimination of biases, as illustrated in figure 2, and through conveying the inclusive values used to create the AI. This more intuitive and pragmatic conceptualisation of trust may therefore ease the fostering of trust in AI.

In application, by acknowledging individuals’ reliance on the perceived values of developers and decision makers, the Code could impose regulations to explicitly mandate value-based trust alongside its current requirements. This could take the form of a seventh requirement for AI developers and implementers to show evidence of previous ethical conduct in data privacy and usage. Another possible mandate could be to require AI developers’ explanations to clearly link to a centralised and independent ethical-conduct rating. A similar service, updated monthly, is already provided by NHS Digital for assessing the quality of data used by NHS services.22 Once established, simply omitting the link could draw attention to potential untrustworthiness.

Conclusion

In this essay, I aimed to evaluate how the Code conceptualises trust. I examined the Code, specifically Principle 7, and extracted two principles: an explicit principle of rationally justified trust, where trust is based on a sound epistemological understanding of function and competence, and an implicit position of value-based trust, where an individual trusts on an all-things-considered perspective of AI developers and decision makers and whether their values align with the individual’s own. Rationally justified trust is generally infeasible because the complexity of AI means lay-explanations are unlikely to fulfil its epistemological requirements. Any subsequent trust relations are therefore likely to be value-based. Value-based trust is intuitively used and is therefore more practical for lay-users of health services. It can account for the influence of new information on an all-things-considered basis. Furthermore, in application, value-based trust better accommodates contextual user needs, as specified in Principle 1. The Code would therefore more adequately safeguard NHS user needs by also explicitly advocating a principle of value-based trust.

I was unable to approach important ethical discussions such as the kinds of contexts and environments in which AI is appropriate to use. Considering the multifaceted issues of trust in both healthcare and technology, the usage of AI in different communities and cultures needs to be carefully considered. I was also unable to consider what exactly is needed in an explanation of AI to adequately merit rationally justified trust. A subfield of AI research known as explainable AI is focused on providing tools to understand and interpret the functions of AI more accurately, although there are currently no widely accepted solutions.23 An answer may be provided by this field of AI. I would predict that as AI becomes more explainable, rationally justified trust may become a more feasible stance for individuals to hold. However, further discussion is needed.


Ethics statements

Patient consent for publication


Footnotes

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • See Hinton (2018) for a healthcare orientated technical explanation of AI.
