Abstract
I analyse an argument according to which medical artificial intelligence (AI) represents a threat to patient autonomy—recently put forward by Rosalind McDougall in the Journal of Medical Ethics. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: (1) it confuses AI with machine learning; (2) it misses machine learning’s potential for personalised medicine through big data; (3) it fails to distinguish between evidence-based advice and decision-making within healthcare. I conclude that how much and which tasks we should delegate to machine learning and other technologies within healthcare and beyond is indeed a crucial question of our time, but in order to answer it, we must carefully analyse and properly distinguish between the different systems and the different delegated tasks.
Keywords: ethics
Artificial intelligence (AI) has been coming for so long now that in the meantime even Godot has arrived. But technological threats are once more the fashion of the time, and so AI, robots and also—as a relatively new entry—machine learning are about to take over, again; or so academic techno-apocalypticists argue.1
Every domain has its own favourite menace: autonomous weapons (or, as the campaign against them likes to call them, ‘killer robots’2,i), self-driving cars,3 and IBM Watson for Oncology.4 This latter healthcare technology—a decision-support system that ranks cancer therapeutic options—is apparently a ‘significant threat to patient autonomy’ (4: p. 1).ii
The argument against IBM Watson for Oncology (from now on Watson for short) goes as follows: “AI systems that recommend treatment options present a potential threat to shared decision making, because the individual patient’s values do not drive the ranking of treatment options… So there are two main reasons to see these types of AI systems as a potential threat to shared decision making. First, the values driving the treatment rankings are not specific to the individual patient… Second, these types of AI systems currently do not encourage doctors and patients to recognise treatment decision making as value-laden at all. There is a danger that the computer is seen as figuring out the right answer” (4: pp. 2–3).
There are at least three problems with this argument:
The argument confuses AI and machine learning.
The argument underestimates the potential of personalised medicine and big data.
The argument fails to distinguish between evidence and decision-making within healthcare.
I will analyse these three problems in turn to show that Watson is not the threat it is made out to be. To begin with, AI and machine learning are different and this difference matters, especially when it comes to ethical arguments about possible threats. Watson is, at best, a machine learning algorithm; it is not, nor does it want to be, AI. Indeed, the whole point of decision-support systems like Watson is that they have the computational capacity to do things that humans do not have the time, space or cognitive ability to do; on the other hand, the point of AI is to develop artefacts that can do the kinds of fancy and creative things only humans were thought able to perform.
There is, in short, an important asymmetry: machine learning is in principle much more powerful—computationally—than AI because its capacity goes well beyond human capacity; but AI might be a lot scarier than machine learning because it mimics humans without being alive. And so it matters in ethical and political debates about the risks and threats of a certain technology (and this applies to academia and the media alike) whether we talk of AI or machine learning, because using the former terminology instead of the latter might suggest a bigger threat that machine learning—with all the important ethical issues that it itself throws up—does not necessarily pose. So we should keep AI and machine learning separate, particularly when evaluating possible risks and threats of these technologies from an ethical point of view.
Having said that, it is also important not to underestimate machine learning algorithms by, for example, failing to distinguish them from mere computational algorithms—the difference being, basically, that the latter are fully programmed while the former can learn and ‘program themselves’ from a combination of initial programming plus new and large data sets.5 There are questions—for example, issues of responsibility—which are raised by machine learning algorithms but not by mere computational algorithms; so it is important, from an ethical point of view, that Watson—despite not qualifying as AI—does use machine learning algorithms (which is more precise than just saying that Watson is a machine learning algorithm).
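To make this contrast concrete, consider the following minimal sketch in Python. It is purely illustrative: the function names, the toy data and the threshold rule are all hypothetical and bear no relation to Watson's actual implementation; the only point is that the first rule is fixed in advance by the programmer, whereas the second is derived from data and changes when the data change.

```python
# Purely illustrative contrast between a 'mere' computational algorithm
# and a minimal learning algorithm; all names and data are hypothetical.

# 1. A fully programmed rule: the decision threshold is fixed by the programmer.
def fixed_rule(tumour_marker: float) -> str:
    return "refer" if tumour_marker > 5.0 else "monitor"

# 2. A minimal 'learning' rule: the same kind of threshold, but derived from
#    past cases rather than hard-coded.
def learn_threshold(cases: list[tuple[float, str]]) -> float:
    """Pick the cut-off that misclassifies the fewest past cases."""
    candidates = sorted(marker for marker, _ in cases)

    def errors(threshold: float) -> int:
        return sum(
            ("refer" if marker > threshold else "monitor") != label
            for marker, label in cases
        )

    return min(candidates, key=errors)

past_cases = [(2.1, "monitor"), (3.8, "monitor"), (6.5, "refer"), (7.2, "refer")]
threshold = learn_threshold(past_cases)

def learned_rule(tumour_marker: float) -> str:
    return "refer" if tumour_marker > threshold else "monitor"

# The two rules can disagree, and the learned one shifts as new data arrive.
print(fixed_rule(4.0), learned_rule(4.0))
```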
The first conclusion is, then, that techno-apocalyptic arguments about threats and risks of a particular innovation should not oversell that innovation, as in claiming that it is AI while it is—only—machine learning. This is also important because—as we have already emphasised—machine learning is computationally so much more powerful than an artefact that mimics human functions; otherwise, there would be no point in delegating the task in question—ranking of therapy options—to the machine; we could just have a human doctor do it instead (that would certainly be cheaper than AI). So properly identifying the technology in question, Watson, for what it is—namely machine learning—is also important in order to correctly understand the clinical and economic motivation behind using that technology: that it is computationally so much more powerful at a relatively lower cost.iii
Let me make clear that the distinction between AI and machine learning is neither a purely terminological matter nor just a conceptual one: even if we accept the distinction between general AI and narrow AI, there are differences in both benefits and risks between artificial intelligence and machine learning algorithms. So whether some technological innovation—Watson in this case—counts as one, machine learning, rather than the other, AI, makes a crucial difference to its ethical evaluation.iv
The second problem has to do with the argument against Watson claiming that the advice it gives is not specific to the patient’s individual values, for example, a patient’s preference for further therapy as opposed to palliative care. This is an important issue, particularly given that we are talking about delegating to technology delicate tasks that have forever been performed by human doctors. Within this argument, one could easily take one more step and accuse Watson of the dark side of delegation, namely outsourcing. If Watson is cheaper and more effective—and if its being cheaper and more effective comes at the cost of the individual patient and her wishes and values being lost in all the computation—that would be a serious ethical and political concern because the human really would be lost in the corporate machine.
Tempting as this techno-apocalypticism is, it misses the point of why there is so much interest in machine learning, big data and personalised medicine6 7: basically, that the computational possibilities of machine learning and big data are of a size such that we can personalise healthcare in ways previously unimaginable. It is in fact true that Watson must take into account personal values and wishes, but the point is that Watson—with the computational possibilities opened up by its machine learning algorithms—is particularly well placed to do exactly that—and definitely better placed than overworked oncologists anyway.
Specifically, McDougall claims that “the values driving the treatment rankings are not specific to the individual patient”.4 But her argument aims at a degree of generality—against the risks of too much delegation to ‘AI systems’4—such that it would be better off not being based on a particular version or update of a particular system by a particular company. So the more interesting question is whether machine learning algorithms like Watson’s have the capabilities to include the complex diversity of patients’ values and preferences in their computation—and the evidence points not just towards a positive answer to this question but also towards the stronger claim that such systems—if used responsibly—are particularly well placed to deal with human diversity and uniqueness.5–9,v
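To illustrate, and only to illustrate, what including patients' values in the computation could mean, here is a hypothetical sketch: the treatment options, the scores and the weighting scheme are invented for the example and are not drawn from Watson. The point is simply that explicitly elicited patient preferences can enter the ranking itself rather than being appended to an already ranked list.

```python
# Hypothetical sketch of preference-sensitive ranking; not Watson's method.
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    survival_benefit: float  # toy clinical score, 0..1
    burden: float            # toy side-effect / time-in-hospital score, 0..1


options = [
    Option("aggressive chemotherapy", survival_benefit=0.7, burden=0.8),
    Option("milder regimen", survival_benefit=0.5, burden=0.4),
    Option("palliative care", survival_benefit=0.1, burden=0.1),
]


def rank(options: list[Option], weight_benefit: float, weight_burden: float) -> list[Option]:
    """Rank options by a weighted trade-off between benefit and burden.

    The two weights stand in for the patient's elicited values, e.g. how much
    she prioritises length of life over quality of life.
    """
    def score(option: Option) -> float:
        return weight_benefit * option.survival_benefit - weight_burden * option.burden

    return sorted(options, key=score, reverse=True)


# A patient who prioritises survival and a patient who prioritises avoiding
# burden get different rankings from the same clinical evidence.
print([o.name for o in rank(options, weight_benefit=1.0, weight_burden=0.2)])
print([o.name for o in rank(options, weight_benefit=0.3, weight_burden=1.0)])
```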
Interestingly, we can now see that while the first problem was that the argument against Watson overestimates it by talking of AI instead of machine learning, the second problem is that the argument against Watson underestimates it by accusing it of making medicine impersonal, which is the very opposite of what Watson is—supposedly—particularly good at, namely using its computational might and big data to offer personalised individual recommendations.
The third problem is closely related: what is it that, in the end, Watson and other machine learning systems deliver? Or, to put this question in action-theoretical terms: what is it that we delegate to Watson? It is important to distinguish between delegating to Watson the task of advising us—for example, by providing evidence that would have otherwise not been available within a reasonable timeframe—and delegating to Watson the decision-making task. The latter would indeed be potentially ethically problematic, even more so once we think of autonomous weapons or self-driving trolley-ology, especially with relation to responsibility.
But we do not need to delegate that much to Watson, and this point is closely related to the first two problems: if Watson were an ‘AI system’, then maybe we would be tempted to delegate decision-making; and then maybe we would even have to be afraid that such a system would take it on itself to make its own decisions. But that is not what Watson is; what Watson is particularly good at is using its computational capacities to provide quick and cheap evidence-based advice to the clinician, who will then ultimately make the therapeutic decision herself, together with the patient. And indeed here a further distinction is in order between diagnostic systems and therapeutic support systems like Watson, as the former might raise some issues in terms of threats and risks that the latter do not give rise to.9
Here, McDougall’s worry is that “patient values should not be discussed as a reaction to an already ranked list. Such an approach diminishes the patient’s role and represents a backward step in respecting patient autonomy”.4,vi This objection is interesting because it mirrors a classic techno-apocalyptic move: worrying about X only once it is delegated to technology, when X should have worried us all along with respect to humans performing the task too.vii Namely, it is indeed true (and has always been true) that medical professionals should not exercise excessive pressure in giving advice and should avoid framing and biasing patient decision-making. But hopefully, the supposed neutrality of algorithms can counter this worry instead of exacerbating it.viii
Concluding, it is indeed a crucial ethical and political question—possibly one of the most important questions of this century—how much we should delegate to technology and in particular to complex technologies such as machine learning algorithms.10 But exactly because that question is so important, it is paramount that we properly distinguish between different kinds and levels of tasks that we may or may not legitimately delegate to machine learning algorithms. And here evidence-based advice and decision-making are importantly different: for example, because respect for autonomy requires that patients be involved in the latter but not in the former—so that delegating decision-making itself to the machine would constitute an irresponsible outsourcing of responsibility.
This leads us to the following surprising conclusion: the real worry might turn out to be that including the patient’s point of view in Watson’s calculations might compromise its advice by making it opaque and also not purely medical.7 And then we really could not responsibly base shared decision-making on it.
Acknowledgments
I would like to thank the following colleagues for very helpful comments: Claudia Bagge-Petersen, Rasmus Thybo Jensen, Aaro Tupasela and Thomas Grote, plus two referees and an editor for the JME.
Footnotes
ii Not to mention, I might add, the risk of medical unemployment, which really would be a first.
iii The economic argument is much more complicated than that—especially when it comes to IBM’s attempts to sell Watson to smaller healthcare systems and healthcare systems from third-world countries. Indeed, I believe that political and economic concerns that have to do with the back-door privatisation of healthcare through increasing dependence on technology are even more serious than ethical ones, but that is beyond the scope here (references removed).
iv Many thanks to an anonymous referee for pressing me on this point.
v Many thanks to an anonymous referee for pressing me on this point.
vi Many thanks to an anonymous referee for pressing me on this point.
vii The implicit assumption here being that medical professionals already present patients with ranked options and patients’ values only enter at this stage—thanks to an anonymous referee for suggesting this.
viii On the other hand, here we should be mindful of so-called algorithmic bias: algorithms are programmed by humans and we must be careful to avoid human bias being entrenched by being programmed into software.
Funding No relevant funding.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Correction notice This article has been corrected since it was first published. The corrected final proof has now been published.
Patient consent for publication Not required.