Like many, I find the idea of relying on patient preference predictors (PPP) in life-or-death cases ethically troubling. As part of his stimulating discussion, Sharadin1 diagnoses such unease as a worry that using PPPs disrespects patients’ autonomy, by treating their most intimate and significant desires as if they were caused by their demographic traits. I agree entirely with Sharadin’s ‘debunking’ response to this concern: we can use statistical correlations to predict others’ preferences without thereby assuming any causal claim (although I am worried that blocking the conversational implicatures may be far harder in emotionally charged life-and-death contexts than at dinner parties). However, I suspect that, for at least some of us, our unease about PPPs stems from a different kind of ‘autonomy’ concern. In this commentary, then, I will explore this concern, and show how it relates to Sharadin’s work.
Very many of our preferences are caused, ultimately, by facts which are outside our control, such as our demographic features. However, I suggest that we can still act autonomously on the basis of such preferences, when they are preferences which we endorse. Imagine, for example, that Jane has grown up in a churchgoing environment, which has shaped many of her preferences, including in matters of life and death. In later life, she has reflected on her religion and has decided that its tenets are true. Even if her preferences result—in a causal sense—from demographic facts, she is no less autonomous when she acts on those preferences. Indeed, I think this is true even if Jane concedes that her current preferences depend on her upbringing; say, she holds that her past has made her receptive to the truth of its doctrine.
With this picture of autonomy in mind, let us now turn back to the case of PPPs. Imagine that an algorithm predicts of Jane, on the basis of her religious upbringing, that, in some particularly convoluted case, she would have a certain preference. Jane has never considered such a complex case, but, in fact, would have such a preference were she to do so. In this case, I suggest, using the algorithm to predict Jane’s preferences does not undermine her autonomy, because its use meshes with Jane’s self-understanding. To make this vivid, we could imagine that Jane herself might happily use the algorithm as a way of figuring out what she thinks. Imagine, by contrast, that our algorithm can also accurately predict Jane’s preferences by appeal to another demographic factor, such as her age. Jane, however, never thinks of herself or her actions in terms of her age. In this case, it seems to me that using the algorithm does not respect Jane’s autonomy, because its use does not mesh with her self-understanding.
To return to PPPs: some proponents of PPPs claim that their use promotes patient autonomy by increasing the chance that decisions are in line with patients’ preferences. Sharadin suggests a worry about this ‘thin’ conception of respect for autonomy: respecting agents’ autonomy involves not treating agents’ preferences as if they were caused by demographic factors. On this proposal, all uses of PPPs seem problematic (until we get to the rebuttal!). I agree with Sharadin that the ‘thin’ conception is wrong, but suggest a different ‘thick’ concern: respect for autonomy involves using categories which agents themselves would endorse when figuring out what they would want. This worry does not show that all uses of all PPPs are problematic. Some PPPs might use only demographic features which mesh with agents’ self-understanding. In practice, however, most PPPs will use both ‘endorsed’ and ‘non-endorsed’ categories, and, as such, using these tools to predict preferences will fail fully to respect autonomy. Cancelling the causal implicatures of prediction talk does nothing to resolve these concerns.
Why, though, think that the ‘endorsement’ concern, rather than Sharadin’s version of the autonomy concern, underlies our unease about PPPs? I cannot answer that challenge fully here, but, I suggest, the approach above explains why there seems to be something preferable about letting (well-intentioned!) relatives decide about treatment, rather than PPPs, even if the relatives are more likely to be wrong. Even if your sister or husband is more likely to be wrong than a PPP, at least she or he is more likely to (try to) reason in terms of categories and concepts which you would use yourself—categories you would endorse—than is a PPP. By analogy, even if Amazon is more likely to figure out what I want for my birthday than my son is, I might prefer that my son choose me a present by trying to ‘think as I do’ than by deferring to Alexa. The answer might be wrong, but at least it would be wrong for the right reasons!
Still, one might think, there would be some benefit in deferring to Alexa, if doing so would get me the birthday present I truly desire. Something similar is true, I suggest, in the case of PPPs: ensuring that incapacitated patients’ treatment is in line with their preferences does capture one important aspect of respect for autonomy. Hence, just as the ‘endorsement’ concern does not rule out all uses of PPPs, so there may be good autonomy-based reasons to use even those PPPs which employ unendorsed categories. PPPs make us uneasy not so much because we have a decisive autonomy-based objection to their use, but because their use creates a tension between different aspects of respect for autonomy.
In short, the relationship between concerns about PPPs and the notion of respect for autonomy may be far messier than Sharadin suggests. Quite where that leaves us, either normatively or with regard to the intriguing analogy with the legal setting, I leave to better minds.
Thanks to Nate Sharadin for very useful feedback on an earlier draft of this response.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.