TY  - JOUR
T1  - What you believe you want, may not be what the algorithm knows
JF  - Journal of Medical Ethics
JO  - J Med Ethics
SP  - 177
LP  - 178
DO  - 10.1136/jme-2022-108778
VL  - 49
IS  - 3
AU  - Seppe Segers
Y1  - 2023/03/01
UR  - http://jme.bmj.com/content/49/3/177.abstract
N2  - Tensions between respect for autonomy and paternalism loom large in Ferrario et al's discussion of artificial intelligence (AI)-based preference predictors.1 To be sure, their analysis (rightfully) brings out the moral matter of respecting patient preferences. My point here, however, is that their consideration of AI-based preference predictors in treatment of incapacitated patients opens more fundamental moral questions about the desirability of over-ruling considered patient preferences, not only if these are disclosed by surrogates, but possibly also in treating competent patients. I do not advocate such an evolution; the moral desirability of that calls for a much broader debate, one in which the meaning of 'doing good' in medicine, and how this intersects with normative views on 'the goal(s) of medicine', would be central elements. While my aim in this piece is more modest, I nonetheless hope to approach it sideways, by indicating how the contribution by Ferrario et al reopens discussion about paternalism and the normativity of preferences in medicine. This follows from what these authors give as reason for endorsing this technology in care for incapacitated patients. Their argument for employing such tools in case of incapacitation is based on the premise that advice of surrogate decision-makers about care preferences is suboptimal because of their biased perception …
ER  -