TY - JOUR
T1 - For the sake of multifacetedness. Why artificial intelligence patient preference prediction systems shouldn’t be for next of kin
JF - Journal of Medical Ethics
JO - J Med Ethics
SP - 175
LP - 176
DO - 10.1136/jme-2022-108775
VL - 49
IS - 3
AU - Max Tretter
AU - David Samhammer
Y1 - 2023/03/01
UR - http://jme.bmj.com/content/49/3/175.abstract
N2 - In their contribution ‘Ethics of the algorithmic prediction of goal of care preferences’,1 Ferrario et al elaborate a from-theory-to-practice contribution concerning the realisation of artificial intelligence (AI)-based patient preference prediction (PPP) systems. Such systems are intended to help find the treatment that the patient would have chosen in clinical situations—especially in intensive care or emergency units—where the patient is no longer capable of making that decision herself. The authors identify several challenges that complicate the effective development, application and evaluation of such systems—and offer solutions to them. One of these issues is the question of who should ultimately use said systems. While it is undisputed that clinicians should use these AI systems in their decision-making process, there is an ongoing debate about whether next of kin should use them as well. The authors advocate that ‘access should be provided to both clinicians and loved ones with due explanations and as desired’. We disagree with this assessment and explain in our commentary why it is important that surrogates provide their own assessments with as little external (AI) influence as possible. Why do next of kin actually participate in the process of preference finding and treatment decision-making? A key reason is that clinicians usually …
ER -