RT Journal Article
SR Electronic
T1 Primer on an ethics of AI-based decision support systems in the clinic
JF Journal of Medical Ethics
JO J Med Ethics
FD BMJ Publishing Group Ltd and Institute of Medical Ethics
SP e3
OP e3
DO 10.1136/medethics-2019-105860
VO 47
IS 12
A1 Matthias Braun
A1 Patrik Hummel
A1 Susanne Beck
A1 Peter Dabrock
YR 2021
UL http://jme.bmj.com/content/47/12/e3.abstract
AB Making good decisions in extremely complex and difficult processes and situations has always been both a key task as well as a challenge in the clinic and has led to a large amount of clinical, legal and ethical routines, protocols and reflections in order to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time as well as economic constraints and not least further endeavours as well as achievements in medicine and healthcare continuously raise the need to evaluate and to improve clinical decision-making. This article scrutinises if and how clinical decision-making processes are challenged by the rise of so-called artificial intelligence-driven decision support systems (AI-DSS). In a first step, this article analyses how the rise of AI-DSS will affect and transform the modes of interaction between different agents in the clinic. In a second step, we point out how these changing modes of interaction also imply shifts in the conditions of trustworthiness, epistemic challenges regarding transparency, the underlying normative concepts of agency and its embedding into concrete contexts of deployment and, finally, the consequences for (possible) ascriptions of responsibility. Third, we draw first conclusions for further steps regarding a ‘meaningful human control’ of clinical AI-DSS.