TY - JOUR
T1 - Are physicians requesting a second opinion really engaging in a reason-giving dialectic? Normative questions on the standards for second opinions and AI
JF - Journal of Medical Ethics
JO - J Med Ethics
SP - 234
LP - 235
DO - 10.1136/medethics-2022-108246
VL - 48
IS - 4
AU - Benjamin H Lang
Y1 - 2022/04/01
UR - http://jme.bmj.com/content/48/4/234.abstract
N2 - In their article, ‘Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts,’ Kempt and Nagel argue for a ‘rule of disagreement’ for the integration of diagnostic AI in healthcare contexts. The type of AI in question is a ‘decision support system’ (DSS), the purpose of which is to augment human judgement and decision-making in the clinical context by automating or supplementing parts of the cognitive labour. Under the authors’ proposal, artificial decision support systems (AI-DSS) which produce automated diagnoses should serve chiefly as confirmatory tools; so long as the physician and AI agree, the matter is settled, and the physician’s initial judgement is considered epistemically justified. If, however, the AI-DSS and physician disagree, then a second physician’s opinion is called on to resolve the dispute. While the cognitive labour of the decision is shared between the physicians and AI, the final decision remains at the discretion of the first physician, and with it the moral and legal culpability. The putative benefits of this approach are twofold: (1) healthcare administration can improve diagnostic performance by introducing AI-DSS without the unintended byproduct of a responsibility gap, and (2) assuming the physician and AI disagree less than the general rate of requested second opinions, and the AI’s diagnostic accuracy supersedes or at least …
ER -