Are physicians requesting a second opinion really engaging in a reason-giving dialectic? Normative questions on the standards for second opinions and AI
Benjamin H Lang
Baylor College of Medicine, Houston, TX, USA
Correspondence to Dr Benjamin H Lang, Baylor College of Medicine, Houston, TX 77030, USA; Benjamin.Lang{at}bcm.edu


In their article, ‘Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts,’ Kempt and Nagel argue for a ‘rule of disagreement’ for the integration of diagnostic AI in healthcare contexts. The type of AI in question is a ‘decision support system’ (DSS), the purpose of which is to augment human judgement and decision-making in the clinical context by automating or supplementing parts of the cognitive labour. Under the authors’ proposal, AI decision support systems (AI-DSS) that produce automated diagnoses should serve chiefly as confirmatory tools: so long as the physician and AI agree, the matter is settled, and the physician’s initial judgement is considered epistemically justified. If, however, the AI-DSS and physician disagree, then a second physician’s opinion is called on to resolve the dispute. While the cognitive labour of the decision is shared between the physicians and the AI, the final decision remains at the discretion of the first physician, and with it the moral and legal culpability.
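To make the procedural structure of this proposal concrete, the following is a minimal sketch of the rule of disagreement as described above. It is not drawn from Kempt and Nagel's article; the names (Opinion, rule_of_disagreement, request_second_opinion) are purely illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Opinion:
    source: str      # e.g. "first_physician", "ai_dss", "second_physician"
    diagnosis: str

def rule_of_disagreement(
    physician: Opinion,
    ai_dss: Opinion,
    request_second_opinion: Callable[[], Opinion],
) -> Tuple[Opinion, List[Opinion]]:
    """Return the decision that carries forward, plus the opinions consulted.

    Agreement between the physician and the AI-DSS settles the matter:
    the physician's initial judgement is treated as epistemically
    justified. Disagreement triggers a human second opinion. In both
    branches the final decision (and with it moral and legal
    culpability) remains with the first physician.
    """
    consulted = [ai_dss]
    if physician.diagnosis == ai_dss.diagnosis:
        return physician, consulted             # confirmatory role: settled
    consulted.append(request_second_opinion())  # escalate to a human peer
    # The second opinion informs the first physician's discretion but
    # does not override it; a real workflow would return here for the
    # physician's (possibly revised) final judgement.
    return physician, consulted
```

The salient design choice in this sketch is that the AI-DSS never adjudicates: it can only confirm the first physician's judgement or trigger escalation to a human peer.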

The putative benefits of this approach are twofold: (1) healthcare administration can improve diagnostic performance by introducing AI-DSS without the unintended byproduct of a responsibility gap, and (2) assuming the physician and AI disagree less often than the general rate of requested second opinions, and the AI’s diagnostic accuracy supersedes or at least …


Footnotes

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.

  • Moreover, in cases of patient-initiated second opinions, where the patient contacts an entirely different clinic of their own accord, it is unlikely that the second physician will have any contact with the first before rendering their own verdict, and this is by design; the patient wants an independent assessment performed by a physician with no ties to the first.

  • There are two ways of failing to meet the explanatory demands. The first concerns instances when physicians are incapable of the necessary reason-giving (ie, when a difference in clinical gestalt is impressionistic rather than explicitly reason- or evidence-based). The second concerns instances wherein physicians are capable of the necessary reason-giving but do not actually rely on or seek it out when asking for a second opinion, which calls into question the indispensability of the equal-view peer-disagreement model.
