In their article, ‘Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts,’ Kempt and Nagel argue for a ‘rule of disagreement’ for the integration of diagnostic AI in healthcare contexts. The type of AI in question is a ‘decision support system’ (DSS), the purpose of which is to augment human judgement and decision-making in the clinical context by automating or supplementing parts of the cognitive labour. Under the authors’ proposal, AI decision support systems (AI-DSS) that produce automated diagnoses should serve chiefly as confirmatory tools: so long as the physician and the AI agree, the matter is settled, and the physician’s initial judgement is considered epistemically justified. If, however, the AI-DSS and the physician disagree, a second physician’s opinion is called upon to resolve the dispute. While the cognitive labour of the decision is shared between the physicians and the AI, the final decision remains at the discretion of the first physician, and with it the moral and legal culpability.
The putative benefits of this approach are twofold: (1) healthcare administrators can improve diagnostic performance by introducing AI-DSS without the unintended byproduct of a responsibility gap, and (2) assuming the physician and the AI disagree less often than second opinions are ordinarily requested, and the AI’s diagnostic accuracy exceeds or at least …
Footnotes
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.
Moreover, in cases of patient-initiated second opinions where the patient contacts an entirely different clinic of their own accord, it is unlikely that the second physician will have any contact with the first physician before rendering their own verdict, and this is by design; the patient wants an independent assessment performed by a physician with no ties to the first.
There are two ways of failing to meet the explanatory demands. The first concerns instances when physicians are incapable of the necessary reason-giving (ie, when a difference in clinical gestalt is impressionistic rather than explicitly reason-based or evidence-based). The second concerns instances wherein physicians are capable of the necessary reason-giving, but do not actually rely on or seek it out when asking for a second opinion, which calls into question the indispensability of the equal-view peer-disagreement model.