AI and XAI second opinion: the danger of false confirmation in human–AI collaboration
Rikard Rosenbacke (1), Åsa Melhus (2), Martin McKee (3), David Stuckler (4)

  1. Centre for Corporate Governance, Department of Accounting, Copenhagen Business School, Frederiksberg, Denmark
  2. Department of Medical Sciences, Uppsala University, Uppsala, Sweden
  3. Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, London, UK
  4. Department of Social and Political Science, Bocconi University, Milano, Italy

Correspondence to: Rikard Rosenbacke, Centre for Corporate Governance, Department of Accounting, Copenhagen Business School, Frederiksberg 2000, Denmark; rikard{at}rosenbacke.com

Abstract

Can AI substitute for a human physician’s second opinion? The Journal of Medical Ethics recently published two contrasting views: Kempt and Nagel advocate using artificial intelligence (AI) for a second opinion except when its conclusions diverge significantly from the initial physician’s, while Jongsma and Sand argue for a second human opinion irrespective of the AI’s concurrence or dissent. This debate hinges on the prevalence and impact of ‘false confirmation’: a scenario in which AI erroneously validates an incorrect human decision. Such errors seem exceedingly difficult to detect, reminiscent of cognitive heuristics such as confirmation bias. However, the debate has yet to engage with the emergence of explainable AI (XAI), which elaborates on why an AI tool reaches its diagnosis. To advance this debate, we outline a framework for conceptualising decision-making errors in physician–AI collaborations. We then review emerging evidence on the magnitude of false confirmation errors. Our simulations show that they are likely to be pervasive in clinical practice, decreasing diagnostic accuracy to between 5% and 30%. We conclude with a pragmatic approach to employing AI as a second opinion, emphasising the need for physicians to make clinical decisions before consulting AI, to employ nudges that increase awareness of false confirmations, and to engage critically with XAI explanations. This approach underscores the necessity of a cautious, evidence-based methodology when integrating AI into clinical decision-making.
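
To make the false confirmation scenario concrete, the following minimal Python sketch (our illustration, not the authors’ simulation code) models a binary diagnosis in which a physician’s initial call is checked against an AI second opinion. The accuracy figures and the probability of deferring to the AI on disagreement are illustrative assumptions, not values taken from the article.

    import random

    def simulate(n_cases=100_000, p_md=0.85, p_ai=0.90,
                 p_defer_on_disagreement=0.5, seed=1):
        """Monte Carlo sketch of a physician using AI as a second opinion.

        Assumes a binary diagnosis (so the AI 'agrees' with the physician
        exactly when both are right or both are wrong) and independent
        errors; shared biases in practice would raise the false
        confirmation rate.
        """
        rng = random.Random(seed)
        n_correct = 0
        n_false_confirm = 0
        for _ in range(n_cases):
            md_right = rng.random() < p_md
            ai_right = rng.random() < p_ai
            if md_right == ai_right:      # agreement on a binary label
                if not md_right:          # both wrong: a false confirmation
                    n_false_confirm += 1
                n_correct += md_right
            else:                         # disagreement: sometimes defer to AI
                final_right = ai_right if rng.random() < p_defer_on_disagreement else md_right
                n_correct += final_right
        return n_correct / n_cases, n_false_confirm / n_cases

    accuracy, fc_rate = simulate()
    print(f"final accuracy: {accuracy:.1%}; false confirmations: {fc_rate:.1%}")

Under these independence assumptions the false confirmation rate is simply (1 − p_md) × (1 − p_ai), about 1.5% with the parameters above; correlated errors between physician and AI, plausible when both rely on the same diagnostic cues, would raise this rate substantially.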

  • Ethics, Medical
  • Decision Making
  • Medical Errors
  • Information Technology

Data availability statement

All data relevant to the study are included in the article or uploaded as online supplemental information.

Footnotes

  • Contributors RR, the paper’s main author and guarantor, developed the initial research concept and drafted the paper. ÅM, MM and DS contributed to the refinement of the research idea and to the critical revision of the manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.