Agree to disagree: the symmetry of burden of proof in human–AI collaboration
  1. Karin Rolanda Jongsma1,
  2. Martin Sand2
  1. Medical Humanities, University Medical Center Utrecht, Utrecht, The Netherlands
  2. Department of Values, Technology and Innovation, TU Delft, Delft, The Netherlands
  Correspondence to Dr Karin Rolanda Jongsma, Medical Humanities, University Medical Center Utrecht, Utrecht 3508 GA, The Netherlands; K.R.Jongsma@umcutrecht.nl


In their paper ‘Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts’, Kempt and Nagel discuss the use of medical AI systems and the resulting need for second opinions by human physicians when physicians and AI disagree, which they call the rule of disagreement (RoD).1 The authors defend RoD on the basis of three premises. First, they argue that in cases of disagreement in medical practice there is an increased burden of proof (better conceived of as a burden of justification) on the physician in charge to defend why the opposing view is adopted or overridden. This burden of justification can be understood as an increased responsibility. Allegedly, no such burden arises when physicians agree in their judgement. Second, in medical contexts where humans collaborate with humans, such justification can be provided: human experts can discuss the evidence and reasons that led them to their judgements, the sources of disagreement can be identified, and a justified decision can be made by the physician in charge. Third, unlike in human-to-human collaboration, such communicative exchange is not possible with an AI system. Owing to AI’s opacity, the physician in charge has no means of illuminating why the AI disagrees. The authors therefore propose RoD as a solution: a second human expert should be consulted for advice in cases of human–AI disagreement. Once AI systems become more widespread in clinical practice, such disagreements can be expected to occur more frequently. AI, after all, is being implemented because it promises, among other things, higher accuracy, which implies that it will detect some abnormalities that the physician would have missed.2 Hence, it is laudable to discuss the moral implications of …


Footnotes

  • KRJ and MS contributed equally.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.