In their paper ‘Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts’, Kempt and Nagel discuss the use of medical AI systems and the resulting need for second opinions by human physicians when physicians and AI disagree, a requirement they call the rule of disagreement (RoD).1 The authors defend RoD on the basis of three premises. First, they argue that in cases of disagreement in medical practice there is an increased burden of proof (better conceived as a burden of justification) on the physician in charge to defend why the opposing view is adopted or overridden. This burden of justification can be understood as an increased responsibility. By contrast, such a burden allegedly does not arise when physicians agree in their judgement. Second, in medical contexts where humans collaborate with humans, such justification can be provided, since human experts can discuss the evidence and reasons that led them to their judgements; in this way the sources of disagreement can be identified and a justified decision can be made by the physician in charge. Third, unlike human-to-human collaboration, such communicative exchange is not possible with an AI system: owing to the AI’s opacity, the physician in charge has no means of illuminating why the AI disagrees. Consequently, the authors propose RoD as a solution, according to which a second human expert should be consulted for advice in cases of human–AI disagreement. Once AI systems become more widespread in clinical practice, such disagreements can be expected to occur more frequently. AI, after all, is being implemented because it promises, among other things, higher accuracy, which implies that it will detect some abnormalities that the physician would have missed.2 Hence, it is laudable to discuss the moral implications of …
Footnotes
KRJ and MS contributed equally.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.