AI decision-support: a dystopian future of machine paternalism?
  1. David D Luxton
  1. Department of Psychiatry & Behavioral Sciences, University of Washington School of Medicine, Seattle, Washington, USA
  1. Correspondence to Dr. David D Luxton, Department of Psychiatry & Behavioral Sciences, University of Washington School of Medicine, Seattle, Washington, USA; ddluxton{at}uw.edu


Physicians and other healthcare professionals are increasingly finding ways to use artificial intelligence decision support systems (AI-DSS) in their work. IBM Watson Health, for example, is a commercially available technology that provides AI-DSS services in genomics, oncology, healthcare management and more.1 AI’s ability to scan massive amounts of data, detect patterns and derive solutions from data is vastly superior to that of humans. AI technology is undeniably integral to the future of healthcare and public health, and thoughtful consideration of the legal, ethical and moral issues surrounding this technology is a must.

The authors of the article, Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts, provide an informed discussion of how AI-DSS may be used, both practically and ethically, to assist healthcare professionals in cooperative diagnostic processes.2 The authors propose a process whereby an AI-DSS would provide a physician with a second opinion, and when the two opinions conflict, another physician would provide a third. This approach maintains a ‘physician-in-charge’ perspective, holding that decisions must ultimately be made by a person. It is also consistent with the ‘physician in-the-loop’ concept: even as increasingly autonomous AI-DSSs are put to use, physicians will continue to provide checks and oversee clinical decisions.

The authors conceptualise AI as a replacement for human cognitive labour. That is, AI is used to supplant a human professional in particular functions …


Footnotes

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
