Physicians and other healthcare professionals are increasingly finding ways to use artificial intelligence decision support systems (AI-DSS) in their work. IBM Watson Health, for example, is a commercially available technology that provides AI-DSS services in genomics, oncology, healthcare management and more.1 AI’s ability to scan massive amounts of data, detect patterns and derive solutions from those data is vastly superior to that of humans. AI technology is undeniably integral to the future of healthcare and public health, and thoughtful consideration of the legal, ethical and moral issues surrounding this technology is a must.
The authors of the article, Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts, provide an informed discussion of how AI-DSS may be used, both practically and ethically, to assist healthcare professionals in cooperative diagnostic processes.2 The authors propose a process whereby an AI-DSS would provide a physician with a second opinion, and, when the two opinions conflict, another physician would provide a third opinion. This approach maintains a ‘physician-in-charge’ perspective, holding that decisions must ultimately be made by a person. It is also consistent with the ‘physician-in-the-loop’ concept: even as increasingly autonomous AI-DSSs are put to use, physicians will continue to provide checks and oversee clinical decisions.
The authors conceptualise AI as a replacement for human cognitive labour. That is, AI is used to supplant a human professional in particular functions …
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.