Introduction
In the article ‘Testimonial injustice in medical machine learning’, Pozzi argues that the prescription drug monitoring programme (PDMP) leads to testimonial injustice, as physicians are more inclined to trust the PDMP’s risk scores than the patient’s own account of their medication history.1 Pozzi develops this argument by discussing how credibility shifts from patients to machine learning (ML) systems that are supposedly neutral, with the result that distrust forms between patients and physicians. While there is merit to Pozzi’s central argument that PDMPs cause epistemic injustice, Pozzi mentions but ultimately glosses over the problem of automation bias. In this commentary, I discuss automation bias and the effect it has on clinical decision-making, as well as a technical problem exacerbated by the use of PDMPs that can potentially cause physical harm.
Unaccounted-for problems with ML systems
The article reiterates that the confidence physicians place in the PDMP’s risk scores over the patient’s testimony leads to misplaced trust in ML systems. What Pozzi describes here is known as automation bias, which occurs when people over-rely on the outputs of automated systems. …
Footnotes
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.