Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prescription drug monitoring programmes (PDMPs) to guide their decision making when prescribing opioids. This can create a conflict: a PDMP Score may indicate that a patient is at high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to weigh the patient's credibility and trustworthiness against the PDMP Score.
Pozzi1 argues that a prescriber who downgrades the credibility of a patient's testimony on the basis of a high PDMP Score is epistemically and morally unjustified and contributes to a form of testimonial injustice. Patients are thereby silenced, excluded from decision-making processes and subjected to structural injustices. Additionally, the use of ML systems in medical practice raises concerns about perpetuating existing inequalities, about overestimating such systems' capabilities and about displacing human authority. However, much the same critiques apply to human-based systems. Formalisation, ML systems included, should instead be viewed positively,2 and precisely as a powerful means to begin eroding these and other problems in ethically sensitive domains. In this case, the epistemic virtues of formalisation include promoting transparency, consistency and replicability in decision making. Rigorous ML systems can also help ensure that models …
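To make the claimed epistemic virtues concrete, consider a minimal, purely illustrative sketch of a formalised risk-flag rule. The feature names, weights and threshold below are hypothetical and bear no relation to any deployed PDMP model; the point is only that a formalised rule is inspectable, applies consistently, and yields the same output for the same input.

```python
# Purely illustrative sketch: a toy, fully inspectable risk-flag rule.
# All field names, weights and the threshold are hypothetical; deployed
# ML-based PDMP scores are far more complex and typically proprietary.

from dataclasses import dataclass


@dataclass
class PrescriptionHistory:
    prescriber_count: int     # distinct prescribers in the past year
    pharmacy_count: int       # distinct pharmacies in the past year
    overlapping_scripts: int  # concurrently active opioid prescriptions


def risk_score(h: PrescriptionHistory) -> float:
    """Weighted sum of history features; every weight is visible."""
    return (0.4 * h.prescriber_count
            + 0.4 * h.pharmacy_count
            + 0.2 * h.overlapping_scripts)


HIGH_RISK_THRESHOLD = 4.0  # hypothetical cut-off, open to audit


def flag_high_risk(h: PrescriptionHistory) -> bool:
    """Identical inputs always yield the identical flag (replicability),
    and any disputed weight or cut-off can be pointed to directly and
    contested, unlike an unstated clinical hunch about credibility."""
    return risk_score(h) >= HIGH_RISK_THRESHOLD
```

Running flag_high_risk on the same history always returns the same answer, and a patient or auditor who disputes the outcome can identify exactly which feature, weight or threshold drove it; an informal human impression of a patient's trustworthiness offers no analogous object of scrutiny.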
Footnotes
Twitter @tfburns
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.