Some physicians, in caring for patients at risk of misusing opioids, use machine learning (ML)-based prescription drug monitoring programmes (PDMPs) to guide their opioid-prescribing decisions. This can create a conflict: a PDMP Score may indicate that a patient is at high risk of opioid abuse while the patient expressly denies this. The prescriber is then left to weigh the PDMP Score against the credibility and trustworthiness of the patient's testimony.
Pozzi1 argues that a prescriber who downgrades the credibility of a patient's testimony on the basis of a high PDMP Score is epistemically and morally unjustified and thereby commits a form of testimonial injustice. Patients are consequently silenced, excluded from decision-making processes and subjected to structural injustices. The use of ML systems in medical practice additionally raises concerns about perpetuating existing inequalities, overestimating the systems' capabilities and displacing human authority. However, almost the very same critiques apply to human-based systems. Formalisation, ML systems included, should instead be viewed positively,2 and precisely as a powerful means to begin eroding these and other problems in ethically sensitive domains. In this case, the epistemic virtues of formalisation include promoting transparency, consistency and replicability in decision making. Rigorous ML systems can also help ensure that models …
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.