Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prescription drug monitoring programmes (PDMPs) to guide their decision making when prescribing opioids. This can create a conflict: a PDMP score may indicate that a patient is at high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to weigh the credibility and trustworthiness of the patient against the PDMP score.
Pozzi1 argues that a prescriber who downgrades the credibility of a patient’s testimony on the basis of a high PDMP score is epistemically and morally unjustified and commits a form of testimonial injustice. The result is that patients are silenced, excluded from decision-making processes and subjected to structural injustices. Additionally, the use of ML systems in medical practice raises concerns about perpetuating existing inequalities, overestimating such systems’ capabilities and displacing human authority. However, much the same critiques apply to human-based systems. Formalisation, ML systems included, should instead be viewed positively,2 precisely as a powerful means of beginning to erode these and other problems in ethically sensitive domains. In this case, the epistemic virtues of formalisation include promoting transparency, consistency and replicability in decision making. Rigorous ML systems can also help ensure that models …
Footnotes
Twitter @tfburns
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.