Giorgia Pozzi’s feature article1 on the risks of testimonial injustice when using automated prescription drug monitoring programmes (PDMPs) turns the spotlight on a pressing and well-known clinical problem: the challenge physicians face in predicting patient behaviour so that treatment decisions can be based on this information, despite its fallibility. Currently, machine learning-driven clinical decision support systems (ML-CDSS) are being developed and deployed as one possible way to improve prognostic assessments of patient behaviour. To make her point, Pozzi discusses ML-CDSSs that are supposed to provide physicians with an accurate estimate of the likelihood that a given patient will misuse narcotics, sedatives or stimulants (e.g., ‘NarxCare’).
Regarding cases in which human evaluators and automated systems reach diverging assessments, the medico-ethical discussion has so far focused mainly on disagreement between clinicians and machine learning (ML) algorithms,2 for example in ‘second opinions’ that have been reconstructed as disagreements between physician testimony and ML-CDSS testimony.3 Examples such as PDMPs, by contrast, draw attention to situations in which patient testimony potentially contradicts ML-CDSS testimony. Pozzi examines decision-making situations in which ML-CDSS testimony de facto overrides the testimony of the person affected. Thus, in the presented case of the patient Kathryn, her testimony about her own physical and mental condition was insufficient to overturn the unfavourable outcome for …
Footnotes
Contributors FF and SS conceptualised the commentary. FF drafted the first version. FF and SS contributed equally to its revision. FF and SS both approved the final version for submission.
Funding The authors are funded by the German Federal Ministry of Education and Research (Project DESIREE, Grant ID 01GP1911A-D). FF was additionally supported by the VolkswagenStiftung (Digital Medical Ethics Network, Grant ID 9B 233). The funders had no involvement in the content of the manuscript.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.