‘Can I trust my patient?’ Machine Learning support for predicting patient behaviour
Florian Funer1, Sabine Salloch2
1 Institute for Ethics and History of Medicine, Eberhard Karls Universität Tübingen, Tübingen, Baden-Württemberg, Germany
2 Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Niedersachsen, Germany
Correspondence to Dr Florian Funer, Institute for Ethics and History of Medicine, Eberhard Karls Universität Tübingen, Tübingen, Baden-Württemberg, Germany; florian.funer{at}uni-tuebingen.de


Giorgia Pozzi’s feature article1 on the risks of testimonial injustice when using automated prescription drug monitoring programmes (PDMPs) turns the spotlight on a pressing and well-known clinical problem: physicians’ difficulty in predicting patient behaviour so that treatment decisions can be based on this information, despite its fallibility. As one possible way to improve prognostic assessments of patient behaviour, Machine Learning-driven clinical decision support systems (ML-CDSS) are currently being developed and deployed. To make her point, Pozzi discusses ML-CDSSs (e.g., ‘NarxCare’) that are supposed to provide physicians with an accurate estimate of the likelihood of narcotic, sedative and stimulant misuse by a given patient.

In cases of diverging assessments between human evaluators and automated systems, the medico-ethical discussion has so far focused mainly on disagreement between clinicians and Machine Learning (ML) algorithms,2 for example in ‘second opinions’ that have been reconstructed as disagreements between physician testimony and ML-CDSS testimony.3 Examples such as PDMPs, by contrast, draw attention to situations in which patient testimony potentially contradicts ML-CDSS testimony. Pozzi examines decision-making situations in which ML-CDSS testimony factually overrides the testimony of the person affected. Thus, in the presented case of the patient Kathryn, her testimony about her own physical and mental condition was insufficient to overturn the unfavourable outcome for …


Footnotes

  • Contributors FF and SS conceptualised the commentary. FF drafted the first version. FF and SS contributed equally to its revision. FF and SS both approved the final version for submission.

  • Funding The authors are funded by the German Federal Ministry of Education and Research (Project DESIREE, Grant ID 01GP1911A-D). FF was additionally supported by the VolkswagenStiftung (Digital Medical Ethics Network, Grant ID 9B 233). The funders had no involvement in the content of the manuscript.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
