Testimonial injustice in medical machine learning: a perspective from psychiatry
  1. George Gillett
  1. Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
  1. Correspondence to George Gillett, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London WC2R 2LS, UK; george.1.gillett{at}kcl.ac.uk


Pozzi provides a thought-provoking account of how machine-learning clinical prediction models (such as Prescription Drug Monitoring Programmes (PDMPs)) may exacerbate testimonial injustice.1 In this response, I generalise Pozzi's concerns about PDMPs to traditional models of clinical practice and question the claim that inaccurate clinicians are necessarily preferable to inaccurate machine-learning models. I then explore Pozzi's concern that such models may deprive patients of a right to 'convey information'. I suggest that machine-learning tools may be used to enhance, rather than frustrate, this right, through the perspective of hermeneutical justice.

A false equivalence?

Pozzi objects to the introduction of machine-learning risk prediction tools in clinical care on the grounds that they are 'epistemically opaque', often inaccurate, and a threat to patients' testimonial justice. Through the example of psychiatry, I suggest this stance idealises traditional models of clinical practice. I propose that clinicians' judgements are often equally vulnerable to opaque subjectivity, bias and epistemic injustice. For instance, the use of clinical observation or collateral interviewing might be conceptualised as a threat to testimonial justice, by neglecting patients' own perspectives or voices. In practice, clinical risk prediction often follows inconsistent and somewhat subjective clinical heuristics, which are difficult to summarise or evaluate. Further, although Pozzi diminishes machine-learning …


Footnotes

  • Twitter @george_gillett

  • Contributors Sole contributor.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
