Article info
Commentary
Machine learning for mental health diagnosis: tackling contributory injustice and epistemic oppression
- Correspondence to Dr Giorgia Pozzi, Delft University of Technology, Delft, Netherlands; g.pozzi@tudelft.nl
Publication history
- Received April 4, 2024
- Accepted May 20, 2024
- First published May 30, 2024
- Online issue publication August 21, 2024
Copyright information
© Author(s) (or their employer(s)) 2024. No commercial re-use. See rights and permissions. Published by BMJ.