Introduction
In their contribution, Ugar and Malele1 shed light on an often overlooked but crucial aspect of the ethical development of machine learning (ML) systems designed to support the diagnosis of mental health disorders. The authors confine their focus to the danger of misdiagnosing, as mental health pathologies, conditions that do not qualify as such within sub-Saharan African communities, and they argue for the need to include population-specific values in the design of these technologies. However, they offer no analysis of the nature of the harm done to these populations when their values go unrecognised.
Building on Ugar and Malele’s considerations, we add a further perspective to their analysis by showing that designing with intended values in mind is necessary to avoid epistemic injustices.2 First, we argue that failing to acknowledge the hermeneutical resources offered by the populations interacting with these systems can qualify as contributory injustice.3 Second, we show that this form of injustice paves the way for patterns of epistemic oppression that demand scrutiny, particularly given the epistemic authority these systems increasingly acquire.
Contributory injustice in ML for mental health support
Dotson’s concept of contributory injustice3 points out that, where there are blind spots in collectively shared epistemic resources, people in marginalised social positions often develop …
Footnotes
Contributors GP and MDP contributed equally to the manuscript.
Funding MDP’s contribution to this work was supported by the H2020 European Research Council (grant number 949841).
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.