Ugar and Malele1 critique the use of ‘generic’ technologies such as artificial intelligence (AI) and machine learning (ML) for mental health diagnoses, particularly in sub-Saharan African countries. They highlight how these AI medical tools often overlook traditional perspectives and local contexts. The article has the merit of addressing ethical issues concerning the particular risks of using AI and ML for health diagnosis in the Global South, an urgent and neglected topic.
According to the authors, the use of these AI technologies leads to overgeneralisation in diagnosing mental disorders, which is especially problematic in the mental health field because of the value-laden judgements intrinsic to the definition of mental disorders. This argument is theoretically grounded in the hybrid conceptualisation of mental disorders proposed by Wakefield.2 Wakefield’s perspective incorporates both factual and value components in defining mental disorders, framing them as context-dependent ‘harmful dysfunctions’ that are sensitive to social norms and cultural perspectives. However, the article offers no elaborated discussion justifying its adherence to this hybrid conceptualisation rather than to other theoretical conceptualisations …
Footnotes
Contributors MPLS conducted the research and prepared the manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.