Durán et al argue in ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’1 that the traditionally proposed solutions for making black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately insufficient to establish their overall trustworthiness. This is because the transparency procedures currently employed, such as the use of an interpretable predictor (IP),2 cannot fully overcome the opacity of such models. They argue that computational reliabilism (CR), an alternative approach to adjudicating trustworthiness that goes beyond transparency solutions, is more promising: CR brings the benefits of traditional process reliabilism in epistemology to bear on the problem of model trustworthiness.
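To make the notion of an interpretable predictor concrete, the sketch below illustrates one common post-hoc transparency procedure: training a shallow decision tree to mimic a black box model's outputs. This is a minimal illustration of the general surrogate technique, not Durán et al's own example; the dataset, model choices and names are assumptions made for this sketch.

```python
# Minimal sketch (illustrative assumptions throughout): a shallow decision
# tree acts as an "interpretable predictor" that approximates a black box.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque model: hundreds of trees, no human-scale rationale per prediction.
black_box = RandomForestClassifier(n_estimators=300, random_state=0)
black_box.fit(X_train, y_train)

# The surrogate IP: a depth-3 tree trained to mimic the black box's outputs,
# not the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate-to-black-box fidelity: {fidelity:.2f}")

# The human-readable rules the surrogate offers in place of the black box.
print(export_text(surrogate, feature_names=list(load_breast_cancer().feature_names)))
```

Note that the fidelity score measures only the surrogate's agreement with the black box, not the black box's reliability on the clinical task itself, which is one way of seeing why such transparency procedures alone cannot settle the trustworthiness question at issue.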
Durán et al’s explicitly reliabilist epistemology for assessing the trustworthiness of black box models is a timely addition to the transparency-focused approaches that currently dominate the literature. Their delineation of the epistemic from the ethical also serves the debate by clarifying the nature of the distinct problems at stake. However, their overall account underestimates the epistemic value of certain transparency-enabling approaches by conflating different types of opacity, and it oversimplifies the transparency-advocating arguments in the literature.
First, it is unclear why Durán et al consider transparency approaches insufficient to overcome epistemic opacity, if their account of opacity is the traditional one from the machine learning literature: opacity stemming from the mismatch between (1) the mathematical optimisation in high dimensionality that is characteristic of machine learning and (2) the demands of human-scale reasoning and styles of semantic interpretation.3 …
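The dimensionality mismatch at issue can be illustrated with a hypothetical comparison (my sketch, not the authors'): a linear model's weights align one-to-one with clinical features, whereas even a small neural network's parameters are too numerous, and too entangled, for human-scale semantic interpretation.

```python
# Illustrative sketch of the dimensionality mismatch; dataset and
# architecture are arbitrary assumptions chosen for this example.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)  # 30 clinical features

# Each weight of the linear model maps to one named feature.
linear = LogisticRegression(max_iter=5000).fit(X, y)
print("logistic regression weights:", linear.coef_.size)  # 30, one per feature

# The MLP's parameters do not correspond to any individual feature.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)
n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("MLP trainable parameters:", n_params)  # several thousand, feature-entangled
```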
Footnotes
Contributors AM is the sole contributor.
Funding Wellcome Trust Doctoral Studentship Grant 212708/Z/18/Z.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.