Transparent AI: reliabilist and proud
  1. Abhishek Mishra
  1. Uehiro Centre for Practical Ethics, University of Oxford, Oxford, Oxfordshire, UK
  1. Correspondence to Abhishek Mishra, University of Oxford, Oxford OX1 2JD, Oxfordshire, UK; abhishek.vsm{at}gmail.com


Durán et al argue in ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’1 that traditionally proposed solutions to make black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately not sufficient to establish their overall trustworthiness. This is because transparency procedures currently employed, such as the use of an interpretable predictor (IP),2 cannot fully overcome the opacity of such models. Computational reliabilism (CR), an alternative approach to adjudicating trustworthiness that goes beyond transparency solutions, is argued to be more promising. CR can bring the benefits of traditional process reliabilism in epistemology to bear on this problem of model trustworthiness.

Durán et al’s explicitly reliabilist epistemology to assess the trustworthiness of black box models is a timely addition to current transparency-focused approaches in the literature. Their delineation of the epistemic from the ethical also serves the debate by clarifying the nature of the different problems. However, their overall account underestimates the epistemic value of certain transparency-enabling approaches by conflating different types of opacity and also oversimplifies transparency-advocating arguments in the literature.

First, it is unclear why Durán et al consider transparency approaches as insufficient to overcome epistemic opacity, if their account of opacity is the traditional one from the machine learning literature: opacity stemming from the mismatch between (1) mathematical optimisation in high dimensionality that is characteristic of machine learning and (2) the demands of human-scale reasoning and styles of semantic interpretation.3 …


Footnotes

  • Contributors AM is the sole contributor.

  • Funding Wellcome Trust Doctoral Studentship Grant 212708/Z/18/Z.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
