Concerning a seemingly intractable feature of the accountability gap
  1. Benjamin Lang
  1. Bioethics, New York University College of Global Public Health, New York, NY 10039, USA
  1. Correspondence to Benjamin Lang, Bioethics, New York University College of Global Public Health, New York, NY 10039, USA; bhl7144{at}


The authors put forward an interesting response to detractors of black box algorithms. According to the authors, what is of ethical relevance for medical artificial intelligence is not so much its transparency, but rather its reliability as a process capable of producing accurate and trustworthy results. The implications of this view are twofold. First, it is permissible to implement a black box algorithm in clinical settings, provided the algorithm’s epistemic authority is tempered by physician expertise and consideration of patient autonomy. Second, physicians are not expected to possess exhaustive knowledge or understanding of the algorithmic computation by which they verify or augment their medical opinions. The potential of these algorithms to improve diagnostic and procedural accuracy, alongside the quality of patient decision-making, is undoubtedly a boon to modern medicine, but blind deference to them is neither feasible nor responsible, as several of the logistical and ethical quagmires noted by the authors remain inherent in algorithmic software.

I concur …

  • Contributors BL is the sole author of this commentary.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.

  • Diagnostic software designed by Google to perform mammogram cancer screenings produced a 9.4% reduction in false negatives and a 5.7% reduction in false positives relative to the standard margin of error among US radiologists. See McKinney et al.2
