On the ethics of algorithmic decision-making in healthcare
  1. Thomas Grote1,2,
  2. Philipp Berens3
  1. 1 Ethics and Philosophy Lab; Cluster of Excellence: "Machine Learning: New Perspectives for Science", University of Tübingen, Tübingen, Germany
  2. 2 International Center for Ethics in the Sciences and Humanities (IZEW), University of Tübingen, Tübingen, Germany
  3. 3 Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
  1. Correspondence to Thomas Grote, Ethics and Philosophy Lab, Cluster of Excellence: "Machine Learning: New Perspectives for Science", University of Tübingen, Tübingen 72076, Germany; thomas.grote{at}


In recent years, a plethora of high-profile scientific publications has reported on machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has sparked interest in deploying such algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.

  • machine learning
  • autonomy
  • paternalism
  • decision-making
  • uncertainty

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial.


  • Contributors Thomas Grote is the main author of the article. Philipp Berens wrote the second section and commented on the paper’s other parts.

  • Funding This research was supported by the German Research Foundation (DFG, Excellence Cluster ‘Machine Learning—New Perspectives for Science’, EXC 2064, project no. 390727645; BE5601/4-1). Philipp Berens was additionally supported by the German Ministry for Education and Research (BMBF, 01GQ1601; 01IS18039A).

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.