Medical AI: is trust really the issue?
Jakob Thrane Mainz
Aarhus Universitet, Aarhus, Denmark
Correspondence to Dr Jakob Thrane Mainz, Aarhus Universitet, Aarhus, Midtjylland, Denmark; jakob-mainz{at}hotmail.com

Abstract

I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, this generates certain moral obligations on behalf of the clinician to do what she is entrusted to do. I make three objections to Hatherley’s claims: (1) At least one philosophical account of interagent trust implies that medical AI is capable of being trustworthy. (2) Even if this account should ultimately be rejected, it does not matter much, because what we mostly care about is that medical AI is reliable. (3) It is false that trust in itself generates moral obligations on behalf of the trustee.

  • Ethics - Medical


Footnotes

  • Funding This study was funded by Carlsbergfondet (CF20-0257).

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

Linked Articles

  • Original research
    Joshua James Hatherley
