Should we be afraid of medical AI?
Ezio Di Nucci

University of Copenhagen, Copenhagen, Denmark

Correspondence to Ezio Di Nucci, University of Copenhagen; ezio@sund.ku.dk

Abstract

I analyse an argument, recently put forward by Rosalind McDougall in the Journal of Medical Ethics, according to which medical artificial intelligence (AI) represents a threat to patient autonomy. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: (1) it confuses AI with machine learning; (2) it misses machine learning’s potential for personalised medicine through big data; (3) it fails to distinguish between evidence-based advice and decision-making within healthcare. I conclude that how much and which tasks we should delegate to machine learning and other technologies within healthcare and beyond is indeed a crucial question of our time, but that in order to answer it we must carefully analyse and properly distinguish between the different systems and the different delegated tasks.

  • ethics


Footnotes

  • Funding No relevant funding.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Correction notice This article has been corrected since it was first published. The corrected final proof has now been published.

  • Patient consent for publication Not required.
