Abstract
I analyse an argument, recently put forward by Rosalind McDougall in the Journal of Medical Ethics, according to which medical artificial intelligence (AI) represents a threat to patient autonomy. The argument draws on the case of IBM Watson for Oncology to claim that such technologies risk disregarding patients' individual values and wishes. I find three problems with this argument: (1) it conflates AI with machine learning; (2) it overlooks machine learning's potential for personalised medicine through big data; (3) it fails to distinguish between evidence-based advice and decision-making within healthcare. I conclude that how much, and which, tasks we should delegate to machine learning and other technologies within healthcare and beyond is indeed a crucial question of our time, but that answering it requires careful analysis that properly distinguishes between the different systems and the different tasks delegated to them.