Article info
Original research
Limits of trust in medical AI
- Correspondence to Joshua James Hatherley, School of Historical, Philosophical, and International Studies, Monash University, Clayton, VIC 3194, Australia; joshua.hatherley@monash.edu
Publication history
- Received November 1, 2019
- Revised February 24, 2020
- Accepted March 11, 2020
- First published March 27, 2020
Online issue publication
June 29, 2020
Copyright information
© Author(s) (or their employer(s)) 2020. No commercial re-use. See rights and permissions. Published by BMJ.