There is much to learn from Durán and Jongsma's paper.1 One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence (AI). In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which physicians then need to weigh. Hence, while the implementation of AI systems might give rise to ethical issues (for example, overtreatment, defensive medicine or paternalism2), the issue at their heart is an epistemic one: how can physicians know whether to trust decisions made by AI systems? Accordingly, various studies examining the interaction of physicians with AI systems have shown that, when unable to evaluate their trustworthiness, novice physicians in particular become over-reliant on algorithmic support and are ultimately led astray by incorrect decisions.3–5
This leads to a second insight from the paper, namely that even if some (deep learning-based) AI system happens to be opaque, it is still not built on the moon. To assess its trustworthiness, AI developers and physicians have various sorts of higher-order evidence at hand. Most importantly, …
Footnotes
Funding TG is supported by the Deutsche Forschungsgemeinschaft (BE5601/4-1; Cluster of Excellence ‘Machine Learning—New Perspectives for Science’, EXC 2064, project number 390727645).
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.
Note that while Durán and Jongsma emphasise that physicians should not blindly defer to algorithmic decisions because of value choices, my account is confined to the epistemic part of their paper.