No, we shouldn’t be afraid of medical AI; it involves risks and opportunities
Abstract
In contrast to Di Nucci’s characterisation, my argument is not a technoapocalyptic one. The view I put forward is that systems like IBM’s Watson for Oncology create both risks and opportunities from the perspective of shared decision-making. In this response, I address the issues that Di Nucci raises and highlight the importance of bioethicists engaging critically with these developing technologies.
- information technology
- decision-making