Abstract
Artificial intelligence (AI) is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of system creates both important risks and significant opportunities for promoting shared decision making. If value judgements are fixed and covert in AI systems, then we risk a shift back to more paternalistic medical care. However, if designed and used in an ethically informed way, AI could offer a potentially powerful way of supporting shared decision making. It could be used to incorporate explicit value reflection, promoting patient autonomy. In the context of medical treatment, we need value-flexible AI that can both respond to the values and treatment goals of individual patients and support clinicians to engage in shared decision making.
Keywords
- decision-making
- clinical ethics
- information technology
Footnotes
Contributors RM is the sole author.
Funding The author has not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent Not required.
Provenance and peer review Not commissioned; externally peer reviewed.