RT Journal Article
SR Electronic
T1 Computer knows best? The need for value-flexibility in medical AI
JF Journal of Medical Ethics
JO J Med Ethics
FD BMJ Publishing Group Ltd and Institute of Medical Ethics
SP medethics-2018-105118
DO 10.1136/medethics-2018-105118
A1 Rosalind J McDougall
YR 2018
UL http://jme.bmj.com/content/early/2018/11/22/medethics-2018-105118.abstract
AB Artificial intelligence (AI) is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of system creates both important risks and significant opportunities for promoting shared decision making. If value judgements are fixed and covert in AI systems, then we risk a shift back to more paternalistic medical care. However, if designed and used in an ethically informed way, AI could offer a potentially powerful way of supporting shared decision making. It could be used to incorporate explicit value reflection, promoting patient autonomy. In the context of medical treatment, we need value-flexible AI that can both respond to the values and treatment goals of individual patients and support clinicians to engage in shared decision making.