RT Journal Article
SR Electronic
T1 Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned?
JF Journal of Medical Ethics
JO J Med Ethics
FD BMJ Publishing Group Ltd and Institute of Medical Ethics
SP 581
OP 585
DO 10.1136/medethics-2020-107102
VO 48
IS 9
A1 Chang Ho Yoon
A1 Robert Torrance
A1 Naomi Scheinerman
YR 2022
UL http://jme.bmj.com/content/48/9/581.abstract
AB We argue why interpretability should have primacy alongside empiricism for several reasons: first, if machine learning (ML) models are beginning to render some of the high-risk healthcare decisions instead of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions like pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ ML recommendations are considered to be on a par with human instruction in specific contexts; third, explainable algorithms may be more amenable to the ascertainment and minimisation of biases, with repercussions for racial equity as well as scientific reproducibility and generalisability. We conclude with some reasons for the ineludible importance of interpretability, such as the establishment of trust, in overcoming perhaps the most difficult challenge ML will face in a high-stakes environment like healthcare: professional and public acceptance. There are no data in this work.