Trust criteria for artificial intelligence in health: normative and epistemic considerations
Kristin Kostick-Quenet (1), Benjamin H Lang (1,2), Jared Smith (1), Meghan Hurley (1), Jennifer Blumenthal-Barby (1)

(1) Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA
(2) Department of Philosophy, University of Oxford, Oxford, Oxfordshire, UK

Correspondence to Dr Kristin Kostick-Quenet, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA; kristin.kostick{at}bcm.edu

Abstract

Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high-stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool’s computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can lead to over-reliance or under-reliance on algorithmic tools, with significant consequences for patient safety and health outcomes. It is thus important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real settings. As part of a 5-year, multi-institutional study funded by the Agency for Healthcare Research and Quality, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily epistemic in nature, focused on the accuracy and validity of AI/ML estimates. Trust evaluations considered the nature, integrity and relevance of training data rather than the computational nature of the algorithms themselves, suggesting a need to distinguish ‘source’ from ‘functional’ explainability. To a lesser extent, trust criteria were also relational (endorsement from others) and sometimes based on personal beliefs and experience. We discuss implications for promoting appropriate and responsible trust calibration for clinical decision-making using AI/ML.

  • Decision Making
  • Ethics - Research
  • Quality of Health Care

Data availability statement

Data are available upon reasonable request.


Footnotes

  • X @kkostick

  • Contributors KK-Q wrote the initial draft, which was reviewed by all other authors, who contributed additions and modifications. JB-B, KK-Q and BHL collaboratively developed the codebook. Interviews were conducted by BHL and MH. KK-Q, BHL, MH and JS contributed to coding. All authors reviewed and approved this manuscript, and KK-Q is the guarantor of this work.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
