Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4
Michael Balas1, Jordan Joseph Wadden2,3, Philip C Hébert1,4, Eric Mathison5, Marika D Warren6, Victoria Seavilleklein7, Daniel Wyzynski8, Alison Callahan9, Sean A Crawford10, Parnian Arjmand11, Edsel B Ing12

1 Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
2 Centre for Clinical Ethics, Unity Health Toronto, Toronto, Ontario, Canada
3 Clinical Ethics, Scarborough Health Network, Scarborough, Ontario, Canada
4 Department of Family and Community Medicine, University of Toronto, Toronto, Ontario, Canada
5 Philosophy, University of Toronto, Toronto, Ontario, Canada
6 Bioethics, Dalhousie University, Halifax, Nova Scotia, Canada
7 Clinical Ethics Service, Alberta Health Services, Edmonton, Alberta, Canada
8 Office of Health Ethics, London Health Sciences Centre, London, Ontario, Canada
9 Ethics Department, Ontario Shores Centre for Mental Health Sciences, Whitby, Ontario, Canada
10 Division of Vascular Surgery, Department of Surgery, University Health Network, Toronto, Ontario, Canada
11 Mississauga Retina Institute, Toronto, Ontario, Canada
12 Ophthalmology, University of Alberta, Edmonton, Alberta, Canada

Correspondence to Michael Balas, University of Toronto, Toronto, Canada; 1michaelbalas@gmail.com

Abstract

Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight ethical vignettes.

The main outcomes measured were the relevance, reasoning, depth, technical clarity, non-technical clarity and acceptability of GPT-4’s responses; the readability of the responses was also assessed. Across these six metrics, the overall mean score was 4.1/5. GPT-4 was rated highest on technical clarity (4.7/5) and non-technical clarity (4.4/5), and lowest on depth (3.8/5) and acceptability (3.8/5). Inter-rater reliability was poor to moderate, characterised by an intraclass correlation coefficient of 0.54 (95% CI: 0.30 to 0.71). Based on panellist feedback, GPT-4 was able to identify and articulate key ethical issues but struggled to appreciate the nuanced aspects of ethical dilemmas and misapplied certain moral principles.
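The abstract does not state which ICC form or software was used to quantify inter-rater reliability. As a minimal sketch, assuming a two-way random-effects, absolute-agreement, single-rater model (ICC(2,1) in the Shrout and Fleiss taxonomy) and a complete ratings matrix with one row per rated response and one column per panellist, the coefficient could be computed as follows; the function name, data layout and example values are illustrative assumptions, not the authors’ method.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Illustrative ICC(2,1): two-way random effects, absolute agreement,
    single rater (Shrout & Fleiss 1979). Assumed form; the paper does not
    specify which ICC variant was used.

    ratings: (n_targets, k_raters) matrix of scores, no missing values.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target (response) means
    col_means = ratings.mean(axis=0)   # per-rater (panellist) means

    # Two-way ANOVA sums of squares and mean squares
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: 8 responses rated by 6 panellists on a 1-5 scale
rng = np.random.default_rng(0)
scores = rng.integers(3, 6, size=(8, 6)).astype(float)
print(f"ICC(2,1) = {icc2_1(scores):.2f}")
```

In practice an established package (e.g., pingouin’s intraclass_corr) would be the more robust route, since it computes all six Shrout and Fleiss ICC forms along with the accompanying 95% CIs, such as the one reported above.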

This study reveals limitations in GPT-4’s ability to appreciate the depth and nuance of real-world ethical dilemmas, particularly those requiring a thorough understanding of relational complexities and context-specific values. Ongoing evaluation of LLM capabilities within medical ethics remains paramount, and further refinement is needed before these models can be used effectively in clinical settings.

  • Decision-making
  • Ethics, Medical
  • Information Technology

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

Footnotes

  • Twitter @theMichaelBalas, @BioethicsBeau

  • Contributors MB contributed to study design, data collection and analysis, manuscript write-up and manuscript revisions; JJW contributed to study design, ethical vignette creation, manuscript write-up and manuscript revisions; PCH, EM, MDW, VS, DW and AC served as panel members and response raters and contributed to manuscript revisions; SAC, PA and EBI contributed to manuscript revisions. MB is responsible for the overall content as the guarantor.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.