Abstract
The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence (AI)-driven technologies, comprises 10 principles that outline a gold standard of ethical conduct for AI developers and implementers within the National Health Service. Given the importance of trust in medicine, in this essay I evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of rationally justified trust, which holds that trust should rest on sound epistemological bases, and a principle of value-based trust, which views trust in an all-things-considered manner. I argue that rationally justified trust is largely infeasible for trusting AI, owing to AI’s complexity and inexplicability. By contrast, I show that value-based trust is more feasible, as it is the kind of trust individuals intuitively employ; it also complies better with Principle 1. I therefore conclude by suggesting that the Code of Conduct endorse the principle of value-based trust more explicitly.
- information technology
- ethics
- philosophy of medicine
Data availability statement
No data are available.
Footnotes
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.