Abstract
It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this 'the disclosure thesis.' Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue that each of these four arguments is unconvincing and, therefore, that the disclosure thesis ought to be rejected. I suggest that mandating disclosure may even risk harming patients by providing stakeholders with a way to avoid accountability for harm that results from improper applications or uses of these systems.
- Informed Consent
- Decision Making
- Ethics
- Information Technology
- Risk Assessment
Data availability statement
No data are available.