Introduction
Most jurisdictions require a patient to consent to any medical intervention. Clinicians ask a patient, ‘Given the pain and distress associated with our intervention and the predicted likelihood of this best-case outcome, do you want to accept the treatment?’
When a patient is incapable of deciding, clinicians may ask people who know the patient to say what the patient would decide; this is substituted judgement. In contrast, asking the same people to say how the person would make the decision is substituted interests.1 The UK Mental Capacity Act 2005 uses the latter approach, and my comments appertain to that approach.
When a patient lacks capacity, the question facing clinicians is, ‘Given the likely outcome of this treatment plan for the patient’s life in the long term, would the patient decide to accept the plan?’ The answer depends on the patient’s wishes, values and the other factors they would use when deciding.
The proposal
The proposed development of an algorithm involves the following:2
Studying choices made by healthy people about a variety of interventions, given quantified disbenefits and outcomes.
Using information about the people studied, such as age, cultural factors, previous experience, etc, to discover what influence recordable data items have on …
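The proposal above amounts to a statistical model: learn, from surveyed choices, how recordable traits shape the probability that a person would accept a given intervention. The commentary does not specify any model, so the following is a hypothetical sketch only; every feature name, weight and threshold here is an illustrative assumption, not part of the proposal.

```python
# Hypothetical sketch of the proposed algorithm. None of these features,
# weights or values come from the commentary itself; they illustrate how
# surveyed choices plus recordable traits could feed a predictive model.
from dataclasses import dataclass
import math


@dataclass
class Respondent:
    age: int
    prior_experience: bool  # previous experience of similar treatment
    burden: float           # quantified disbenefit of the intervention, 0-1
    benefit: float          # predicted quality of the best-case outcome, 0-1


def predict_acceptance(r: Respondent, weights: dict) -> float:
    """Logistic model of the probability that a person with these traits
    would accept the treatment (illustrative weights only)."""
    z = (weights["intercept"]
         + weights["age"] * r.age
         + weights["prior"] * r.prior_experience
         + weights["burden"] * r.burden
         + weights["benefit"] * r.benefit)
    return 1.0 / (1.0 + math.exp(-z))


# Hand-set illustrative weights: predicted benefit raises acceptance,
# burden lowers it. A real algorithm would fit these to survey data.
weights = {"intercept": 0.0, "age": -0.01, "prior": 0.5,
           "burden": -3.0, "benefit": 3.0}

r = Respondent(age=70, prior_experience=True, burden=0.2, benefit=0.8)
p = predict_acceptance(r, weights)
```

Any real implementation would also need the ethical safeguards the commentary goes on to discuss: the output is a population-level probability, not the incapacitated patient’s own decision.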
Footnotes
Twitter @derickwaderehab
Correction notice Since first publication, the title of this commentary has been updated.
Contributors This is my own work and there are no other authors.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.