Abstract
Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is ‘cognitive-motor dissociation’ (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a procedure for generating case-specific probabilistic assessments that can be communicated clearly. This article constructs a possible procedure with three key elements: (1) A shift from categorical ‘responding or not’ assessments to degrees of evidence; (2) The use of patient-centred priors to convert degrees of evidence to probabilistic assessments; and (3) The use of standardised probability yardsticks to convey those assessments as clearly as possible.
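The three elements describe what is, in effect, a Bayesian pipeline: a degree of evidence (eg, a likelihood ratio from an EEG-based classifier) updates a patient-centred prior into a posterior probability, which a standardised yardstick then translates into calibrated language. The Python sketch below is a minimal illustration of that reading, not the article's own method; every number, threshold and label is an assumption, and the yardstick categories are loosely modelled on the IPCC's calibrated-language scale, which the article's "standardised probability yardsticks" may or may not match.

```python
# Minimal illustrative sketch (not the article's implementation) of the
# three-element procedure: (1) a degree of evidence, expressed here as a
# likelihood ratio from an EEG-based CMD classifier; (2) a patient-centred
# prior; (3) a standardised verbal yardstick. All numeric values below are
# assumptions for illustration only.

def posterior_probability(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds x LR."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Verbal yardstick with assumed thresholds, loosely modelled on the
# IPCC calibrated-language scale (highest threshold checked first).
YARDSTICK = [
    (0.90, "very likely responding"),
    (0.66, "likely responding"),
    (0.33, "about as likely as not to be responding"),
    (0.10, "unlikely to be responding"),
    (0.00, "very unlikely to be responding"),
]

def verbal_assessment(p: float) -> str:
    """Map a posterior probability onto a standardised verbal category."""
    for threshold, label in YARDSTICK:
        if p >= threshold:
            return label
    return YARDSTICK[-1][1]

if __name__ == "__main__":
    prior = 0.15            # assumed patient-centred prior for CMD
    likelihood_ratio = 8.0  # assumed strength of the EEG evidence
    p = posterior_probability(prior, likelihood_ratio)
    print(f"Posterior probability: {p:.2f} -> '{verbal_assessment(p)}'")
```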
- Consciousness
- Decision Making
- Ethics, Medical
- Philosophy, Medical
Data availability statement
Data sharing is not applicable as no data sets were generated and/or analysed for this study.
Footnotes
Twitter @birchlse
Contributors JB conducted all elements of the research, including writing the article, and is the guarantor.
Funding This research is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, Grant Number 851145.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.