Commentary
Concerning a seemingly intractable feature of the accountability gap
Footnotes
Contributors BL is the sole author of this commentary.
Funding The author has not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.
Diagnostic software designed by Google to perform mammogram cancer screenings produced a 9.4% reduction in false negatives and a 5.7% reduction in false positives relative to the standard margin of error among US radiologists. See McKinney et al.2