Article info
Response
Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems
- Correspondence to Dr Andrea Ferrario, Management Technology and Economics, ETH Zürich, Zürich, Switzerland; aferrario{at}ethz.ch
Publication history
- Received April 7, 2021
- Accepted April 20, 2021
- First published May 12, 2021.
Online issue publication
November 16, 2022
Copyright information
© Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.