TY  - JOUR
T1  - When the frameworks don’t work: data protection, trust and artificial intelligence
JF  - Journal of Medical Ethics
JO  - J Med Ethics
SP  - 213
LP  - 214
DO  - 10.1136/medethics-2022-108263
VL  - 48
IS  - 4
AU  - Zoë Fritz
Y1  - 2022/04/01
UR  - http://jme.bmj.com/content/48/4/213.abstract
N2  - With new technologies come new ethical (and legal) challenges. Often, we can apply previously established principles, even though it may take some time to fully understand the detail of the new technology, or the questions that arise from it. The International Commission on Radiological Protection, for example, was founded in 1928 and has based its advice on balancing the radiation exposure associated with X-rays and CT scans against the diagnostic benefits of the new investigations. It has regularly updated its advice as evidence has accumulated and technologies have changed,1 and has been able to extrapolate from well-established ethical principles. Other new technologies lend themselves less well to off-the-peg ethical solutions. Several articles in this edition address the ethical challenges associated with the use of artificial intelligence (AI) in medicine. Although multiple ethical codes and guidelines have been written on the use and development of AI, Hagendorff noted that many of them reiterated a ‘deontologically oriented, action-restricting ethic based on universal abidance of principles and rules’.2 Applying pre-existing ethical frameworks to artificial intelligence is problematic for several reasons. In particular, AI has two characteristics which are very different from the current clinical practice on which traditional medical ethics are based: the so-called ‘black box’ of deep learning, whereby a deep neural network is trained to iteratively adapt to make …
ER  - 