Ethics of speculation
Jennifer Blumenthal-Barby
Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA
Correspondence to Dr Jennifer Blumenthal-Barby, Baylor College of Medicine, Houston, TX 77030, USA; Jennifer.blumenthal-barby@bcm.edu


In an April 2023 article in JAMA Pediatrics, ‘Life Support System for the Fetonate and the Ethics of Speculation’, authors De Bie, Flake and Feudtner critique bioethicists for practising what they call ‘speculative ethics’.

The authors refer to a 2017 article that they published on the Extra-uterine Environment of Neonatal Development (EXTEND) system, which was able to keep fetonatal (newborn, but in a fetal physiological state) lambs alive outside the parent lamb’s womb for 4 weeks. The article has been accessed almost 300 000 times and received significant media attention. It also produced an explosion in the bioethics literature on ‘the ethics of the artificial womb’. However, the authors bemoan, these ethics discussions focused largely on unrealistic use cases of the new technology, ranging from elective shortening of pregnancies to complete ectogenesis. The authors dubbed these discussions ‘a technically and developmentally naive, yet sensationally speculative, pipe dream’.

The most realistic use case of the EXTEND technology involves supporting extremely premature newborns (23–24 weeks). Other probable uses include supporting infants at later gestational ages (25–28 weeks) with pathological conditions, potentially allowing surgeries or other therapeutics to be delivered without imposing risks on pregnant patients. To be sure, these use cases raise ethical issues worthy of discussion, but these are not the issues that most bioethicists discussed.

The gap between what ethicists chose to focus on and discuss with respect to this new technology (ie, applications which were unlikely but sensational) and that which was most in need of discussion (though perhaps more mundane) is an instance of ‘the ethics of speculation’. De Bie, Flake and Feudtner’s critical point is that this sort of speculative ethics is problematic and harmful. How so? They write:

  1. ‘First, the sentinel function of ethics risks losing accuracy, and thus value, when based on more speculative and less likely future scenarios.’

  2. ‘Second, much less likely far-fetched scenarios can divert vital attention from the ethical work that should be done to evaluate the much more likely scenarios.’

  3. ‘Third, a confounded confusion can arise when the ethical concerns raised about an unlikely scenario are foisted upon a more likely scenario when this more likely scenario would not raise such concerns.’

Now, there is a delicate balance to be struck here (which the authors of the article themselves recognise). Certainly, some amount of looking ahead to future uses of new technologies, anticipating ethical pitfalls and developing strategies to avoid or mitigate them is desirable and is work that bioethicists ought to do. But too much of it (for the sake of hype, funding, attention, etc) is a loss at best and harmful at worst.

We bioethicists find ourselves in an age of new and rapidly developing technologies: brain-computer interfaces, neuroenhancements, artificial intelligence and machine learning, generative language tools, genomics, personalised medicine, space medicine, etc. Indeed, several articles in this August issue address topics in artificial intelligence and machine learning in medicine. While it is an exciting time, we must also remind ourselves of the potential harms of speculative ethics in the face of new technologies.

One way to work to prevent this is for bioethicists to collaborate and embed with scientists, clinicians and technology developers. For example, in our neuroethics research projects, our team works closely with practising clinicians (eg, psychiatrists, psychologists) and technology developers to understand what they identify as likely or probable use cases and then identify and address the associated ethical issues. This is often the first aim of our research projects. It allows us to create ‘bins’ of ethical issues to be addressed: new and unique ethical issues that are probable or likely; frequent or common ethical issues; common ethical issues potentially exacerbated by the new technology; and unexpected or unlikely ethical issues. This ‘binning’ led us to focus on addressing common ethical issues such as informed decision-making about deep brain stimulation in situations of limited data (eg, paediatric dystonia, obsessive-compulsive disorder) as well as ethical issues related to cost and access (issues exacerbated by this technology). We de-emphasised discussion of ethical issues such as brain-hacking or control by third parties (unexpected or unlikely).

A second example concerns a research project on ‘black box’ artificial intelligence/machine learning-based survival and risk prognosticators for left ventricular assist device (LVAD) therapy for patients with advanced heart failure. We collaborate closely with machine learning developers as well as heart failure specialists (physicians, nurses) and have learnt that one of the most pressing ethical issues is how to communicate this information to patients in a meaningful and honest way. Do patients need to know how the predictor/algorithm works? How much do they need to know? What do they need to know about the data that the algorithm was trained on (eg, bias, representation, its source)? If it is truly a ‘black box’, what can they know? How do we present the information (results/predictions) in a neutral manner? Should we aim to present it in a neutral manner, given that we know that patients come to the table overestimating their odds of survival without the LVAD? Is neutral presentation even possible given what we know about framing effects? Do patients need to know how well the algorithm works (eg, its margin of error, performance), and how do we communicate that in a patient-friendly way? In this case, we chose to prioritise working on these issues (among others) even though they were less seductive than ethical concerns about artificial intelligence tools like these ‘taking over’ the role of human clinicians.

Again, to be sure, I do not mean to imply that there is no role for speculative ethics in the sense of giving some thought to potential far-off use cases, but bioethics would serve itself well to stay firmly grounded in reality.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

Footnotes

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.