
Stoking fears of AI X-Risk (while forgetting justice here and now)
  1. Nancy S Jecker1,2,3,
  2. Caesar Alimsinya Atuire4,5,
  3. Jean-Christophe Bélisle-Pipon6,
  4. Vardit Ravitsky7,8,
  5. Anita Ho9,10
  1. Department of Bioethics & Humanities, University of Washington School of Medicine, Seattle, Washington, USA
  2. African Centre for Epistemology and Philosophy of Science, University of Johannesburg, Johannesburg, Gauteng, South Africa
  3. Centre for Bioethics, Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
  4. Centre for Tropical Medicine and Global Health, University of Oxford, Oxford, UK
  5. Department of Philosophy and Classics, University of Ghana, Accra, Ghana
  6. Faculty of Health Sciences, Simon Fraser University, Burnaby, British Columbia, Canada
  7. The Hastings Center, Garrison, New York, USA
  8. Department of Global Health and Social Medicine, Harvard University, Cambridge, Massachusetts, USA
  9. The University of British Columbia W Maurice Young Centre for Applied Ethics, Vancouver, British Columbia, Canada
  10. Bioethics Program, University of California San Francisco, San Francisco, California, USA

  Correspondence to Dr Nancy S Jecker; nsjecker@uw.edu


Introduction

We appreciate the helpful commentaries on our paper, ‘AI and the falling sky: interrogating X-Risk’.1 We agree with many of the points the commentators raise, which opened our eyes to concerns we had not previously considered. This reply focuses on the tension many commentators noted between AI’s existential risks (X-Risks) and justice here and now.

‘Justice turn’

In ‘Existential risk and the justice turn in bioethics’, Corsico frames the tension between AI X-Risk and justice here and now as part of a larger shift within bioethics.2 They think the field is increasingly turning away from the ‘big picture’ questions new technologies raise and focusing on narrower justice concerns of less significance. They compare our paper’s emphasis on justly transitioning to more AI-centred societies with the approach of environmentalists who fret about protecting humans against climate change while losing sight of the need to protect the planet and all living things. Just as Corsico doubts there is much point in pressing for ‘justice on a dead planet’, so too they question our concern with just transitions: presumably, such transitions matter little if intelligent life on Earth is destroyed. Corsico recommends that bioethicists return to big questions, like: ‘Should we develop AI at all, given AI X-Risk?’. Yet this question is increasingly moot. The genie is already out of the bottle. The Future of Life Institute’s 2023 call for a temporary pause on training AI systems more powerful than GPT-4 fell on deaf ears, in part because freely accessible source code enables anyone to train systems and create AI applications.3 The focus now must be managing the genie.

While we take AI X-Risk seriously, we also stress the importance of placing it within a wider context of multiple X-Risks (eg, nuclear weaponry, antimicrobial resistance and climate change); AI X-benefits (eg, AI-powered advances in drug development, personalised medicine and green technologies); and AI non-X-Risks (eg, algorithmic bias, privacy threats, displacing human artistry, and deep fakes, to name a few). Taking AI X-Risk seriously does not preclude taking seriously these other salient threats. It is the fallacy of false choice to assume we must take sides.

Corsico also asserts that our paper devotes too much energy to taking down the messenger and ought to focus more on the message. Apparently, they are not troubled by the fact that the demographic driving attention to AI X-Risk (tech luminaries, longtermists and effective altruists who are disproportionately privileged young males from the global north) is unrepresentative of the global population. We respectfully take issue with their position. Messages do not exist in a vacuum, untethered from any social, economic and historical context. Instead, they reflect the experiences and lives of their messengers. To illustrate, consider Hadley Duvall. She spoke at the 2024 Democratic National Convention, telling the world about the child sexual abuse she survived and her positive pregnancy test at age 12.4 Hadley shared her story to challenge laws proliferating across America that would have forced her, as a child, to carry her stepfather’s child. This narrative was herstory, fundamentally different from history. Only by listening to diverse voices can tech leaders understand and benefit their diverse constituencies.

Health equity

Like us, Fleck thinks there are plenty of AI concerns unfolding now that bioethicists must engage with. In ‘AI diagnoses terminal illness care limits: just, or just stingy?’, Fleck underscores the avalanche of costs that AI-driven advances in medicine are expected to unleash in areas such as personalised and predictive medicine.5 Fleck asks pointedly: what will happen to people buried beneath the avalanche? Fleck’s concern is slippage in society’s commitment to basic care. We agree with Fleck about the need to stress, and stress again, supporting people’s ability to lead decent lives and underwriting access to basic healthcare services. To this we would add support for other basic social goods, like education, food security and housing, that enable healthy lives.

Yet how should societies balance AI innovations against primary health services? Fleck has long been an advocate of solving healthcare allocation challenges through participatory methods, such as democratic deliberation that brings ethics questions before the people whose lives will be most affected.6 Ye emphasises participatory methods too, stressing the importance of garnering support for health equity at global, national and local levels.7 Like us, Ye presses for more attention to health equity within tech companies. The balancing act need not be the same in every society, but should ideally match the values that people in a particular society can support.

Bao and Zeng are less convinced that AI-powered products are diverting resources away from other goods.8 They find a discrepancy between the attention lavished on AI issues and the resources actually diverted to them; according to their analysis, for example, attention to AI X-Risk does not always translate into resources devoted to addressing it. In reply, we note that Bao and Zeng’s conclusions rest on empirical claims for which they furnish no evidence. Moreover, even if their claims hold up in some instances, they do not necessarily generalise. In the case of AI-driven medical products that save people’s lives, attention probably will translate into high demand from patients and families who stand to benefit. Fleck hints that technologies like CAR T-cell therapies for advanced haematological cancers, which were developed without AI, are the canary in the coal mine: while they dramatically reduced morbidity and mortality for certain patient groups, their price tags remain exorbitant. AI is forecast to usher in many medical miracles like CAR T-cells, and Fleck is right to worry about runaway costs for societies ill-prepared to address them.

Dual use

In ‘The longtermist’s sleight of hand’, Tumilty observes that AI tools can spin off in surprising ways.9 For example, AI-driven drug discovery can be converted with ease into methods for designing chemical weapons. The prospect of dual use in this area has led experts to call for greater vigilance, including a hotline for notifying authorities of lapses and misuses of the technology, and ethics training to raise awareness of the issue.10 We support evidence-based efforts to reduce the threat of dual use and agree with Tumilty that these concerns are often overlooked.

Tumilty also shines a light on other risks that might not be front and centre in X-Risk debates, including AI’s own ‘horrific consumption of energy and water’. Energy use is high both during the machine learning stage, when models consume large volumes of data, and at the inference stage, when they test and apply what they have learned. In 2024, the World Economic Forum reported that the computational power required for AI is doubling roughly every 100 days and urged ‘a meticulously planned strategy to align AI practice with sustainability’.11 The group also cautioned that by 2026 overall electricity consumption from data centres, AI and cryptocurrency could double, and it recommended enhanced regulation and efficiency to curb energy use in these sectors. Tumilty’s examples illustrate how heavily AI bears on immediate justice concerns. We support prioritising efforts to reduce these risks over risks that are more remote.

Conclusion

In summary, when stoking fears of AI X-Risk diverts attention from justice problems here and now, we should not jump on the bandwagon but should instead situate X-Risk along a continuum of many risks. While our reply to the commentaries puts a premium on serious justice concerns happening now, it does not preclude due consideration of AI X-Risk.

References

Footnotes

  • X @profjecker, @atuire

  • Correction notice In September 2024, this commentary was resupplied under an open access CC-BY-NC licence.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
