Artificial intelligence risks, attention allocation and priorities
  Aorigele Bao1,2,3,4, Yi Zeng1,2,3,4
  1. Department of Philosophy, School of Humanities, University of Chinese Academy of Sciences, Beijing, China
  2. Institute of Philosophy, Chinese Academy of Sciences, Beijing, China
  3. Center for Long-term Artificial Intelligence, Beijing, China
  4. Institute of Automation, Chinese Academy of Sciences, Beijing, China
  Correspondence to Professor Yi Zeng; yi.zeng{at}ia.ac.cn


Introduction

Jecker et al critically analysed the predominant focus on existential risk (X-Risk) in artificial intelligence (AI) ethics, advocating for a balanced communication of AI’s risks and benefits and urging serious consideration of other urgent ethical issues alongside X-Risk.1 Building on this analysis, we argue for the necessity of acknowledging the unique attention-grabbing attributes of X-Risk and leveraging these traits to foster a comprehensive focus on AI ethics.

First, we need to consider a discontinuity that the article by Jecker et al overlooks: although X-Risk is perceived as dominating the discourse, it does not, contrary to expectations, lead to a significant allocation of social resources for specific risk management and practical initiatives. In both the specific realm of ethical AI initiatives and the broader scope of AI risk management, responses to X-Risk are not prioritised in resource allocation over responses to other related risks.2

This discrepancy suggests that, in terms of actual social resource allocation, X-Risks do not receive resources commensurate with the attention they attract. Unlike other types of risk, X-Risk is perceived as a distant threat, and its media exposure does not translate into corresponding initiatives. Despite the prominence of the longtermist view in media and public discourse, the X-Risk of AI often serves merely as a cautionary note or opinion about the current situation. This suggests that concerns about an AI-driven catastrophe have not been effectively translated into practical initiatives. The gap between the attention drawn and …


Footnotes

  • Contributors AB wrote the manuscript. YZ provided guidance and reviewed and revised the manuscript. Both authors have agreed to the submission. YZ is responsible for the overall content and is the guarantor of the published manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.

Linked Articles

  • Feature article
    Nancy S Jecker, Caesar Alimsinya Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky, Anita Ho
