Introduction
Jecker et al critically analysed the predominant focus on existential risk (X-Risk) in artificial intelligence (AI) ethics, advocating balanced communication of AI’s risks and benefits and urging serious consideration of other pressing ethical issues alongside X-Risk.1 Building on their analysis, we argue that the unique attention-grabbing character of X-Risk should be acknowledged and leveraged to foster a comprehensive focus on AI ethics.
First, we consider a discontinuity overlooked in the article by Jecker et al: although X-Risk is perceived as dominating the discourse, it does not, contrary to expectations, attract a commensurate allocation of social resources for concrete risk management and practical initiatives. In both the specific realm of ethical AI initiatives and the broader scope of AI risk management, responses to X-Risk are not prioritised in resource allocation over responses to other related risks.2
This discrepancy suggests that, in actual social resource allocation, X-Risks do not receive resources commensurate with the attention they attract. Unlike other types of risk, X-Risk is perceived as a distant threat, so its media exposure is not matched by concrete initiatives. Despite the prominence of the longtermist view in media and public discourse, the X-Risk of AI often serves merely as a cautionary note or commentary on the current situation, suggesting that concerns about an AI-driven catastrophe have not been effectively translated into practical initiatives. The gap between the attention drawn and …
Footnotes
Contributors AB wrote the manuscript. YZ provided guidance and reviewed and revised the manuscript. Both authors have agreed to the submission. YZ is responsible for the overall content and is the guarantor of the published manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.