I agree with Jecker et al that “the headline-grabbing nature of existential risk (X-risk) diverts attention away from immediate artificial intelligence (AI) threats…”1 Focusing on speculative, very long-term risks associated with AI is both ethically distracting and ethically dangerous, especially in a healthcare context. More specifically, AI in healthcare is generating healthcare justice challenges that are real, imminent and pervasive. These challenges deserve immediate ethical attention, far more than any X-risk issues in the distant future.
Almost 50 years ago, John Knowles edited a volume titled Doing Better and Feeling Worse: Health in the United States. We are ‘doing better’ because numerous advances in medical technologies are saving more lives and improving the quality of our lives. But we are ‘feeling worse’ because the additional costs are unsustainable, are threatening funding for other social goods and are increasing injustices regarding the allocation of those resources. This is precisely the situation we are faced with regarding AI in medicine today. Exploding healthcare costs have generated increasing pressure to control those costs, often with unjust consequences. AI is touted as a critical mechanism for controlling healthcare costs, often with little thought given to justice-relevant consequences.
Consider, for example, …
Footnotes
Contributors LMF is solely responsible for this study.
Funding The author has not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.