Nietzsche claimed that once we know why to live, we will suffer almost any how.1 Artificial intelligence (AI) is used widely for the how, but Ferrario et al now advocate using AI for the why.2 Here, I offer my doubts on practical grounds but foremost on ethical ones. Practically, individuals already vacillate over the why, wavering with time and circumstance. That AI could provide prosthetics (or orthotics) for human agency feels unrealistic here, not least because its ‘answers’ would be largely unverifiable. Ethically, the concern is that AI stands to frack our humanity. We form a fragile ecosystem of ethical subjects, our responsiveness to others’ suffering enabled by our own. To deliberate together for incapacitated others is among those solemn privileges that verify our humanity. Having AI mine these delicate pain-forests risks treating our suffering as the new oil—to be extracted and exploited, but beyond our vision and at our cost.
Let us briefly develop each idea, starting with the how/why distinction. This is palpable even for more prosaic questions, such as how or why to drive. The former admits of increasingly sophisticated technological fixes and nudges; the latter often remains deeply particular and personal. How much greater, then, the difference between …
Contributors EJ is the sole author and guarantor.
Funding The author has not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.