This paper argues that the headline-grabbing nature of existential risk (X-Risk) diverts attention away from immediate artificial intelligence (AI) concerns, including the fair distribution of AI's risks and benefits and a just transition towards AI-centred societies. Section I introduces a working definition of X-Risk, considers its likelihood and explores possible subtexts. It highlights conflicts of interest that arise when tech luminaries lead ethics debates in the public square. Section II flags AI ethics concerns brushed aside by focusing on X-Risk, including AI existential benefits (X-Benefits), non-AI X-Risk and AI harms occurring now. Taking the entire landscape of X-Risk into account requires considering how big risks compare, combine and rank relative to one another. As we transition towards more AI-centred societies, which we, the authors, would like to be fair, we urge embedding fairness in the transition process, especially with respect to groups historically disadvantaged and marginalised. Section III concludes by proposing a wide-angle lens that takes X-Risk seriously alongside other urgent ethics concerns.

Original publication

DOI: 10.1136/jme-2023-109702

Type: Journal article

Publication Date: 23 December 2024

Volume: 50

Pages: 811-817

Total pages: 6

Keywords: Cultural Diversity, Ethics, Information Technology, Minority Groups, Resource Allocation