As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people's cooperative expectations. In the case of human-human dyads, different relationships are governed by different norms: for example, how two strangers, versus two friends or colleagues, should interact when faced with a similar coordination problem often differs. How will the rise of 'social' artificial intelligence (and ultimately, superintelligent AI) complicate people's expectations about the cooperative norms that should govern different types of relationships, whether human-human or human-AI? Do people expect AI to adhere to the same cooperative dynamics as humans when in a given social role? Conversely, will they begin to expect humans in certain types of relationships to act more like AI? Here, we consider how people's cooperative expectations may diverge between human-human and human-AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant for understanding people's relationship-specific cooperative expectations in an age of social AI, which may also forecast potential resistance towards AI systems occupying certain social roles. Finally, these data can form the basis for ethical evaluations: which relationship-specific cooperative norms we should adopt for human-AI interactions, or reinforce through responsible AI design, depends partly on empirical facts about what norms people find intuitive for such interactions (along with the costs and benefits of maintaining these). Toward the end of the paper, we discuss how these relational norms may change over time and consider the implications of this for the proposed research program.

Original publication

DOI

10.1007/s43681-024-00631-2

Type

Journal article

Journal

AI and Ethics

Publication Date

2025

Volume

5

Pages

71–80

Keywords

Human–AI interaction, Moral psychology, Norms, Relationships