
AI-enhanced reasoning enables robots to produce detailed accounts of their own situated behaviour as well as the behaviour of other people. This capability is currently employed by robot designers to achieve transparency, build trust, and enhance robots' social and communicative capabilities. Furthermore, robots may be designed to resemble humans in both their physical appearance and their behaviour, an approach intended to facilitate more effective interactions with people. In this article we identify and examine some of the ethical, social and legal implications of these capabilities for the investigation of robot accidents. We consider two aspects in particular. The first is the role of robots as subjects of testimony regarding an incident in which they are directly or indirectly involved; this can be described as robots acting as witnesses. The second is the role of robots as objects of human testimony; this can be described as robots being witnessed.

Type

Conference paper

Publisher

IOS Press

Publication Date

18/10/2024

Keywords

human-like appearance and behaviour, robot accident investigation, robots as being witnessed, explainable AI, reasoning capabilities, robots as witnesses