A transparency paradox? Investigating the impact of explanation specificity and autonomous vehicle imperfect detection capabilities on passengers
Omeiza D., Bhattacharyya R., Jirotka M., Hawes N., Kunze L.
Transparency in automated systems can be afforded through the provision of intelligible explanations. While transparency is desirable, might it lead to undesirable outcomes (such as anxiety) that outweigh its benefits? It remains unclear how the specificity of explanations (level of transparency) influences recipients, especially in autonomous driving (AD). In this work, we examined the effects of transparency mediated through varying levels of explanation specificity in AD. We first extended a data-driven explainer model with a rule-based option for explanation generation in AD, and then conducted a within-subject lab study with 39 participants in an immersive driving simulator to examine the effects of the resulting explanations. Specifically, our investigation focused on: (1) how different types of explanations (specific vs. abstract) affect passengers' perceived safety, anxiety, and willingness to take control of the vehicle when the vehicle's perception system makes erroneous predictions; and (2) the relationship between passengers' behavioural cues and their feelings during the autonomous drives. Our findings showed that abstract explanations, although vague enough to conceal all perception system detection errors, did not improve passengers' perceived safety compared with specific explanations, which exposed only a minimal number of detection errors. Anxiety levels increased when specific explanations revealed perception system detection errors (high transparency). We found no significant link between passengers' visual patterns and their anxiety levels. We advocate for explanation systems in autonomous vehicles (AVs) that can adapt to different stakeholders' transparency needs.