Malandri, L., Mercorio, F., Mezzanzanica, M., Nobani, N. (2023). ConvXAI: a System for Multimodal Interaction with Any Black-box Explainer. COGNITIVE COMPUTATION, 15(2), 613-644 [10.1007/s12559-022-10067-7].

ConvXAI: a System for Multimodal Interaction with Any Black-box Explainer

Malandri, Lorenzo; Mercorio, Fabio; Mezzanzanica, Mario; Nobani, Navid
2023

Abstract

Several studies have addressed the importance of context and users’ knowledge and experience in quantifying the usability and effectiveness of the explanations generated by explainable artificial intelligence (XAI) systems. However, to the best of our knowledge, no component-agnostic system that accounts for this need has yet been built. This paper describes an approach called ConvXAI, which can create a dialogical multimodal interface for any black-box explainer by considering the knowledge and experience of the user. First, we formally extend the state-of-the-art conversational explanation framework by introducing clarification dialogue as an additional dialogue type. We then implement our approach as an off-the-shelf Python tool. To evaluate our framework, we performed a user study including 45 participants divided into three groups based on their level of technology use and job function. Experimental results show that (i) different groups perceive explanations differently; (ii) all groups prefer textual explanations over graphical ones; and (iii) ConvXAI provides clarifications that enhance the usefulness of the original explanations.
Journal article - Scientific article
Black-box model; Conversational system; Explainable artificial intelligence; Human–AI interaction
English
7 November 2022
2023
Volume: 15
Issue: 2
Pages: 613-644
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/396170
Citations
  • Scopus 11
  • Web of Science 7