Cabitza, F., Campagner, A., Malgieri, G., Natali, C., Schneeberger, D., Stoeger, K., et al. (2023). Quod erat demonstrandum? - Towards a typology of the concept of explanation for the design of explainable AI. EXPERT SYSTEMS WITH APPLICATIONS, 213(1 March 2023) [10.1016/j.eswa.2022.118888].

Quod erat demonstrandum? - Towards a typology of the concept of explanation for the design of explainable AI

Cabitza F.; Campagner A.; Natali C.
2023

Abstract

In this paper, we present a fundamental framework for defining different types of explanations of AI systems and the criteria for evaluating their quality. Starting from a structural view of how explanations can be constructed, i.e., in terms of an explanandum (what needs to be explained), multiple explanantia (explanations, clues, or parts of information that explain), and a relationship linking explanandum and explanantia, we propose an explanandum-based typology and point to other possible typologies based on how explanantia are presented and how they relate to explananda. We also highlight two broad and complementary perspectives for defining possible quality criteria for assessing explainability: epistemological and psychological (cognitive). These definition attempts aim to support the three main functions that we believe should attract the interest and further research of XAI scholars: clear inventories, clear verification criteria, and clear validation methods.
Journal article - Scientific article
Artificial intelligence; Explainable AI; Explanations; Machine learning; Taxonomy; XAI;
Language: English
Available online: 24 Sep 2022
Publication year: 2023
Volume: 213
Issue date: 1 March 2023
Article number: 118888
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/394389
Citations
  • Scopus: 31
  • Web of Science (ISI): 24