Explainable AI for Text Classification: Lessons from a Comprehensive Evaluation of Post Hoc Methods

Cesarini, Mirko; Malandri, Lorenzo; Pallucchini, Filippo; Seveso, Andrea; Xing, F.
2024

Abstract

This paper addresses the notable gap in evaluating eXplainable Artificial Intelligence (XAI) methods for text classification. While existing frameworks focus on assessing XAI in areas such as recommender systems and visual analytics, a comprehensive evaluation is missing. Our study surveys and categorises recent post hoc XAI methods according to their scope of explanation and output format. We then conduct a systematic evaluation, assessing the effectiveness of these methods across varying scopes and levels of output granularity using a combination of objective metrics and user studies. Key findings reveal that feature-based explanations exhibit higher fidelity than rule-based ones. While global explanations are perceived as more satisfying and trustworthy, they are less practical than local explanations. These insights enhance understanding of XAI in text classification and offer valuable guidance for developing effective XAI systems, enabling users to evaluate each explainer's pros and cons and select the most suitable one for their needs.
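The abstract compares explainers on fidelity, among other criteria. As a point of reference, one common way to operationalise fidelity for feature-based text explanations is a deletion test: remove the tokens the explainer marks as most important and measure the drop in the model's predicted probability for the original class. The sketch below illustrates that idea only; the function names, the fixed k, and the whitespace tokenisation are illustrative assumptions and do not reproduce the paper's exact evaluation protocol.

```python
# Minimal sketch of a deletion-based fidelity check for a feature-based
# text explainer. All names (deletion_fidelity, predict_proba, k=5) are
# illustrative assumptions, not the paper's protocol.
import numpy as np

def deletion_fidelity(predict_proba, tokens, importances, k=5):
    """Drop in the predicted-class probability after deleting the k tokens
    the explainer rates as most important; larger drops suggest the
    explanation is more faithful to the underlying classifier."""
    original = " ".join(tokens)
    p_orig = predict_proba([original])[0]          # class probabilities
    label = int(np.argmax(p_orig))                 # model's predicted class
    # Indices of the k tokens the explainer deems most important.
    top_k = set(np.argsort(np.asarray(importances))[::-1][:k])
    perturbed = " ".join(t for i, t in enumerate(tokens) if i not in top_k)
    p_pert = predict_proba([perturbed])[0]
    return p_orig[label] - p_pert[label]
```

With a scikit-learn text pipeline, `predict_proba` would simply be `pipeline.predict_proba`; `tokens` and `importances` come from the explainer under evaluation (e.g., per-token attribution scores).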
Journal article - Review Essay
Explainable AI; Human-computer interaction; Interpretability; Text classification; XAI evaluation;
English
6 Aug 2024
2024
Open access
Cesarini, M., Malandri, L., Pallucchini, F., Seveso, A., Xing, F. (2024). Explainable AI for Text Classification: Lessons from a Comprehensive Evaluation of Post Hoc Methods. COGNITIVE COMPUTATION [10.1007/s12559-024-10325-w].
Files in this record:

File: Cesarini-2024-CognComp-VoR.pdf
Access: Open access
Description: CC BY 4.0. This article is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 807.06 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/509524
Citations
  • Scopus: 0
  • Web of Science: 0