
When Attention Turn To Be Explanation. A Case Study in Recommender Systems

Matamoros Aragon, RA; Zoppis, I; Manzoni, SL
2023

Abstract

Recent studies in deep learning aim to understand how the intermediate representations learned by attention mechanisms motivate the decisions of a predictive model and, consequently, provide information about the model's decision-making process. Indeed, while the effectiveness of attention is well established, the ability of attention coefficients to express explanations remains a somewhat controversial issue in the literature. In this work, we empirically evaluate the possibility of using attention coefficients to obtain faithful explanations for recommender systems. In particular, after showing how attention can be used to explain recommendations, we examine the robustness of our proposal.
Publication type: abstract + poster
Keywords: Attention Mechanism; Explainability; Recommender System
Language: English
Conference: Joint 1st World Conference on eXplainable Artificial Intelligence: Late-Breaking Work, Demos and Doctoral Consortium, xAI-2023: LB-D-DC
Conference year: 2023
Editor: Longo, L
Proceedings: xAI-2023:LB-D-DC - xAI-2023 Late-breaking Work, Demos and Doctoral Consortium Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lisbon, Portugal, July 26-28, 2023
Publication year: 2023
Volume: 3554
Pages: 129-134
Matamoros Aragon, R., Zoppis, I., Manzoni, S. (2023). When Attention Turn To Be Explanation. A Case Study in Recommender Systems. In xAI-2023:LB-D-DC - xAI-2023 Late-breaking Work, Demos and Doctoral Consortium Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lisbon, Portugal, July 26-28, 2023 (pp.129-134). Aachen : CEUR-WS.
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/470998