Back to the Feature: A Neural-Symbolic Perspective on Explainable AI

Campagner, A.; Cabitza, F.
2020

Abstract

We discuss a perspective aimed at making black-box models more eXplainable, within the eXplainable AI (XAI) strand of research. We argue that the traditional end-to-end learning approach used to train Deep Learning (DL) models does not fit the tenets and aims of XAI. Going back to the idea of hand-crafted feature engineering, we suggest a hybrid DL approach to XAI: instead of employing end-to-end learning, we propose using DL for the automatic detection of meaningful, hand-crafted, high-level symbolic features, which are then used by a standard, more interpretable learning model. We exemplify this hybrid learning model in a proof of concept, based on the recently proposed Kandinsky Patterns benchmark, that focuses on the symbolic learning part of the pipeline by using both Logic Tensor Networks and interpretable rule ensembles. After showing that the proposed methodology delivers highly accurate and explainable models, we discuss potential implementation issues and future directions to be explored.
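To make the pipeline described in the abstract concrete, here is a minimal, self-contained sketch; this is a hypothetical illustration, not the authors' implementation. The neural detection stage is stubbed out with pre-extracted symbolic attributes, a shallow scikit-learn decision tree stands in for the interpretable rule-ensemble / Logic Tensor Network stage used in the paper, and the toy "contains a red triangle" concept, the attribute encodings, and all variable names are assumptions made for illustration.

```python
# Sketch of the hybrid pipeline from the abstract (hypothetical
# illustration, NOT the authors' code): a neural detector would
# extract high-level symbolic attributes from raw Kandinsky figures;
# here that stage is stubbed with randomly generated attributes, and
# a shallow decision tree stands in for the interpretable symbolic
# learner (the paper uses Logic Tensor Networks and rule ensembles).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stubbed output of the DL stage: symbolic attributes for two objects
# per figure. Toy encodings chosen for this sketch:
# shapes 0=square, 1=circle, 2=triangle; colors 0=red, 1=yellow, 2=blue.
n = 500
X_sym = rng.integers(0, 3, size=(n, 4))  # [shape1, color1, shape2, color2]

# A toy Kandinsky-style concept: "the figure contains a red triangle".
y = (((X_sym[:, 0] == 2) & (X_sym[:, 1] == 0))
     | ((X_sym[:, 2] == 2) & (X_sym[:, 3] == 0))).astype(int)

# Interpretable symbolic stage: the learned rules over the detected
# symbolic features can be read off directly, unlike the internals of
# an end-to-end black-box model.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_sym, y)
print(export_text(clf, feature_names=["shape1", "color1", "shape2", "color2"]))
print("training accuracy:", clf.score(X_sym, y))
```

A single tree is used here only because its printed rule list makes the "interpretable output" point immediately visible; in the paper the symbolic stage is a Logic Tensor Network or a rule ensemble rather than one decision tree.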
Type: Book chapter or essay
Keywords: Deep Learning; Explainable AI; Kandinsky Patterns; Symbolic machine learning
Language: English
Book: Machine Learning and Knowledge Extraction. 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Dublin, Ireland, August 25–28, 2020, Proceedings
Editors: Holzinger, A.; Kieseberg, P.; Tjoa, A.M.; Weippl, E.
Year: 2020
ISBN: 9783030573201
Series volume: 12279
Publisher: Springer
Pages: 39–55
Citation: Campagner, A., Cabitza, F. (2020). Back to the Feature: A Neural-Symbolic Perspective on Explainable AI. In A. Holzinger, P. Kieseberg, A.M. Tjoa, E. Weippl (Eds.), Machine Learning and Knowledge Extraction. 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Dublin, Ireland, August 25–28, 2020, Proceedings (pp. 39–55). Springer [10.1007/978-3-030-57321-8_3].
Rights: Reserved
Files in this product:
File: Campagner-2020-Machine Learning Knowledge Extraction-VoR.pdf
Access: Archive managers only
Description: Book contribution
Attachment type: Publisher's Version (Version of Record, VoR)
License: All rights reserved
Size: 606.79 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/443018
Citations
  • Scopus: 2
  • Web of Science: not available