(2023). Symbolic Reasoning for Contrastive Explanations. (Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023).

Symbolic Reasoning for Contrastive Explanations

SEVESO, ANDREA
2023

Abstract

The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible to their end-users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that approximate the decision function of a black-box algorithm. Although several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their behaviour with respect to other versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions with respect to past outcomes. In many practical situations, human decision-makers deal with more than one machine learning model, so it is increasingly important to understand how two models work beyond their predictive performance: what their behaviour is, how they differ, and how they are alike. To date, interpretable models are synthesised to explain black boxes and their predictions; they can also be used to formally represent and measure the differences in a retrained model's behaviour when dealing with new and different data. Capturing and understanding such differences is crucial, since trust is key in any application supporting human-Artificial Intelligence (AI) decision-making. This is the idea behind ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time. We implemented and evaluated this approach on several supervised ML models trained on benchmark datasets and on a real-life application, showing through a user study that it is effective in detecting the classes whose treatment changed most and in explaining their variation. The approach is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
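To illustrate the kind of comparison ContrXT formalises, the following is a minimal, library-free Python sketch: it encodes the decision logic of an original and a retrained model for one class as Boolean functions over a toy feature vocabulary and enumerates where the two disagree. The feature names and rules are hypothetical, and the exhaustive enumeration stands in for the Binary Decision Diagrams that ContrXT uses for scalability; this is not the API of the released package.

from itertools import product

# Toy vocabulary of binary features (e.g., word presence in a text classifier).
# Feature names and rules below are illustrative only.
FEATURES = ["refund", "delay", "invoice"]

def rule_before(v):
    # Decision logic of the original model for one class, as a Boolean function.
    return v["refund"] and not v["delay"]

def rule_after(v):
    # Decision logic of the retrained model for the same class.
    return v["refund"] or v["invoice"]

def compare(old, new, features):
    # Enumerate every feature assignment and split it into the region the
    # retrained model newly assigns to the class ("added") and the region it
    # no longer assigns ("deleted").
    added, deleted, total = [], [], 0
    for bits in product([False, True], repeat=len(features)):
        v = dict(zip(features, bits))
        total += 1
        if new(v) and not old(v):
            added.append(v)
        elif old(v) and not new(v):
            deleted.append(v)
    return added, deleted, total

added, deleted, total = compare(rule_before, rule_after, FEATURES)
print(f"share of patterns newly assigned to the class: {len(added) / total:.2f}")
print(f"share of patterns no longer assigned:          {len(deleted) / total:.2f}")

A natural-language M-contrast explanation would then verbalise such differences, for instance stating that the retrained model also predicts the class when 'invoice' occurs even without 'refund'.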
MARIANI, LEONARDO
MEZZANZANICA, MARIO
eXplainable AI; Contrastive Explanations; Post-hoc explainability; XAI Interpretability; LMI
INF/01 - INFORMATICA
English
27-feb-2023
INFORMATICA
35
2021/2022
open
Files in this item:

phd_unimib_781856.pdf

Open access
Description: Symbolic Reasoning for Contrastive Explanations
Attachment type: Doctoral thesis
Size: 4.56 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/404830