
Explainability meets uncertainty quantification: Insights from feature-based model fusion on multimodal time series

Famiglini, L.; Cabitza, F.
2023

Abstract

Feature importance evaluation is one of the most prevalent approaches to interpreting Machine Learning (ML) models. A drawback of using these methods on high-dimensional datasets is that they often produce high-dimensional explanation outputs that hinder human analysis. This is especially true for explaining multimodal ML models, where the problem's complexity is further exacerbated by the inclusion of multiple data modalities and an increase in the overall number of features. This work proposes a novel approach to lowering the complexity of feature-based explanations. The proposed approach is based on uncertainty quantification techniques, which provide a principled way of reducing the number of modalities required to explain the model's predictions. We evaluated our method on three multimodal datasets comprising physiological time series. Results show that the proposed method can reduce the complexity of the explanations while maintaining a high level of predictive accuracy. This study illustrates an innovative example of the intersection between the disciplines of uncertainty quantification and explainable artificial intelligence.
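
To make the idea in the abstract concrete, the following is a minimal, self-contained sketch of one way per-modality feature attributions (e.g., SHAP values) could be aggregated and a predictive-uncertainty criterion used to prune the modalities shown in an explanation. It is an illustration only, not the paper's implementation: the entropy-based uncertainty measure, the greedy attribution-coverage rule, the threshold values, and all function and variable names are assumptions introduced here.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution (uncertainty proxy)."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs))

def modality_importance(attributions, modality_slices):
    """Sum absolute per-feature attributions (e.g., SHAP values) within each modality."""
    return {name: np.abs(attributions[idx]).sum() for name, idx in modality_slices.items()}

def select_modalities(attributions, modality_slices, probs, entropy_threshold=0.5, coverage=0.9):
    """Keep the most important modalities; show fewer when the prediction is confident.

    Hypothetical rule: for low-entropy (confident) predictions, report only the
    dominant modality; otherwise keep modalities until they cover most of the
    total attribution mass.
    """
    ranking = sorted(modality_importance(attributions, modality_slices).items(),
                     key=lambda kv: kv[1], reverse=True)
    if predictive_entropy(probs) < entropy_threshold:
        return [ranking[0][0]]
    total = sum(value for _, value in ranking) or 1.0
    kept, covered = [], 0.0
    for name, value in ranking:
        kept.append(name)
        covered += value / total
        if covered >= coverage:
            break
    return kept

# Example usage with toy values for a three-modality physiological recording.
slices = {"ECG": slice(0, 4), "EDA": slice(4, 7), "ACC": slice(7, 10)}
shap_values = np.array([0.30, -0.10, 0.05, 0.02, 0.01, 0.00, 0.02, 0.40, -0.25, 0.10])
print(select_modalities(shap_values, slices, probs=np.array([0.55, 0.45])))  # ['ACC', 'ECG']
```

The design choice illustrated here is that confident predictions (low predictive entropy) are explained with fewer modalities, while uncertain predictions retain more of them; the criterion and thresholds actually used in the paper may differ.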
Journal article - Scientific article
Complexity; Explainable AI; Feature-based explanations; Multimodal; SHAP; Uncertainty quantification;
Language: English
Date: 29 Jul 2023
Year: 2023
Volume: 100
Issue: December 2023
Article number: 101955
Citation: Folgado, D., Barandas, M., Famiglini, L., Santos, R., Cabitza, F., & Gamboa, H. (2023). Explainability meets uncertainty quantification: Insights from feature-based model fusion on multimodal time series. Information Fusion, 100, 101955. https://doi.org/10.1016/j.inffus.2023.101955
Files associated with this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10281/492720
Citations
  • Scopus 2
  • Web of Science 2