Salvi, M., Seoni, S., Campagner, A., Gertych, A., Acharya, U., Molinari, F., et al. (2025). Explainability and uncertainty: Two sides of the same coin for enhancing the interpretability of deep learning models in healthcare. INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, 197(May 2025) [10.1016/j.ijmedinf.2025.105846].

Explainability and uncertainty: Two sides of the same coin for enhancing the interpretability of deep learning models in healthcare

Campagner, A.; Cabitza, F.
2025

Abstract

Background: The increasing use of Deep Learning (DL) in healthcare has highlighted the critical need for improved transparency and interpretability. While Explainable Artificial Intelligence (XAI) methods provide insights into model predictions, reliability cannot be guaranteed by simply relying on explanations. Objectives: This position paper proposes the integration of Uncertainty Quantification (UQ) with XAI methods to improve model reliability and trustworthiness in healthcare applications. Methods: We examine state-of-the-art XAI and UQ techniques, discuss implementation challenges, and suggest solutions to combine UQ with XAI methods. We propose a framework for estimating both aleatoric and epistemic uncertainty in the XAI context, providing illustrative examples of their potential application. Results: Our analysis indicates that integrating UQ with XAI could significantly enhance the reliability of DL models in practice. This approach has the potential to reduce interpretation biases and over-reliance, leading to more cautious and conscious use of AI in healthcare.
Journal article - Scientific article
Keywords: AI Trustworthiness; Deep Learning; DL Model Interpretability; Explainable AI; Healthcare; Uncertainty Quantification
Language: English
Publication date: 21 Feb 2025
Year: 2025
Volume: 197
Issue: May 2025
Article number: 105846
Access: open
Files in this product:
File: Salvi-2025-International Journal of Medical Informatics-VoR.pdf
Open access
Description: This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 3.32 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/546550
Citations
  • Scopus 0
  • Web of Science (ISI) 0