Brankovic, A., Cook, D., Rahman, J., Delaforce, A., Li, J., Magrabi, F., et al. (2025). Clinician-informed XAI evaluation checklist with metrics (CLIX-M) for AI-powered clinical decision support systems. npj Digital Medicine, 8(1). doi:10.1038/s41746-025-01764-2
Clinician-informed XAI evaluation checklist with metrics (CLIX-M) for AI-powered clinical decision support systems
Cabitza F.
2025
Abstract
The rapid growth of clinical explainable AI (XAI) models has raised concerns over unclear purposes and false hope regarding explanations. Currently, no standardised metrics exist for XAI evaluation. We developed a clinician-informed, 14-item checklist covering clinical, machine, and decision attributes. This is the first step toward XAI standardisation and transparent reporting of XAI methods to enhance trust, reduce risks, foster AI adoption, and improve decisions, in order to determine the true clinical potential of applied XAI.

| File | Size | Format | |
|---|---|---|---|
| Brankovic et al-2025-npj Digit. Med-VoR.pdf (open access; attachment type: Publisher's Version, Version of Record, VoR; license: Creative Commons) | 460.67 kB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.