Never tell me the odds: Investigating pro-hoc explanations in medical decision making

Cabitza, F.; Natali, C.; Famiglini, L.; Campagner, A.; Caccavella, V.; Gallazzi, E.
2024

Abstract

This paper examines a kind of explainable AI centered around what we term pro-hoc explanations: a form of support that offers alternative explanations, one for each possible outcome, instead of a single post-hoc explanation accompanying specific advice. Specifically, our support mechanism relies on explanation by examples, presenting analogous cases for each category in a binary setting. Pro-hoc explanations are an instance of what we call frictional AI, a general class of decision support aimed at achieving a useful compromise between increased decision effectiveness and the mitigation of cognitive risks such as over-reliance, automation bias, and deskilling. To illustrate an instance of frictional AI, we conducted an empirical user study investigating its impact on the radiological detection of vertebral fractures in X-rays. Our study engaged 16 orthopedists in a ‘human-first, second-opinion’ interaction protocol: clinicians first assessed the X-rays without AI assistance and then provided their final diagnoses after considering the pro-hoc explanations. Our findings indicate that physicians, particularly those with less experience, perceived pro-hoc XAI support as significantly beneficial, even though it did not notably improve their diagnostic accuracy. However, their increased confidence in their final diagnoses suggests a positive overall impact. Given the promisingly large effect size observed, our results advocate further research into pro-hoc explanations specifically and into the broader concept of frictional AI.
Publication type: Journal article - Scientific article
Keywords: Decision support; eXplainable AI; Frictional AI; Machine learning
Language: English
Publication date: 1 March 2024
Year: 2024
Volume: 150
Issue: April 2024
Pages: 1-11
Article number: 102819
Access: open
Cabitza, F., Natali, C., Famiglini, L., Campagner, A., Caccavella, V., Gallazzi, E. (2024). Never tell me the odds: Investigating pro-hoc explanations in medical decision making. ARTIFICIAL INTELLIGENCE IN MEDICINE, 150(April 2024), 1-11 [10.1016/j.artmed.2024.102819].
Files in this record:

Cabitza-2024-AI Med-VoR.pdf
Access: open access
Attachment type: Publisher’s Version (Version of Record, VoR)
License: Creative Commons
Size: 1.73 MB
Format: Adobe PDF

Cabitza-2024-AI Med-preprint.pdf
Access: open access
Attachment type: Submitted Version (Pre-print)
License: Other
Size: 5.59 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/466918
Citations
  • Scopus: 0
  • Web of Science (ISI): not available