Augmented Intelligence (AuI) refers to the use of artificial intelligence (AI) to amplify certain cognitive tasks performed by human decision-makers. However, there are concerns that AI's increasing capability and alignment with human values may undermine user agency, autonomy, and responsible decision-making. To address these concerns, we conducted a user study in the field of orthopedic radiology diagnosis, introducing a reflective XAI (explainable AI) support system that aimed to stimulate human reflection, and we evaluated its impact in terms of decision performance, decision confidence, and perceived utility. Specifically, the reflective XAI support system prompted users to reflect on the dependability of AI-generated advice by presenting evidence both in favor of and against its recommendation. This evidence was presented via two cases that closely resembled a given base case, along with pixel attribution maps. These cases were associated with the same AI advice as the base case, but one case was accurate while the other was erroneous with respect to the ground truth. While the introduction of this support system did not significantly enhance diagnostic accuracy, it was highly valued by more experienced users. Based on the findings of this study, we advocate for further research to validate the potential of reflective XAI in fostering more informed and responsible decision-making, ultimately preserving human agency.

Cabitza, F., Campagner, A., Famiglini, L., Natali, C., Caccavella, V., Gallazzi, E. (2023). Let Me Think! Investigating the Effect of Explanations Feeding Doubts About the AI Advice. In Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 – September 1, 2023, Proceedings (pp.155-169). Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-40837-3_10].

Let Me Think! Investigating the Effect of Explanations Feeding Doubts About the AI Advice

Cabitza F.; Campagner A.; Famiglini L.; Natali C.; Caccavella V.; Gallazzi E.
2023

paper
eXplainable AI; medical machine learning; reflective AI; similarity metrics
English
Conference: Machine Learning and Knowledge Extraction, 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 – September 1, 2023
Year: 2023
Book title: Machine Learning and Knowledge Extraction, 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 – September 1, 2023, Proceedings
ISBN: 9783031408366
Series volume: 14065 LNCS
Pages: 155-169
Rights: reserved
Files in this product:
File: Cabitza-2023-LNCS-VoR.pdf (Adobe PDF, 1.52 MB)
Access: archive managers only
Attachment type: Publisher's Version (Version of Record, VoR)
License: All rights reserved

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/456600
Citations
  • Scopus: 1