
Castelnovo, A., Inverardi, N., Malandri, L., Mercorio, F., Mezzanzanica, M., Seveso, A. (2023). Leveraging Group Contrastive Explanations for Handling Fairness. In Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part III (pp. 332–345). Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-44070-0_17].

Leveraging Group Contrastive Explanations for Handling Fairness

Castelnovo A.; Malandri L.; Mercorio F.; Mezzanzanica M.; Seveso A.
2023

Abstract

With the increasing adoption of Artificial Intelligence (AI) in corporate decision-making, developing systems that behave fairly and do not discriminate against specific groups of people becomes crucial. Reaching this objective requires a multidisciplinary approach that includes domain experts, data scientists, philosophers, and legal experts, to ensure complete accountability for algorithmic decisions. In such a context, Explainable AI (XAI) plays a key role in enabling professionals from different backgrounds to comprehend how automated decision-making processes work and, consequently, to identify the presence of fairness issues. This paper presents FairX, an innovative approach that uses Group-Contrastive (G-contrast) explanations to estimate whether different decision criteria apply across distinct population subgroups. FairX provides actionable insights through a comprehensive explanation of the decision-making process, enabling businesses to detect the presence of direct discrimination on the target variable and to choose the most appropriate fairness framework.
paper
Contrastive Explanation; Direct discrimination; Fairness; XAI;
English
First World Conference, xAI 2023 - July 26–28, 2023
2023
Longo, L
Explainable Artificial Intelligence First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part III
9783031440694
2023
1903 CCIS
332
345
none
Castelnovo, A., Inverardi, N., Malandri, L., Mercorio, F., Mezzanzanica, M., Seveso, A. (2023). Leveraging Group Contrastive Explanations for Handling Fairness. In Explainable Artificial Intelligence First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part III (pp.332-345). Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-44070-0_17].
Files for this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10281/451463
Citations
  • Scopus 0