Alimonda, N., Castelnovo, A., Crupi, R., Mercorio, F., Mezzanzanica, M. (2023). Preserving Utility in Fair Top-k Ranking with Intersectional Bias. In Advances in Bias and Fairness in Information Retrieval: 4th International Workshop, BIAS 2023, Dublin, Ireland, April 2, 2023, Revised Selected Papers (pp. 59–73). Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-37249-0_5].
Preserving Utility in Fair Top-k Ranking with Intersectional Bias
Castelnovo, Alessandro; Mercorio, Fabio; Mezzanzanica, Mario
2023
Abstract
Ranking is required by many real-world applications, such as search, personalisation, recommendation, and filtering. Recent research has focused on developing reliable ranking algorithms that maintain fairness in their outcomes. However, only a few consider multiple protected groups, since this extension introduces significant challenges. While useful in research settings, considering only one binary sensitive feature when handling fairness is inappropriate if the algorithm must be deployed responsibly in real-world applications. Our work builds on Multinomial FA*IR, a fair top-k ranking algorithm with multiple protected groups, which we extend to give users the option to balance fairness and utility, adapting the final ranking accordingly. Our experimental results show that our approach can find better alternative solutions, overlooked by Multinomial FA*IR, without violating fairness boundaries. The code of the implemented solution and the experiments are publicly available to the community in a GitHub repository.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.