
Zhang, W., Zhang, M., Zhang, J., Liu, Z., Chen, Z., Wang, J., et al. (2020). Flexible and Adaptive Fairness-Aware Learning in Non-stationary Data Streams. In Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI (pp. 399-406). IEEE Computer Society [10.1109/ICTAI50040.2020.00069].

Flexible and Adaptive Fairness-Aware Learning in Non-stationary Data Streams

Messina V.
2020

Abstract

Artificial intelligence (AI)-based decision-making systems are nowadays employed in an ever-growing number of online and offline services, some of great importance. Relying on sophisticated learning algorithms and the available data, these systems are increasingly automated and data-driven. However, they can affect individuals and communities, with ethical or legal consequences. Numerous approaches have therefore been proposed to develop decision-making systems that are discrimination-conscious by design. However, these methods assume that the underlying data distribution is stationary, without drift, which does not hold in many real-world applications. In addition, they have largely focused on minimizing discrimination while maximizing prediction performance, without the flexibility to customize this trade-off for different applications. To this end, we propose a learning algorithm for fair classification that also adapts to evolving data streams and allows flexible control over the trade-off between accuracy and fairness. Positive results on a set of discriminated, non-stationary data streams demonstrate the effectiveness and flexibility of this approach.
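The record contains no code, but the discrimination measure this line of work commonly targets, statistical parity (the gap in positive-outcome rates between the unprotected and the protected group), can be tracked over a drifting stream with a sliding window. The sketch below is an illustrative assumption, not the authors' algorithm; the class name and window size are invented for the example:

```python
from collections import deque

def statistical_parity(samples):
    """Discrimination score: P(y=1 | unprotected) - P(y=1 | protected).
    `samples` is an iterable of (protected_flag, predicted_label) pairs."""
    prot = [y for p, y in samples if p]
    unprot = [y for p, y in samples if not p]
    if not prot or not unprot:
        return 0.0  # score undefined until both groups are observed
    return sum(unprot) / len(unprot) - sum(prot) / len(prot)

class SlidingParityMonitor:
    """Tracks statistical parity over the most recent `window` stream items,
    so the estimate adapts as the underlying distribution drifts."""
    def __init__(self, window=1000):
        self.buf = deque(maxlen=window)  # old items fall out automatically

    def update(self, protected, label):
        self.buf.append((protected, label))
        return statistical_parity(self.buf)
```

A stream learner could feed each prediction through `update` and, for instance, tighten its decision threshold whenever the returned gap exceeds a tolerance, which is one simple way to expose an accuracy-fairness knob of the kind the abstract describes.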
Type: paper
Keywords: AI fairness; flexible fairness; online classification
Language: English
Conference: 32nd IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2020, 9-11 November 2020
Series: Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
ISBN: 978-1-7281-9228-4
Year: 2020
Pages: 399-406
Article number: 9288346
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/298736
Citations
  • Scopus 4
  • Web of Science 4