
Campagner, A., Ciucci, D., Cabitza, F. (2023). Aggregation models in ensemble learning: A large-scale comparison. INFORMATION FUSION, 90(February 2023), 241-252 [10.1016/j.inffus.2022.09.015].

Aggregation models in ensemble learning: A large-scale comparison

Campagner A.; Ciucci D.; Cabitza F.
2023

Abstract

In this work we present a large-scale comparison of 21 learning and aggregation methods proposed in the ensemble learning, social choice theory (SCT), information fusion and uncertainty management (IF-UM) and collective intelligence (CI) fields, based on a large collection of 40 benchmark datasets. The results of this comparison show that Bagging-based approaches reported performances comparable with XGBoost, and significantly outperformed other Boosting methods. In particular, ExtraTree-based approaches were as accurate as both XGBoost and Decision Tree-based ones while also being more computationally efficient. We also show how standard Bagging-based and IF-UM-inspired approaches outperformed the approaches based on CI and SCT. IF-UM-inspired approaches, in particular, reported the best performance (together with standard ExtraTrees), as well as the strongest resistance to label noise (together with XGBoost). Based on our results, we provide useful indications on the practical effectiveness of different state-of-the-art ensemble and aggregation methods in general settings.
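The abstract compares Bagging-based, ExtraTree-based, and Boosting ensembles on benchmark datasets. As a minimal illustrative sketch (not the paper's actual 21-method, 40-dataset protocol), a comparison of this kind can be run with scikit-learn; here `GradientBoostingClassifier` stands in for XGBoost, and the dataset and hyperparameters are arbitrary choices for the example:

```python
# Hypothetical sketch: cross-validated accuracy of three ensemble families
# on a single benchmark dataset. Not the paper's experimental setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (BaggingClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier)
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

models = {
    "Bagging (trees)": BaggingClassifier(n_estimators=50, random_state=0),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=50, random_state=0),
    # Stand-in for XGBoost; the paper uses XGBoost itself.
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
}

# Mean 5-fold cross-validated accuracy per method.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
```

A full replication would instead sweep many datasets, tune hyperparameters, and apply statistical tests over the per-dataset results, as large-scale comparisons of this kind typically do.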
Journal article - Scientific article
Aggregation methods; Collective intelligence; Ensemble learning; Information fusion; Social choice theory; Uncertainty management;
Language: English
Date: 23 Sep 2022
Year: 2023
Volume: 90
Issue: February 2023
Pages: 241-252
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/394398
Citations
  • Scopus: 11
  • Web of Science (ISI): 9