Cabitza, F., Campagner, A., Albano, D., Aliprandi, A., Bruno, A., Chianca, V., et al. (2020). The elephant in the machine: Proposing a new metric of data reliability and its application to a medical case to assess classification reliability. APPLIED SCIENCES, 10(11) [10.3390/app10114014].

The elephant in the machine: Proposing a new metric of data reliability and its application to a medical case to assess classification reliability

Cabitza F.; Campagner A.
2020

Abstract

In this paper, we present and discuss a novel reliability metric to quantify the extent to which a ground truth, generated in multi-rater settings, can serve as a reliable basis for the training and validation of machine learning predictive models. To define this metric, three dimensions are taken into account: agreement (that is, how much a group of raters mutually agree on a single case); confidence (that is, how certain a rater is of each rating expressed); and competence (that is, how accurate a rater is). The metric therefore produces a reliability score weighted by the raters' confidence and competence, but only the former needs to be actually collected, as the latter can be derived from the ratings themselves if no further information is available. We found that our proposal was both more conservative and more robust to known paradoxes than other existing agreement measures, by virtue of a more articulated notion of agreement due to chance, based on an empirical estimation of the reliability of the individual raters involved. We discuss the above metric within a realistic annotation task that involved 13 expert radiologists in labeling the MRNet dataset. We also provide a nomogram by which to assess the actual accuracy of a classification model, given the reliability of its ground truth. In this respect, we also make the point that theoretical estimates of model performance are consistently overestimated if ground truth reliability is not properly taken into account.
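The abstract does not report the metric's formula. As a rough illustration of the general idea (an agreement score weighted by per-rating confidence and by an empirically estimated competence), here is a minimal Python sketch; the function name and the competence-from-majority heuristic are our assumptions for illustration, not the paper's actual definition:

```python
from collections import Counter

def weighted_reliability(ratings, confidence):
    """Illustrative reliability score for a multi-rater ground truth.

    ratings: list of cases, each a list with one label per rater.
    confidence: matching list of per-rating confidences in [0, 1].
    Competence is estimated from the ratings themselves, as each rater's
    agreement rate with the per-case majority label.
    """
    n_raters = len(ratings[0])
    # Per-case majority label.
    majority = [Counter(case).most_common(1)[0][0] for case in ratings]
    # Empirical competence proxy: fraction of cases matching the majority.
    competence = [
        sum(case[r] == m for case, m in zip(ratings, majority)) / len(ratings)
        for r in range(n_raters)
    ]
    # Per-case score: weighted share of ratings that agree with the majority,
    # with weights = confidence * competence.
    scores = []
    for case, conf, m in zip(ratings, confidence, majority):
        weights = [c * k for c, k in zip(conf, competence)]
        agree = sum(w for w, lab in zip(weights, case) if lab == m)
        total = sum(weights)
        scores.append(agree / total if total else 0.0)
    return sum(scores) / len(scores)
```

For example, three raters labeling two knee MRI cases, where the third rater dissents once, yield a score below 1 even at full confidence, reflecting the reduced weight of the less competent rater.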
Journal article - Scientific article
Ground truth; Inter-rater agreement; Knee; Machine learning; Magnetic resonance imaging; MRNet; Reliability
English
10-Jun-2020
Year: 2020
Volume: 10
Issue: 11
Article number: 4014
Open access
Files for this record:
File: applsci-10-04014.pdf
Open access
Attachment type: Publisher's Version (Version of Record, VoR)
Size: 2.88 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/292803
Citations
  • Scopus 19
  • Web of Science (ISI) 17