Rehabilitating the ColorChecker dataset for illuminant estimation

Bianco, S.
2018

Abstract

In a previous work, it was shown that there is a curious problem with the benchmark ColorChecker dataset for illuminant estimation. To wit, this dataset has at least three different sets of ground-truths. Typically, for a single algorithm a single ground-truth is used. But then different algorithms, whose performance is measured with respect to different ground-truths, are compared against each other and then ranked. This makes no sense. We show in this paper that there are also errors in how each ground-truth set was calculated. As a result, all performance rankings based on the ColorChecker dataset - and there are scores of these - are inaccurate. We therefore re-generate a new 'recommended' ground-truth set based on the calculation methodology described by Shi and Funt. We then review the performance evaluation of a range of illuminant estimation algorithms. Compared with the legacy ground-truths, we find that the difference in how algorithms perform can be large, with many local rankings of algorithms being reversed. Finally, we draw the reader's attention to our new 'open' data repository which, we hope, will allow the ColorChecker set to be rehabilitated and once again become a useful benchmark for illuminant estimation algorithms.
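The performance evaluations the abstract refers to are conventionally reported as the recovery angular error between an algorithm's estimated illuminant and the ground-truth illuminant, which is why changing the ground-truth set can reorder rankings. As a minimal sketch of that standard metric (the function name is illustrative, not taken from the paper):

```python
import numpy as np

def angular_error(est, gt):
    """Recovery angular error, in degrees, between two illuminant RGB vectors.

    Only the direction (chromaticity) of each vector matters, so the
    metric is invariant to overall scaling of either illuminant.
    """
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos_angle = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Identical chromaticities give zero error; brightness scaling is ignored.
print(angular_error([1.0, 1.0, 1.0], [2.0, 2.0, 2.0]))  # → 0.0
```

Under this metric, an algorithm's ranking depends directly on which ground-truth vector `gt` is used for each image, which is the crux of the paper's argument.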
slide + paper
1707; Electronic, Optical and Magnetic Materials; Atomic and Molecular Physics, and Optics
Language: English
Conference: Color and Imaging Conference: Color Science and Engineering Systems, Technologies, and Applications, CIC 2018
Year: 2018
Proceedings: Final Program and Proceedings - IS and T/SID Color Imaging Conference
ISBN: 9780892083374
Pages: 350-353
Hemrit, G., Finlayson, G., Gijsenij, A., Gehler, P., Bianco, S., Funt, B., et al. (2018). Rehabilitating the colorchecker dataset for illuminant estimation. In Final Program and Proceedings - IS and T/SID Color Imaging Conference (pp.350-353). Society for Imaging Science and Technology [10.2352/issn.2169-2629.2018.26.350].
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10281/219727
Citations
  • Scopus 35