Synthetic Data Pretraining for Hyperspectral Image Super-Resolution

Agarla M.; Napoletano P.; Schettini R.
2024

Abstract

Large-scale self-supervised pretraining of deep learning models is known to be critical in several fields, such as language processing, where it has led to significant breakthroughs. Indeed, it is often more impactful than architectural design. However, the use of self-supervised pretraining lags behind in several domains, such as hyperspectral images, due to data scarcity. This paper addresses the challenge of data scarcity in the development of methods for spatial super-resolution of hyperspectral images (HSI-SR). We show that state-of-the-art HSI-SR methods are severely bottlenecked by the small paired datasets that are publicly available, also leading to unreliable assessment of the architectural merits of the models. We propose to capitalize on the abundance of high-resolution (HR) RGB images to develop a self-supervised pretraining approach that significantly improves the quality of HSI-SR models. In particular, we leverage advances in spectral reconstruction methods to create a vast dataset with high spatial resolution and plausible spectra from RGB images, to be used for pretraining HSI-SR methods. Experimental results, conducted across multiple datasets, report large gains for state-of-the-art HSI-SR methods when pretrained according to the proposed procedure, and also highlight the unreliability of ranking methods when training on small datasets.
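The abstract outlines a simple data-generation recipe: run a spectral reconstruction network over HR RGB images to obtain plausible hyperspectral cubes, then spatially degrade them to form paired data for pretraining an HSI-SR model. Below is a minimal PyTorch sketch of that idea; the `spectral_net` and `sr_model` placeholders, the 31-band output, the bicubic degradation, and the L1 objective are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def make_synthetic_pair(rgb, spectral_net, scale=4):
    """Turn a batch of HR RGB images into (LR, HR) hyperspectral training pairs.

    rgb:          float tensor of shape (B, 3, H, W) in [0, 1]
    spectral_net: any RGB-to-spectrum reconstruction model (placeholder;
                  assumed to output, e.g., a (B, 31, H, W) cube)
    scale:        spatial super-resolution factor
    """
    with torch.no_grad():
        # 1) Hallucinate a plausible hyperspectral cube from the RGB input.
        hr_hsi = spectral_net(rgb)

    # 2) Spatially degrade the cube to obtain the low-resolution input.
    #    Bicubic downsampling is one common choice of degradation model.
    lr_hsi = F.interpolate(hr_hsi, scale_factor=1.0 / scale,
                           mode="bicubic", align_corners=False)
    return lr_hsi, hr_hsi


def pretrain_step(sr_model, optimizer, rgb_batch, spectral_net, scale=4):
    # The HSI-SR model is trained on synthetic pairs exactly as it would be
    # on real ones, and can later be fine-tuned on a small real paired dataset.
    lr, hr = make_synthetic_pair(rgb_batch, spectral_net, scale)
    optimizer.zero_grad()
    loss = F.l1_loss(sr_model(lr), hr)   # L1 is a common SR objective (assumed here)
    loss.backward()
    optimizer.step()
    return loss.item()
```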
Journal article - Scientific article
Keywords: Hyperspectral images; self-supervised pretraining; spectral reconstruction; super resolution; synthetic data
Language: English
Publication date: 6 May 2024
Year: 2024
Volume: 12
Pages: 65024-65031
Open access
Aiello, E., Agarla, M., Valsesia, D., Napoletano, P., Bianchi, T., Magli, E., et al. (2024). Synthetic Data Pretraining for Hyperspectral Image Super-Resolution. IEEE ACCESS, 12, 65024-65031 [10.1109/ACCESS.2024.3396990].
Files in this record:

Aiello-2024-IEEE Access-VoR.pdf
Open access
Description: This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Attachment type: Publisher’s Version (Version of Record, VoR)
License: Creative Commons
Size: 1.23 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/504399
Citations
  • Scopus: 0
  • Web of Science (ISI): 0