
Kolyszko, M., Buzzelli, M., Bianco, S., Schettini, R. (2026). Multiscale RGB-Guided Fusion for Hyperspectral Image Super-Resolution. Journal of Imaging, 12(2), 1-24. DOI: 10.3390/jimaging12020061

Multiscale RGB-Guided Fusion for Hyperspectral Image Super-Resolution

Kolyszko, Matteo; Buzzelli, Marco; Bianco, Simone; Schettini, Raimondo
2026

Abstract

Hyperspectral imaging (HSI) enables fine spectral analysis but is often limited by low spatial resolution due to sensor constraints. To address this, we propose CGNet, a color-guided hyperspectral super-resolution network that leverages complementary information from low-resolution hyperspectral inputs and high-resolution RGB images. CGNet adopts a dual-encoder design: the RGB encoder extracts hierarchical spatial features, while the HSI encoder progressively upsamples spectral features. A multi-scale fusion decoder then combines both modalities in a coarse-to-fine manner to reconstruct the high-resolution HSI. Training is driven by a hybrid loss that balances L1 and Spectral Angle Mapper (SAM), which ablation studies confirm as the most effective formulation. Experiments on two benchmarks, ARAD1K and StereoMSI, at (Formula presented.) and (Formula presented.) upscaling factors demonstrate that CGNet consistently outperforms state-of-the-art baselines. CGNet achieves higher PSNR and SSIM, lower SAM, and reduced (Formula presented.), confirming its ability to recover sharp spatial structures while preserving spectral fidelity.
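The hybrid training objective described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the abstract states only that the loss balances L1 and Spectral Angle Mapper (SAM), so the weighting factor `lam` and the per-pixel mean formulation of SAM are assumptions.

```python
import numpy as np

def sam_loss(pred, target, eps=1e-8):
    """Mean Spectral Angle Mapper (radians) over all pixels.

    pred, target: arrays of shape (H, W, C), where C is the
    number of spectral bands.
    """
    dot = np.sum(pred * target, axis=-1)
    norms = np.linalg.norm(pred, axis=-1) * np.linalg.norm(target, axis=-1)
    # Clip to the valid arccos domain to guard against rounding error.
    cos_angle = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.arccos(cos_angle).mean()

def hybrid_loss(pred, target, lam=0.1):
    """L1 reconstruction term plus a weighted spectral-angle term.

    `lam` is a hypothetical balancing weight; the paper does not
    report its value in the abstract.
    """
    l1 = np.abs(pred - target).mean()
    return l1 + lam * sam_loss(pred, target)
```

The L1 term drives spatial fidelity (sharp structures), while the SAM term penalizes angular deviation between predicted and ground-truth spectra independently of their magnitude, which is why such combinations are commonly used to preserve spectral shape.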
Journal article - Scientific article
deep learning; hyperspectral imaging; image fusion; RGB guidance; super-resolution;
English
Publication date: 28-Jan-2026
Year: 2026
Volume: 12
Issue: 2
First page: 1
Last page: 24
Article number: 61
Access: open
Files in this item:
File: Kolyszko et al-2026-J. Imaging-VoR.pdf
Access: open access
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 2.03 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10281/588502
Citations
  • Scopus: 0
  • Web of Science: 0