
Ren, T., Gong, W., Agliardi, F., Gao, L., & Xiang, X. (2025). A style-Pix2Pix GAN framework for data augmentation in landslide semantic segmentation. Landslides. https://doi.org/10.1007/s10346-025-02621-9

A style-Pix2Pix GAN framework for data augmentation in landslide semantic segmentation

Agliardi F.
2025

Abstract

The increasing frequency of extreme rainfall events has triggered a significant rise in landslides, making accurate and timely detection essential for effective disaster management. However, the scarcity of annotated landslide data in target regions severely limits the performance of deep learning-based mapping models, especially in emergency scenarios. To address this challenge, this study proposes an innovative Style-Pix2Pix GAN framework capable of autonomously synthesizing high-fidelity landslide data from limited real samples. The framework employs a dual-network architecture: StyleGAN2 generates realistic landslide masks by capturing the morphological patterns and spatial structures of real landslides, while Pix2Pix GAN reconstructs the corresponding optical images by learning a conditional mapping between masks and images. Experiments on the Shaoguan Landslide Dataset demonstrate the effectiveness of the proposed framework. The synthetic data exhibit geometric complexity and spectral characteristics that closely resemble those of real landslides. The integration of synthetic data and real samples can enhance the training of semantic segmentation models for landslide mapping. Models trained on this combined dataset exhibit superior performance in landslide identification compared to those trained using only real data.
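The two-stage pipeline described in the abstract can be sketched as follows. This is a purely illustrative toy, not the authors' implementation: `generate_mask` stands in for a trained StyleGAN2 mask generator and `mask_to_image` for a trained Pix2Pix conditional generator; all function names, image sizes, and spectral values are assumptions made for the sketch.

```python
# Hypothetical sketch of the Style-Pix2Pix augmentation pipeline:
# stage 1 synthesizes a binary landslide mask, stage 2 conditionally
# maps that mask to a 3-band optical image. Real versions of both
# stages would be trained GAN generators.
import numpy as np


def generate_mask(latent: np.ndarray, size: int = 64) -> np.ndarray:
    """Stand-in for StyleGAN2: map a latent vector to a binary mask."""
    rng = np.random.default_rng(abs(int(latent.sum() * 1e6)) % (2**32))
    field = rng.normal(size=(size, size))
    # Smooth the noise field so the mask forms blob-like regions,
    # loosely mimicking landslide morphology.
    padded = np.pad(field, 2, mode="edge")
    smooth = np.zeros_like(field)
    for i in range(size):
        for j in range(size):
            smooth[i, j] = padded[i:i + 5, j:j + 5].mean()
    return (smooth > smooth.mean()).astype(np.uint8)


def mask_to_image(mask: np.ndarray) -> np.ndarray:
    """Stand-in for Pix2Pix: conditionally map a mask to an RGB image."""
    h, w = mask.shape
    image = np.zeros((h, w, 3), dtype=np.float32)
    image[mask == 1] = [0.6, 0.5, 0.4]   # bare-soil tones inside the scar
    image[mask == 0] = [0.2, 0.45, 0.2]  # vegetation tones elsewhere
    return image


def synthesize_pairs(n: int, latent_dim: int = 512):
    """Produce n synthetic (image, mask) pairs for segmentation training."""
    rng = np.random.default_rng(0)
    pairs = []
    for _ in range(n):
        z = rng.normal(size=latent_dim)
        mask = generate_mask(z)
        pairs.append((mask_to_image(mask), mask))
    return pairs
```

The synthetic pairs would then be pooled with the real annotated samples to train a semantic segmentation model, which is the augmentation strategy the abstract evaluates.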
Journal article - Scientific article
Data augmentation; GAN; Landslide; Semantic segmentation;
English
17 Sep 2025
2025
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/572323
Citations
  • Scopus 0
  • Web of Science (ISI) 0