
Napoletano, P. (2018). Visual descriptors for content-based retrieval of remote sensing images. INTERNATIONAL JOURNAL OF REMOTE SENSING, 39(5), 1343-1376 [10.1080/01431161.2017.1399472].

Visual descriptors for content-based retrieval of remote sensing images

Napoletano, P
2018

Abstract

In this article, we present an extensive evaluation of visual descriptors for the content-based retrieval of remote-sensing (RS) images. The evaluation includes global hand-crafted, local hand-crafted, and convolutional neural network (CNN) features coupled with four different content-based image retrieval schemes. We conducted all the experiments on two publicly available datasets: the 21-class University of California (UC) Merced Land Use/Land Cover (LandUse) dataset and the 19-class High-resolution Satellite Scene dataset (SceneSat). The content of RS images can be quite heterogeneous, ranging from images containing fine-grained textures to coarse-grained ones, or to images containing objects. It is therefore not obvious which descriptor should be employed in this domain to describe images with such variability. Results demonstrate that CNN-based features perform better than both global and local hand-crafted features, regardless of the retrieval scheme adopted. Features extracted from a residual CNN suitably fine-tuned on the RS domain show much better performance than those from a residual CNN pre-trained on multimedia scene and object images. Features extracted from the Network of Vector of Locally Aggregated Descriptors (NetVLAD), an architecture that combines a CNN with a local-feature aggregation layer, work better than other CNN solutions on those images that contain fine-grained textures and objects.
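All of the retrieval schemes compared in the article share a common pipeline: each image is mapped to a feature vector (hand-crafted or CNN-based), and archive images are then ranked by their similarity to the query. The sketch below illustrates only that generic ranking step with cosine similarity over precomputed feature vectors; the random "features" and the function names are illustrative assumptions, not the paper's actual descriptors or evaluation protocol.

```python
import numpy as np

def l2_normalize(features):
    """Row-wise L2 normalization so dot products equal cosine similarity."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, 1e-12)

def retrieve(query, database, top_k=5):
    """Rank database feature vectors by cosine similarity to the query.

    query:    (d,) feature vector of the query image
    database: (n, d) feature vectors of the archive images
    Returns the indices of the top_k most similar images, best first.
    """
    q = query / max(np.linalg.norm(query), 1e-12)
    db = l2_normalize(database)
    scores = db @ q                      # cosine similarity per archive image
    return np.argsort(-scores)[:top_k]   # descending similarity

# Toy example: random vectors stand in for real image descriptors.
rng = np.random.default_rng(0)
db = rng.standard_normal((100, 128))
query = db[42] + 0.01 * rng.standard_normal(128)  # near-duplicate of image 42
print(retrieve(query, db, top_k=3))
```

In practice the quality of the ranking depends entirely on the descriptor that produces the vectors, which is exactly the variable the article's evaluation isolates.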
Journal article - Scientific article
Content-based image retrieval, Remote sensing, Deep Learning
English
2018
39
5
1343
1376
reserved
Files in this product:
  • napoletano2018CBIR.pdf (Adobe PDF, 1.36 MB, archive managers only)
  • Visual descriptors for content based retrieval of remote sensing images.pdf (Publisher's Version / Version of Record, Adobe PDF, 3.63 MB, archive managers only)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/183680
Citations
  • Scopus 94
  • Web of Science 87