
Bianco, S., Celona, L., Napoletano, P. (2021). Disentangling Image Distortions in Deep Feature Space. PATTERN RECOGNITION LETTERS, 148, 128-135 [10.1016/j.patrec.2021.05.008].

Disentangling Image Distortions in Deep Feature Space

Bianco, Simone; Celona, Luigi; Napoletano, Paolo
2021

Abstract

Previous literature suggests that perceptual similarity is an emergent property shared across deep visual representations. Experiments on a dataset of human-judged image distortions have shown that deep features outperform classic perceptual metrics. In this work we take a further step toward a broader understanding of this property by analyzing the capability of deep visual representations to intrinsically characterize different types of image distortion. To this end, we first generate a number of synthetically distorted images and then analyze the features extracted by different layers of different Deep Neural Networks. We observe that a dimension-reduced representation of the features extracted from a given layer permits efficient separation of distortion types in the feature space. Moreover, each network layer exhibits a different ability to separate distortion types, and this ability varies with the network architecture. Finally, we evaluate the use of features taken from the layer that best separates image distortions for: i) reduced-reference image quality assessment, and ii) characterization of distortion types and severity levels on both single- and multiple-distortion databases. Results on both tasks suggest that deep visual representations can be employed, without supervision, to efficiently characterize various image distortions.
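The pipeline the abstract outlines — synthetically distort images, extract features, reduce their dimensionality, and check whether distortion types separate in the reduced space — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the `features` function is a stand-in for CNN layer activations (the paper extracts features from actual Deep Neural Network layers), and the two distortions (additive Gaussian noise and box blur) are simplified examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, sigma=0.2):
    # synthetic distortion 1: additive Gaussian noise
    return img + rng.normal(0, sigma, img.shape)

def blur(img, k=5):
    # synthetic distortion 2: separable box blur
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def features(img, grid=4):
    # stand-in for deep features: per-block mean and std on a grid
    h, w = img.shape
    blocks = img[: h // grid * grid, : w // grid * grid].reshape(
        grid, h // grid, grid, w // grid)
    return np.concatenate([blocks.mean(axis=(1, 3)).ravel(),
                           blocks.std(axis=(1, 3)).ravel()])

# distort reference images with each distortion type
refs = [rng.random((32, 32)) for _ in range(20)]
X = np.array([features(d(r)) for r in refs for d in (add_noise, blur)])
labels = np.array([0, 1] * len(refs))  # 0 = noise, 1 = blur

# PCA via SVD: dimension-reduced representation of the features
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T
print(Z.shape)  # (40, 2)

# separability proxy: distance between per-type centroids in reduced space
gap = np.linalg.norm(Z[labels == 0].mean(0) - Z[labels == 1].mean(0))
```

In the paper, this separability is assessed per layer and per network, and the layer that best separates distortion types is then reused for reduced-reference quality assessment.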
Journal article - Scientific article
Convolutional neural networks; Deep representations; Image quality; Unsupervised learning
English
5 Jun 2021
2021
Volume 148, pages 128-135
Files for this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/316318
Citations
  • Scopus 9
  • ISI 5