
Learning Visual Concepts in Images Using Temporal Convolutional Networks

Malandri L.
2019

Abstract

With the advancement of technology and the rapid rise of social media, people now tend to share their experiences and feelings through images. Effectively mining opinions and identifying sentiments from the images available on various platforms is useful for tasks such as social media marketing and user profiling. Visual sentiment concepts extracted from the tags of online images are known as adjective-noun pairs, and the sentiment they convey is usually predicted with opinion mining methods. However, the required pre-training is time consuming and demands a large amount of storage. Instead of collecting tags for the images, in this paper we propose to predict the corresponding sentiments automatically using deep convolutional networks. Next, to model the sequence of different sentiments within a single image, we employ a recurrent neural network, which is able to remember the context of the different sentiments in an image. We applied our method to the Flickr dataset, where it outperformed baselines by 3-20%.
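
As a rough illustration only (not the authors' implementation), the sketch below shows the kind of pipeline the abstract describes: a pretrained convolutional backbone encodes image regions and a recurrent network models the sequence of sentiment cues within one image. The ResNet-18 backbone, the fixed number of regions, the hidden size, and the three-class sentiment head are all illustrative assumptions.

# Minimal sketch of a CNN + recurrent model for image sentiment:
# a CNN encodes image regions, an LSTM models the sequence of
# region-level sentiment cues, and a linear head predicts
# negative/neutral/positive. Region count, hidden size, and class
# count are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
from torchvision import models


class ImageSentimentRNN(nn.Module):
    def __init__(self, num_classes: int = 3, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights optional
        # Drop the final fully connected layer; keep the 512-d pooled features.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.rnn = nn.LSTM(input_size=512, hidden_size=hidden_size,
                           batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, regions: torch.Tensor) -> torch.Tensor:
        # regions: (batch, num_regions, 3, 224, 224), e.g. crops of one image
        b, r = regions.shape[:2]
        feats = self.cnn(regions.flatten(0, 1)).flatten(1)  # (b*r, 512)
        feats = feats.view(b, r, -1)                         # (b, r, 512)
        _, (h_n, _) = self.rnn(feats)                        # last hidden state
        return self.classifier(h_n[-1])                      # (b, num_classes)


if __name__ == "__main__":
    model = ImageSentimentRNN()
    dummy = torch.randn(2, 4, 3, 224, 224)  # 2 images, 4 regions each
    print(model(dummy).shape)               # torch.Size([2, 3])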
Document type: slide + paper
Keywords: Image Sentiment Analysis; Temporal Convolutional Networks
Language: English
Conference: 8th IEEE Symposium Series on Computational Intelligence, SSCI 2018, 18-21 November 2018
Conference year: 2018
Proceedings title: Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018
ISBN: 978-1-5386-9276-9
Publication year: 2019
First page: 1280
Last page: 1284
Article number: 8628703
Qian, C., Chaturvedi, I., Poria, S., Cambria, E., Malandri, L. (2019). Learning Visual Concepts in Images Using Temporal Convolutional Networks. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018 (pp.1280-1284). Institute of Electrical and Electronics Engineers Inc. [10.1109/SSCI.2018.8628703].
Files associated with this record:
There are no files associated with this record.


Use this identifier to cite or link to this record: https://hdl.handle.net/10281/401330
Citations
  • Scopus: 3
  • Web of Science (ISI): 2