
Gasparini, F., Corchs, S., Grossi, A. (2016). Affective audio analysis using objective features. Presented at: Second International Meeting of the Milan Center for Neuroscience (Neuromi): Prediction and Prevention of Dementia: New Hope, Milano.

Affective audio analysis using objective features

GASPARINI, FRANCESCA (First);
CORCHS, SILVIA ELENA (Second);
2016

Abstract

Several studies in the literature address the problem of emotion classification of visual stimuli, but less effort has been devoted to emotion classification of audio stimuli. Most of these studies start from the analysis of physiological signals such as EEG data [1]. The aim of this work is to evaluate whether it is possible to classify audio signals according to the emotions they elicit using only objective features. In our analysis we adopt the IADS (International Affective Digitized Sounds) database [2], composed of 167 auditory stimuli. The database provides pleasure, arousal, and dominance ratings for each audio stimulus, recorded from 100 subjects during psychophysical tests. The database comprises different types of audio, from environmental sounds to music, and from single sounds to complex ones. We start by considering the affective dimension of valence within three categorical classes: low, medium, and high pleasure. To investigate this classification task we consider 35 features in both the time and frequency domains. With these features, we test three types of classifiers: Bayesian, K-Nearest Neighbor, and Classification and Regression Tree [3]. We apply a feature selection strategy to find the most significant features. Using the selected features and the Bayesian classifier we reach an average accuracy of 45%, a result similar to that achieved using physiological signals [1]. Starting from these results, we believe that dividing each audio file into frames and applying a windowing strategy to evaluate the objective features could significantly increase the final classification performance.
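As a rough illustration of the pipeline the abstract describes (framing each audio file with a windowing strategy, then computing time- and frequency-domain objective features), the sketch below computes two common objective features, zero-crossing rate (time domain) and spectral centroid (frequency domain), over overlapping frames and aggregates them into a clip-level descriptor. The frame sizes, the two features, and the toy signals are illustrative assumptions only; they are not the paper's actual 35-feature set.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames (windowing strategy)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def zero_crossing_rate(frame):
    """Time-domain feature: fraction of adjacent samples with a sign change."""
    return np.mean(np.abs(np.diff(np.sign(frame))) > 0)

def spectral_centroid(frame, sr=22050):
    """Frequency-domain feature: magnitude-weighted mean frequency (Hz)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / (np.sum(mag) + 1e-12)

def features(x, sr=22050):
    """Per-frame features aggregated into a fixed-length clip descriptor."""
    frames = frame_signal(x)
    zcr = np.array([zero_crossing_rate(f) for f in frames])
    cent = np.array([spectral_centroid(f, sr) for f in frames])
    return np.array([zcr.mean(), zcr.std(), cent.mean(), cent.std()])

# Toy one-second signals standing in for two stimuli: a low-frequency tone
# and a noisy high-frequency tone (hypothetical examples, not IADS data).
sr = 22050
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
low = np.sin(2 * np.pi * 220 * t)
high = np.sin(2 * np.pi * 4000 * t) + 0.1 * rng.standard_normal(sr)
print("low-tone descriptor :", features(low, sr))
print("high-tone descriptor:", features(high, sr))
```

A clip descriptor like this could then be fed to any of the classifiers the abstract tests (Bayesian, K-Nearest Neighbor, CART); the aggregation over frames is exactly the kind of windowing-based evaluation the authors propose as future work.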
abstract + poster
affective computing, audio processing
English
Second International Meeting of the Milan Center for Neuroscience (Neuromi): Prediction and Prevention of Dementia: New Hope
Gasparini, F; Corchs, S; Grossi, A
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/132320
Citations
  • Scopus: N/A
  • Web of Science: 0