
Bianco, S., Napoletano, P., Schettini, R. (2019). Multimodal car driver stress recognition. In ACM International Conference Proceeding Series (pp. 302-307). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3329189.3329221

Multimodal car driver stress recognition

Bianco, S.; Napoletano, P.; Schettini, R.
2019

Abstract

In this paper we address the problem of multimodal car driver stress recognition. To this aim, four different signals are considered: heart rate (HR), breathing rate (BR), palm EDA (P-EDA), and perinasal perspiration (PER-EDA). The raw signals are windowed, and for each window 21 different features, including both time-domain and frequency-domain descriptors, are extracted. The recognition task is formulated as a binary stress vs. no-stress classification problem and is addressed in two different experimental setups: five-fold cross-validation and leave-one-subject-out. In both setups the extracted features are classified, both individually and concatenated, with three different classifiers (k-NN, SVM, and ANN), used both alone and with their predictions stacked. Experiments run on a publicly available database of multimodal signals, acquired in a controlled experiment on a driving simulator, show that the best recognition results are obtained by feeding the classifiers with the concatenation of the features of all the considered signals, reaching micro-average accuracies of 77.25% and 65.09% in the two experimental setups, respectively.
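The evaluation protocol described in the abstract (per-signal features concatenated into one vector, three classifiers plus a stacking ensemble, evaluated under both five-fold cross-validation and leave-one-subject-out) could be sketched with scikit-learn as follows. The data, dimensions, and hyperparameters below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the evaluation protocol: 21 features per signal,
# concatenated across the 4 signals (84-dim vectors), classified with k-NN,
# SVM, and an ANN whose predictions are stacked; evaluated with 5-fold CV
# and leave-one-subject-out (LOSO). Synthetic data stands in for the real
# driving-simulator recordings.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import (LeaveOneGroupOut, StratifiedKFold,
                                     cross_val_score)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, windows_per_subject = 10, 30
n_features = 4 * 21                              # 4 signals x 21 features each
X = rng.normal(size=(n_subjects * windows_per_subject, n_features))
y = rng.integers(0, 2, size=len(X))              # stress (1) vs. no-stress (0)
groups = np.repeat(np.arange(n_subjects), windows_per_subject)

# The three base classifiers named in the paper; their stacked combination.
base = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", make_pipeline(StandardScaler(), SVC())),
    ("ann", make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,),
                                        max_iter=500, random_state=0))),
]
stack = StackingClassifier(estimators=base, cv=3)

# Setup 1: five-fold cross-validation over windows.
acc_cv = cross_val_score(
    stack, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0)).mean()
# Setup 2: leave-one-subject-out, grouping windows by subject.
acc_loso = cross_val_score(
    stack, X, y, cv=LeaveOneGroupOut(), groups=groups).mean()
print(f"5-fold CV accuracy: {acc_cv:.3f}")
print(f"LOSO accuracy:      {acc_loso:.3f}")
```

With random labels both accuracies hover near chance; on the real features the paper reports 77.25% (five-fold) and 65.09% (LOSO), the gap reflecting how much harder generalization to unseen subjects is.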
paper
Driver fatigue; Machine learning; Stress detection
English
EAI International Conference on Pervasive Computing Technologies for Healthcare, PervasiveHealth 2019
2019
ACM International Conference Proceeding Series
9781450361262
2019
302
307
none
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/255372
Citations
  • Scopus 25
  • Web of Science 16