Newborns’ early ability to match non-speech sounds and facial movements

ADDABBO, MARGARET (First author);
TURATI, CHIARA (Last author)
2016

Abstract

Evidence suggests that already at birth newborn infants can match facial and vocal speech gestures (e.g., Sai, 2005) and non-human faces and vocalizations, primarily on the basis of audio-visual temporal synchrony (Lewkowicz et al., 2010). To our knowledge, no study has investigated whether newborns can detect audio-visual relations between facial gestures and non-speech sounds such as yawns and hiccups, behaviors that emerge in the earliest stages of fetal life (Piontelli et al., 2015). Using an intermodal preference procedure, we explored whether newborns are able to match the facial gestures of a yawn and a hiccup with their corresponding sounds. Looking times and orienting responses were measured while 2-day-old newborns concurrently viewed two videos of a woman’s face performing a yawn and a hiccup under two experimental conditions: in the absence of auditory stimulation (silent condition) and in the presence of a sound (hiccup or yawn) that matched only one of the two facial gestures (in-sound condition). Although the analyses did not reach full statistical significance, newborns showed a strong tendency to match the facial gesture with the corresponding sound, as evidenced by increased looking times to the sound-matched facial gesture in the in-sound condition. Conversely, no visual preference emerged in the silent condition. Our results suggest that, as early as birth, newborns may be able to detect, discriminate, and match facial gestures and non-speech sounds. The next step will be to assess whether this ability is based on audio-visual temporal synchrony or whether it reflects an earlier, more global capacity to integrate non-speech sounds and facial gestures that are part of newborns’ sensory-motor repertoire.
abstract + poster
Newborns, multisensory integration, intermodal matching procedure, sensorimotor experience
English
CEU Conference on Cognitive Development (BCCCD), January 7-9, 2016
Addabbo, M., Marchis, I., & Turati, C. (2016). Newborns’ early ability to match non-speech sounds and facial movements. Presented at: CEU Conference on Cognitive Development (BCCCD), January 7-9, Budapest, Hungary.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/143428