
De Benedetto, F., Proverbio, A. (2017). Music, a Way to Remember. Presented at: NeuroMi Annual Meeting, Milan, Italy.

Music, a Way to Remember

De Benedetto, F.; Proverbio, A.M.
2017

Abstract

The aim of the present study was to investigate how the auditory background interacts with learning and memory. Previous studies show that music listening alters the way in which we perceive the world, and the reported effects of background music range from facilitation to interference, depending on the type of auditory stimulation and on the concurrent cognitive task. To investigate the neural mechanisms of memory encoding for faces, we recorded event-related potentials (ERPs) while participants listened to classical music (Tchaikovsky), environmental sounds (rain) or silence. Participants were 15 healthy non-musician university students engaged in an old/new memory task, involving the study of about 400 unknown faces followed by a recognition phase. Both behavioural and ERP data were recorded. For the ERPs, we measured the mean area amplitude of the Anterior Negativity (AN) at anterior-frontal sites during encoding, and of the anterior FN400 during the recognition task. ERP data were subjected to multifactorial repeated-measures ANOVAs with three within-subject factors: Condition (levels: music, rain, silence for the encoding session; music, rain, silence and new for the recognition session), Electrode (dependent on the ERP component of interest) and Hemisphere (left, right). Regarding hit percentages, recognition of faces seen during Tchaikovsky listening or in silence was higher (p = 0.03) than of faces seen while listening to rain sounds. Moreover, RTs were slower (p < 0.0001) to new than to old faces. For the encoding session we focused on the AF3-AF4 and AFF1-AFF2 sites, measuring the Anterior Negativity in the 1400-1600 ms window. The ANOVA on mean area amplitude revealed a larger AN (p < 0.01) to faces associated with music than with rain, and an intermediate amplitude (p = 0.06) to faces associated with silence. A swLORETA analysis applied to the same time window showed the involvement of the superior temporal gyrus (STG) only in the music condition.
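As an illustrative sketch only (not the study's actual analysis pipeline), the mean-area-amplitude measure described above amounts to averaging the ERP waveform within a fixed latency window. The function name, sampling rate and synthetic waveform below are all hypothetical:

```python
import numpy as np

# Hypothetical helper: average amplitude of an ERP waveform in a latency window.
def mean_area_amplitude(erp, times, t_start, t_end):
    """Mean amplitude (uV) of `erp` for samples with t_start <= t < t_end (ms)."""
    mask = (times >= t_start) & (times < t_end)
    return erp[mask].mean()

# Synthetic 2000 ms epoch sampled at 1 kHz (one sample per ms)
times = np.arange(0, 2000)             # ms
erp = np.zeros_like(times, dtype=float)
erp[1400:1600] = -2.5                  # flat -2.5 uV deflection at 1400-1600 ms

an_amplitude = mean_area_amplitude(erp, times, 1400, 1600)
print(an_amplitude)                    # -2.5
```

In practice such a measure would be computed on each participant's average ERP at the chosen electrode sites (e.g. AF3-AF4) before entering the ANOVA.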
For the recognition session we focused on the AFp3h-AFp4h and AF3-AF4 sites, measuring the FN400 between 400-600 ms in response to old (regardless of learning condition) vs. new faces. The ANOVA on mean area amplitude revealed a larger FN400 to faces studied in silence compared to faces studied during rain or music listening. Both behavioural and electrophysiological findings provide evidence that memory for faces is enhanced by listening to music. The electrophysiological evidence showed that faces seen during music listening elicited a larger AN (1400-1600 ms), and that unfamiliar material (new faces) stimulated larger prefrontal activity. The swLORETA performed on the same time window showed the right medial temporal cortex to be active during auditory stimulation (music and rain), but the magnitude of the activation was stronger for music (10.30 µA) than for rain (9.19 µA). It may be that unstructured auditory material (rain sounds) produces a smaller activation of associative auditory areas. These data show that listening to emotionally touching music has a facilitatory effect on memory for faces, probably deriving from the music's emotional content, which, in association with the faces' visual characteristics, makes the memory engram more solid.
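The repeated-measures design described above can be sketched, for a single within-subject factor (Condition), as a one-way repeated-measures ANOVA. The data below are simulated and the function is a minimal textbook implementation, not the software used in the study:

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA on a (n_subjects, n_conditions) array.
    Returns the F statistic and its degrees of freedom."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()    # Condition effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between-subject variance
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

# Simulated mean AN amplitudes: 15 participants x 3 conditions (music, rain, silence)
rng = np.random.default_rng(0)
amps = rng.normal(-2.0, 0.5, size=(15, 3))
amps[:, 0] -= 1.0   # hypothetical larger (more negative) AN in the music condition

F, df1, df2 = rm_anova_oneway(amps)
print(f"F({df1}, {df2}) = {F:.2f}")
```

A full replication of the abstract's design would add the Electrode and Hemisphere factors, e.g. via a dedicated statistics package such as statsmodels' AnovaRM.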
poster
music; neuroscience; EEG/ERP; memory; face recognition
English
NeuroMi Annual Meeting
2017
reserved
Files in this product:
File: De Benedetto-2017-NeuroMi-Poster.pdf
Description: Poster
Attachment type: Other attachments
License: All rights reserved
Size: 1.24 MB
Format: Adobe PDF
Access: restricted (archive managers only)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/421318