
Comparetti, C. M. (2012). Looking at a face. Relevant aspects of face perception in social cognition (Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012).

Looking at a face. Relevant aspects of face perception in social cognition

COMPARETTI, CHIARA MADDALENA
2012

Abstract

An important issue in human cognition concerns face processing. Faces are incontestably one of the most important biological stimuli for humans. They convey crucial social cues, such as age, sex, emotion and identity, and are the basis of verbal and non-verbal communication. Face processing and recognition have been extensively studied over the past years, using different methodologies including neuroimaging and electrophysiology, mostly aimed at testing the extent to which faces can be considered a special class of visual stimuli (e.g. Farah, Wilson, Drain, & Tanaka, 1998; but see also Gauthier, Behrmann, & Tarr, 2004). Although there is no complete agreement on this debated issue, most authors concur that there are at least two reasons that make faces special: first, face recognition exhibits functional characteristics not found in the recognition of other visual stimuli and, second, the neural substrate that mediates face recognition is anatomically separate from those mediating general object recognition (e.g. Farah, Rabinowitz, Quinn, & Liu, 2000). The majority of the literature on face processing has aimed primarily at investigating the ability to discriminate between faces and non-face objects (e.g. Gauthier, Behrmann, & Tarr, 1999), at defining which kind of processing is involved (configural vs. featural processing; e.g. Maurer, Grand, & Mondloch, 2002) and at the ability to perceive the uniqueness of individual faces (e.g. Bruce & Young, 1986), thus focusing primarily on the identity-related aspects of face recognition. It has been claimed that the recognition of facial identity is based on invariant facial features, such as the eyes, nose and mouth, and their reciprocal configural relations. Besides these invariant aspects, faces have another essential component: their changeable aspects, which carry a variety of socially important cues that are essential to social interaction. Indeed, from birth most face viewing occurs in the context of social interactions, and faces provide a wealth of information beyond identity that facilitates social communication. Facial features can move, changing their reciprocal relations and generating, for example, facial expressions and lip or eye movements. While these changeable aspects do not modify the identity of a particular face, they constitute different visual stimuli that convey different social signals. The ability to process such socially relevant information may represent a more highly developed visual perceptual skill than the recognition of identity. Only recently, however, have these aspects started to be investigated. Among the different neuroanatomical-functional models proposed in the literature, the one put forward by Haxby and colleagues (2000) takes into account both important components, the invariant features and the changeable aspects of a face. The network includes visual (“core”) regions, which process invariant facial features, as well as limbic and prefrontal (“extended”) regions that process the changeable aspects of faces (Haxby, Hoffman, & Gobbini, 2000; Ishai, 2008). Starting from Haxby's model, the present work focused on the role of the changeable aspects of a face in social interaction. More specifically, the aim of the current series of studies was to investigate how observers process, use, and react to different social signals (i.e. gaze direction, head orientation, facial expressions).
In Experiment 1 we explored the perception of different gaze directions and investigated the role of conflicting information in gaze-following behaviour using ERPs. In Experiment 2 we examined the effect of the combination of gaze direction and head position on the allocation of attentional resources, and thus on the processing of a subsequent target, using fMRI. In Experiment 3 we studied how non-emotional facial expressions could help the recognition of identity in a clinical population (i.e. congenital prosopagnosia). It is well known that others' gaze direction and body position attract our attention (Ricciardelli, Baylis, & Driver, 2000), and the existence of an automatic tendency to follow the gaze of others, leading to joint attention, has also been demonstrated (Ricciardelli, Bricolo, Aglioti, & Chelazzi, 2002). It is known that we can use these signals to modulate our attention, but the nature and the time course of the control processes involved in this modulation are still unclear. In the first part of the present work we investigated this issue using different methodologies: an electrophysiological method, to trace the time course of gaze-following behaviour (the fact that observers ultimately look at and attend to where another person is looking), and a neuroimaging method, to explore which neural systems are activated when the temporal allocation of resources is required and is influenced by seeing actors with different gaze directions and head orientations. In Experiment 1 we aimed to trace the time course of the processes involved in a gaze-cueing task in which the effect was investigated in an overt paradigm. By combining eye movement and ERP recordings, we investigated the involvement of conflict-monitoring processes in various contexts and at different times with respect to the distracter's eye movement. We used ERPs because they provide a measure of the timing of the processing of the observed gaze and of the consequent planning of a saccadic response. Participants were instructed to saccade towards one of two lateral targets in a Posner-like paradigm. Seventy-five milliseconds before or after instruction onset, a distracting face gazed towards a target (goal-directed), congruent or incongruent with the instructed direction, or towards an empty spatial location (non-goal-directed); a minimal sketch of this trial structure is given below. We analysed the N2 and the Error-Related Negativity (ERN), measures known to reflect conflict-monitoring processes (pre-response conflict and error detection, respectively). Interestingly, the results showed that a certain degree of control over the gaze-following response is possible, suggesting that the tendency to follow the gaze of others is more flexible than previously believed, as it seems to depend not only on early visuo-motor priming (Crostella, Carducci, & Aglioti, 2009), but also on the circumstances (i.e. the context) associated with the seen gaze shift. In Experiment 2 we explored activations in the face-processing neural system in order to verify whether social cues indicating mutual contact enhance or reduce attention for subsequent events. More specifically, we investigated how the processing of gaze direction (averted, direct) and head position (deviated, frontal) diminishes the attentional blink (AB) for subsequent visual events.
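The sketch below makes the Experiment 1 trial structure concrete. It is an illustrative outline, not the code actually used in the thesis: the ±75 ms gaze-shift onset and the three gaze conditions come from the description above, while the condition labels, the number of repetitions and the ERP time windows are assumptions added for illustration.

```python
# Illustrative sketch of the Experiment 1 (overt gaze-cueing) trial structure.
# NOT the thesis code: labels, repetitions and ERP windows are assumed values.
from dataclasses import dataclass
from itertools import product
import random

@dataclass
class Trial:
    soa_ms: int           # gaze-shift onset relative to instruction onset (-75 = before, +75 = after)
    gaze: str             # distracter's gaze: congruent / incongruent / non_goal_directed
    instructed_side: str  # side the participant is instructed to saccade towards

SOAS = (-75, 75)
GAZE_CONDITIONS = ("congruent", "incongruent", "non_goal_directed")
SIDES = ("left", "right")

def build_block(repetitions: int = 10, seed: int = 0) -> list:
    """Build one randomised block crossing SOA x gaze condition x instructed side."""
    rng = random.Random(seed)
    trials = [Trial(soa, gaze, side)
              for soa, gaze, side in product(SOAS, GAZE_CONDITIONS, SIDES)
              for _ in range(repetitions)]
    rng.shuffle(trials)
    return trials

# ERP components analysed in the abstract; the time windows (in ms) are assumed, typical values.
ERP_COMPONENTS = {"N2": (200, 350),   # stimulus-locked, pre-response conflict
                  "ERN": (0, 100)}    # response-locked, error detection

if __name__ == "__main__":
    block = build_block()
    print(f"{len(block)} trials; first trial: {block[0]}")
```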
We used fMRI to measure the hemodynamic response (change in blood flow) related to neural activity in the attentional and face-processing systems when the temporal allocation of resources is linked to the processing of gaze direction and head position. The results showed that when the eyes and the head were oriented in the same direction (i.e., congruent conditions), they attracted attention and increased the processing of subsequent visual events more than when they were oriented in opposite directions (i.e., incongruent conditions). In fact, the analysis showed that congruent gaze direction and head orientation increased activity within the bilateral temporoparietal junction, an area strongly associated with mentalizing and understanding the intentions of others (Redcay et al., 2010), as well as in regions of the face-perception network, such as the Occipital Face Area, the Superior Temporal Sulcus and the anterior insula (Ishai, 2008); these responses, however, were drastically diminished during the AB. Moreover, activity in the bilateral Intraparietal Sulcus, a region involved in gaze perception (Calder et al., 2007) and attention (Marois, Chun, & Gore, 2000), decreased during the AB in parallel with the decrease in recognition performance, that is, when head and gaze were averted. These results show that head and gaze directions are powerful social cues that can modulate the AB effect and, more generally, influence the observer's attention in reacting to subsequent visual stimuli. Together with the results of Experiment 1, these findings support the view that humans have a neural system for processing others' gaze direction and that this system is closely linked with attentional networks, both to allocate resources and to share attention with someone else. Other important features connected with social signals in face perception are facial expressions, which were investigated in the second part of the present work. The idea that facial identity and facial expressions are processed by separate visual routes is well established in face research. The model proposed by Haxby and colleagues (2000) contains a separate route for facial identity, but it is unknown whether a single system supports the processing of emotional and non-emotional facial expressions, where non-emotional facial expressions are expressions that are not supported by an affective state. A previous study on normal subjects (Comparetti, Ricciardelli, & Daini, 2011) suggested that non-emotional facial expressions may be processed in a specific way, dissociable from emotions and from other facial features. In perceiving emotional expressions, congenital prosopagnosic individuals (people who are unable to recognize faces and retain this disability lifelong in the absence of any obvious brain damage) are indistinguishable from controls, but it is still unknown whether they can process non-emotional facial expressions. This hypothesis was tested in Experiment 3 by investigating whether and how congenital prosopagnosic (CP) participants can process facial expressions that do not convey an affective state (O'Toole, Roark, & Abdi, 2002). Using the face inversion paradigm, as in Comparetti et al. (2011), we tested whether non-emotional facial expressions are processed by a system differentiated from the identity recognition system and from emotion processing in CP subjects with pathological scores on standard face recognition tasks. We carried out a behavioural study in which we compared performance in a recognition task and in a same/different judgement task, using upright and inverted faces.
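The logic of the inversion comparison in Experiment 3 can be summarised with a short sketch. This is not the thesis analysis code: the variable names and the toy accuracy values are assumptions; only the orientation (upright/inverted) by manipulation (features, emotional, non-emotional expression) design and the interpretation of an upright-over-inverted advantage come from the description above.

```python
# Illustrative sketch of the Experiment 3 face-inversion logic (assumed names, toy data).
from statistics import mean

MANIPULATIONS = ("internal_features", "emotional_expression", "non_emotional_expression")
ORIENTATIONS = ("upright", "inverted")

def inversion_effect(accuracy: dict, manipulation: str) -> float:
    """Mean upright minus mean inverted accuracy for one manipulation type.

    A reliable positive value indicates that the manipulated cue is processed in an
    orientation-sensitive (face-specific) way.
    """
    upright = mean(accuracy[(manipulation, "upright")])
    inverted = mean(accuracy[(manipulation, "inverted")])
    return upright - inverted

if __name__ == "__main__":
    # Toy numbers for illustration only (one value per participant).
    toy_accuracy = {
        ("non_emotional_expression", "upright"): [0.85, 0.80, 0.90],
        ("non_emotional_expression", "inverted"): [0.70, 0.65, 0.72],
    }
    print(f"Inversion effect: {inversion_effect(toy_accuracy, 'non_emotional_expression'):.2f}")
```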
In the experiment we manipulated, respectively, internal features, emotional expressions and non-emotional facial expressions. The results demonstrated that in these subjects non-emotional facial expressions are processed and facilitated the judgement in the upright orientation, whereas the emotion and feature manipulations did not. Overall, the present thesis has investigated processes associated with face perception and with the social information essential for adaptive behaviour in a complex social environment. It provides further evidence that social signals are important and are processed even when they are not relevant to the task. For example, gaze cueing is observed even when participants are motivated to orient away from the gaze direction because the target will appear in an uncued location (Experiment 1), or when gaze is not relevant to the task (Experiment 2), and facial expressions are processed even when only the identity of the face is required (Experiment 3). More specifically, we investigated how people react to social signals and plan their behaviour in response to the social information given by a face. Experiment 1 demonstrated that another's gaze is a strong trigger for allocating attention to an important location in space, but that another's gaze matters most when the two actors have something in common (i.e. the same peripheral targets); indeed, under certain conditions, and specifically when the context is not shared, gaze-following behaviour can be controlled. Experiment 2 showed how people can temporally allocate their attention in response to gaze direction and head orientation, demonstrating that when these different signals are congruent it is possible to reallocate attentional resources to process a subsequent event. Finally, Experiment 3 demonstrated that facial expressions that do not convey a universal affective state can be processed by congenital prosopagnosic individuals and that these expressions can be used as a cue to reach identity.
DAINI, ROBERTA
RICCIARDELLI, PAOLA
face, attention, gaze, prosopagnosia
M-PSI/02 - PSICOBIOLOGIA E PSICOLOGIA FISIOLOGICA
English
19 Jan 2012
Scuola di Dottorato in Psicologia e Scienze Cognitive
PSICOLOGIA SPERIMENTALE, LINGUISTICA E NEUROSCIENZE COGNITIVE - 52R
23
2010/2011
open

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/28331