2B: Face Perception
Track 2
Friday, November 27, 2015 | 1:30 PM - 3:00 PM | Princes Ballroom B
Speakers
Dr Bronson Harry
Postdoctoral Fellow
MARCS Institute
Evidence for integrated face and body representations in the anterior temporal lobes
1:30 PM - 1:50 PM
Abstract
Research on visual face perception has revealed a region in the ventral anterior temporal lobes, often referred to as the anterior face patch (AFP), which responds strongly to images of faces relative to images of other categories. To date, the selectivity of the AFP has been examined by contrasting responses to faces against only a small selection of categories. Here, we assess the selectivity of the AFP in humans with a broad range of visual control stimuli to provide a strong test of face selectivity in this region. In Experiment 1, participants viewed images from 20 stimulus categories in an event-related functional magnetic resonance imaging design. Faces evoked more activity than all other 19 categories in the left AFP. In the right AFP, equally strong responses were observed for both faces and headless bodies. To pursue this finding, in Experiment 2 we used multi-voxel pattern analysis to examine whether the strong response to face and body stimuli reflects a common coding of both classes, or instead overlapping but distinct representations. Face and whole-body responses were significantly positively correlated in the right AFP, but face and body-part responses were not. This finding suggests there is shared neural coding of faces and whole bodies in the AFP that does not extend to individual body parts. In contrast, the same approach revealed distinct face and body representations in the right fusiform gyrus. Taken together, the results suggest increasing convergence of distinct sources of person-related perceptual information proceeding from posterior to anterior temporal cortex.
Miss Jessica McFadyen
PhD Candidate
The University of Queensland
The Subcortical Route to the Amygdala: Spatial Frequencies and Facial Expressions
1:50 PM - 2:10 PM
Abstract
Converging evidence from human and animal studies suggests that there is a colliculo-pulvinar subcortical pathway – the “low road” – that allows low spatial frequency visual information to reach the amygdala, bypassing the visual cortex. While this has been investigated with fMRI in humans, it has not yet been determined whether this route provides a temporal advantage, nor whether such information is conveyed directly to the amygdala without input from higher cortical areas. This study employed dynamic causal modelling (DCM) to investigate whether a subcortical route is engaged in visual processing above and beyond a cortical pathway and, if so, whether a subcortical route is used primarily to transmit salient low spatial frequency information. Neural activity was recorded using magnetoencephalography (MEG) while participants performed a gender discrimination task on neutral or fearful faces presented in broad (BSF), low (LSF), or high (HSF) spatial frequencies. Mass-univariate sensor space analyses revealed a temporal advantage of LSF (85ms) over HSF (175ms) at occipital sensors. Moreover, greater neural signal intensity for LSF faces was associated with faster reaction times. Bayesian model comparison of DCMs demonstrated that neural activity for LSF faces was best modelled by a dual route (i.e. cortical and subcortical amygdala connections). Overall, these initial results support the “low road” hypothesis for conveying visual information directly to the amygdala, facilitating coarse but rapid appraisal of visual stimuli.
Professor Peter Rendell
Professor of Psychology
Australian Catholic University
Ageing effects spared with regulation of facial activity for intense emotional stimuli that are positive but not negative
2:15 PM - 2:35 PM
Abstract
Ageing is generally associated with decline, but emotion regulation is regarded as one of the few exceptions. Previous research has found that older adults can regulate facial reactivity to positive and negative pictures as effectively as young adults. The current study investigated age differences in the regulation of facial reactivity using dynamic stimuli that varied in intensity. The positive stimuli were amusing scenes from films and the negative stimuli were sad scenes from films. Forty young adults (18-33 years) and 40 older adults (60-85 years) were shown sets of emotional films (8 amusing, 8 sad) under watch and suppression conditions. Facial reactivity was objectively monitored using facial electromyography (EMG). Both older and younger adults increased zygomaticus activity for amusing films and corrugator activity when watching sad films. Both younger and older adults could regulate facial reactivity for amusing stimuli, though not back to baseline levels. Only younger adults could reduce corrugator activity for sad films. Even with more intense stimuli, older adults showed lower levels of facial activity when simply watching stimuli, and thus had less to suppress. With more intense stimuli, the sparing of ageing effects is maintained for positive stimuli but not for negative stimuli. There do seem to be some limitations to older adults' otherwise exceptional ability to regulate emotion.
A/Prof Paul Corballis
Associate Professor
University of Auckland
The Auckland Face Simulator: A parametric tool for facial research
2:35 PM - 2:55 PM
Abstract
The perception and recognition of faces is our most complicated and socially significant ability. Facial invariants convey information about a person’s identity, gender, and ethnicity, while the recognition of emotions, social cues and cognitive states involves processing dynamic variations in contractions of more than 40 facial muscles. Despite several decades of productive research, face perception remains poorly understood – at least in part because progress has been hampered by poor stimulus control and heavy reliance on static images. Here, we introduce a uniquely realistic computational simulation of the human head and face, the “Auckland Face Simulator” (AFS). This resource simulates facial reflectance, structure and movement to provide a dynamic and compelling representation of the facial musculature, skin, features, and eyes. A unique strength of the AFS is the ability to create a controllable and highly realistic representation of the face in motion. In addition, we describe the design of a framework to extend the simulation to create autonomous, expressive, and interactive models of behaviour based on current theory in affective and cognitive neuroscience. A major goal of our research is to provide a workspace to integrate many different theories and models to create a functioning sketch of several fundamental aspects of human behaviour – including face-to-face social interactions – to explore how complex social behaviours may emerge from interactions of low-level and high-level neural systems.
Chairperson
Will Hayward
Professor
University of Auckland