Poster

Attending to expression or identity of dynamic faces engages different cortical areas

MPS-Authors

Dobs, K (/persons/resource/persons83890)
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society
Max Planck Institute for Biological Cybernetics, Max Planck Society
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society

Schultz, J (/persons/resource/persons84201)
Max Planck Institute for Biological Cybernetics, Max Planck Society
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Bülthoff, HH (/persons/resource/persons83839)
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society
Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Dobs, K., Schultz, J., Bülthoff, H., & Gardner, J. (2013). Attending to expression or identity of dynamic faces engages different cortical areas. Poster presented at 43rd Annual Meeting of the Society for Neuroscience (Neuroscience 2013), San Diego, CA, USA.


Cite as: https://hdl.handle.net/21.11116/0000-0001-4E2F-5
Abstract
The identity and facial expression of the faces we interact with are thought to be represented as invariant and changeable aspects of faces, respectively. What cortical mechanisms allow us to selectively extract information about these two important cues? We had subjects attend to either the identity or the expression of the same dynamic face stimuli and decoded concurrently measured fMRI activity to ask whether distinct cortical areas were differentially engaged by the two tasks.
We measured fMRI activity (3x3x3 mm voxels, 34 slices, TR = 1.5 s, 4 T) in 6 human subjects (2 female) while they performed a change-detection task on dynamic face stimuli. At trial onset, a cue (the letter "E" or "I") was presented for 0.5 s, instructing subjects to attend to either the expression or the identity of animated faces (8 presentations per trial of 2 s movie clips depicting 1 of 2 facial identities expressing happiness or anger). Subjects reported changes in the cued dimension by button press (these occurred in 20% of trials) and ignored changes in the uncued dimension. Subjects successfully attended to the cued dimension (mean d' = 2.4 for the cued and d' = -1.9 for the uncued dimension), and sensitivity did not differ across tasks (F(1,10) = 0.19, p > 0.6). Subjects performed 18-20 7-min scans (20 trials per scan in pseudorandom order) across 2 sessions.
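As an illustration of the sensitivity measure, here is a minimal Python sketch of the standard d' computation from hit and false-alarm counts; the counts below are hypothetical and not taken from the study.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a log-linear
        correction so rates of exactly 0 or 1 stay finite."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical counts, chosen only to land near the reported cued d':
    print(d_prime(hits=68, misses=12, false_alarms=6, correct_rejections=74))  # ~2.4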
We built linear classifiers to decode the attended dimension. Face-sensitive areas were defined in separate localizer scans as clusters of voxels responding more strongly to faces than to houses. To determine independently which voxels to include in the analyses, we ran a task localizer in which 10 s of grey screen alternated with 10 s of stimuli plus task. For each area, we selected the 100 voxels whose signal correlated best with the task/no-task alternation. The BOLD signal in these voxels was averaged over 3-21 s of each trial of the main experiment, concatenated across subjects and sessions, and used to build the classifiers.
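A minimal sketch of this voxel-selection step, assuming a (time x voxels) localizer array bold and a binary task/no-task regressor boxcar of matching length; the names and synthetic data below are hypothetical, not the authors' code.

    import numpy as np

    def select_task_voxels(bold, boxcar, n_voxels=100):
        """Return indices of the n voxels whose time course correlates
        best with the task/no-task alternation (Pearson r)."""
        bold_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
        box_z = (boxcar - boxcar.mean()) / boxcar.std()
        r = bold_z.T @ box_z / len(box_z)  # per-voxel correlation with the boxcar
        return np.argsort(r)[-n_voxels:]

    # Synthetic example: 280 volumes, 5000 voxels; ~7 TRs per 10 s block at TR = 1.5 s.
    rng = np.random.default_rng(0)
    bold = rng.standard_normal((280, 5000))
    boxcar = np.tile(np.repeat([0.0, 1.0], 7), 20)
    selected = select_task_voxels(bold, boxcar)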
We found that we could decode the attended dimension from cross-validated data in many visual cortical areas (percentage of correct classifications: FFA 68%, MT 73%, OFA 79%, STS 68%, V1 77%; p < 0.05, permutation test). However, while ventral face-sensitive areas (OFA, FFA) showed a larger BOLD signal during attention-to-identity than during attention-to-expression trials (p < 0.001, t-test), motion-processing areas (MT, STS) showed the opposite effect (p < 0.001, t-test). Our results suggest that attending to expression or identity engages areas involved in stimulus-specific processing of these two dimensions. Moreover, attending to expression, which is encoded in facial motion, recruits motion-processing areas, while attending to face identity activates ventral face-sensitive areas.
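A sketch of the cross-validated decoding and permutation test, using scikit-learn's permutation_test_score as a stand-in for the authors' actual pipeline; X (trial patterns from one area's 100 selected voxels) and y (attended-dimension labels) are random placeholders here, so the printed accuracy will sit near chance.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import permutation_test_score

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 100))   # placeholder trial patterns (trials x voxels)
    y = rng.integers(0, 2, size=200)      # 0 = attend expression, 1 = attend identity

    # Cross-validated accuracy plus a label-shuffling permutation test.
    score, perm_scores, p_value = permutation_test_score(
        LinearSVC(max_iter=10000), X, y, cv=5, n_permutations=1000, random_state=1)
    print(f"accuracy = {score:.2f}, permutation p = {p_value:.3f}")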