  High-level after-effects in the recognition of dynamic facial expressions

Curio, C., Giese, M., Breidt, M., Kleiner, M., & Bülthoff, H. (2007). High-level after-effects in the recognition of dynamic facial expressions. Poster presented at 7th Annual Meeting of the Vision Sciences Society (VSS 2007), Sarasota, FL, USA.

Creators

Creators:
Curio, C. (1, 2, 3), Author
Giese, M. A., Author
Breidt, M. (1, 2, 3), Author
Kleiner, M. (1, 2, 3), Author
Bülthoff, H. H. (1, 2), Author
Affiliations:
1Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797              
2Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794              
3Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_2528702              

Content

Abstract: Strong high-level after-effects have been reported for the recognition of static faces (Webster et al. 1999; Leopold et al. 2001): presentation of static ‘anti-faces’ temporarily biases the perception of neutral test faces towards specific identities or facial expressions. Recent experiments have also demonstrated high-level after-effects for point-light walkers, resulting in shifts of perceived gender. Our study presents the first results on after-effects for dynamic facial expressions. In particular, we investigated how such after-effects depend on facial identity and on dynamic vs. static adapting stimuli.

STIMULI: Stimuli were generated with a 3D morphable model for facial expressions based on laser scans. The 3D model is driven by facial motion-capture data recorded with a VICON system. We recorded two facial expressions (Disgust and Happy) from an amateur actor. To create ‘dynamic anti-expressions’, the motion data were projected onto a basis of 17 facial action units. These units were parameterized by motion data obtained from specially trained actors capable of executing individual action units according to FACS (Ekman 1978). Anti-expressions were obtained by inverting the vectors in this linear projection space.
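The anti-expression construction above amounts to a projection onto a linear basis followed by a sign flip of the coefficients. A minimal sketch, assuming one frame of motion data is a flat vector of marker displacements and the action-unit basis is a matrix whose 17 columns are unit displacement fields (all names, dimensions, and random data here are illustrative assumptions, not taken from the poster):

```python
import numpy as np

rng = np.random.default_rng(0)
n_markers = 60                                    # assumed number of mocap markers
basis = rng.standard_normal((3 * n_markers, 17))  # assumed action-unit basis (17 columns)
frame = rng.standard_normal(3 * n_markers)        # one frame of motion data (assumed)

# Project the frame onto the action-unit basis: least-squares coefficients
# give the expression's coordinates in the linear projection space.
coeffs, *_ = np.linalg.lstsq(basis, frame, rcond=None)

# Inverting the coefficient vector yields the 'anti-expression' frame.
anti_frame = basis @ (-coeffs)

# Sanity check: the anti-expression is the negation of the reconstructed
# expression within the span of the basis.
expr_frame = basis @ coeffs
assert np.allclose(anti_frame, -expr_frame)
```

Applied frame by frame to the motion-capture sequence, this would produce a dynamic anti-expression of the same duration as the original.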

METHOD: After determining a baseline performance for expression recognition, participants were adapted with dynamic anti-expressions or with static adapting stimuli (extreme keyframes of the same duration), followed by an expression-recognition test. Test stimuli were Disgust and Happy with strongly reduced expression strength (corresponding to vectors of reduced length in the linear projection space). Adaptation and test stimuli were derived from faces with the same or different identities.
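The reduced-strength test stimuli correspond to scaling the expression's coefficient vector toward zero. A tiny sketch (the coefficient values and the scale factor are illustrative assumptions, not values from the poster):

```python
import numpy as np

coeffs = np.array([0.8, -0.3, 0.5])   # illustrative action-unit coefficients
strength = 0.2                        # assumed reduction factor (0 < s < 1)
weak_coeffs = strength * coeffs       # reduced-strength test expression

# Scaling preserves the vector's direction (which expression it is) and
# shrinks only its length (the expression strength).
assert np.allclose(np.linalg.norm(weak_coeffs),
                   strength * np.linalg.norm(coeffs))
```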

RESULTS: Adaptation with dynamic anti-expressions resulted in selective after-effects: increased recognition for matching test stimuli (p < 0.05, N = 13). Adaptation effects were significantly reduced for static adapting stimuli and for different identities of adapting and test faces. This suggests identity-specific neural representations of dynamic facial expressions.

Details

Dates: 2007-06
Publication Status: Issued
Identifiers: DOI: 10.1167/7.9.994
BibTex Citekey: 4815

Event

Title: 7th Annual Meeting of the Vision Sciences Society (VSS 2007)
Place of Event: Sarasota, FL, USA
Start-/End Date: 2007-05-11 - 2007-05-16

Source 1

Title: Journal of Vision
Source Genre: Journal
Publ. Info: Charlottesville, VA : Scholar One, Inc.
Volume / Issue: 7 (9)
Start / End Page: 994
Identifier: ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050