Perceptual integration of kinematic components for the recognition of emotional facial expressions

Chiovetto, E., Curio, C., Endres, D., & Giese, M. (2014). Perceptual integration of kinematic components for the recognition of emotional facial expressions. Journal of Vision, 14(10), 205.

Basic

Genre: Meeting Abstract

Files

Locators

Description: -
OA-Status: -

Creators

Creators:
Chiovetto, E., Author
Curio, C. 1, 2, 3, Author
Endres, D., Author
Giese, M. A., Author

Affiliations:
1 Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794
3 Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_2528702

Content

Free keywords: -

Abstract: There is evidence both in motor control (Flash and Hochner, 2005; Chiovetto and Giese, 2013) and in the study of the perception of facial expressions (Ekman and Friesen, 1978) that complex movements can be decomposed into simpler basic components (usually referred to as 'movement primitives' or 'action units'). However, such components have rarely been investigated in the context of dynamic facial movements (as opposed to static pictures of faces).

METHODS. By applying dimensionality reduction methods (NMF and anechoic demixing), we identified spatio-temporal components that capture the major part of the variance of dynamic facial expressions, where the motion was parameterized using a 3D facial animation system (Curio et al., 2006). We generated stimuli with varying information content of the identified components and investigated how many components are minimally required to attain a natural appearance (Turing test). In addition, we investigated how perception integrates these components, using expression classification and expressiveness rating tasks. The best trade-off between model complexity and approximation quality was determined by Bayesian inference and compared to the human data. In addition, we developed a Bayesian cue fusion model that correctly accounts for the data.

RESULTS. For anechoic mixing models, only two components were sufficient to reconstruct three facial expressions with an accuracy that made them perceptually indistinguishable from the original expressions. A simple Bayesian cue fusion model provides a good fit of the data on the integration of information conveyed by the different movement components.

References:
Chiovetto E, Giese MA. PLoS One 2013; 8(11): e79555. doi: 10.1371/journal.pone.0079555.
Curio C, Breidt M, Kleiner M, Vuong QC, Giese MA, Bülthoff HH. Applied Perception in Graphics and Visualization 2006: 77-84.
Ekman P, Friesen W. Consulting Psychologists Press, Palo Alto, 1978.
Flash T, Hochner B. Curr Opin Neurobiol 2005; 15(6): 660-666.
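The NMF-based decomposition described in the abstract can be illustrated with a minimal sketch (not the authors' actual pipeline): a facial-motion matrix of frames x animation parameters is factored into a small number of non-negative spatio-temporal components. The data below are synthetic stand-ins; real input would come from a 3D facial animation system.

```python
# Minimal sketch, assuming motion is stored as a frames x parameters matrix.
# Synthetic data only; not the authors' recordings or code.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic "expression" data: 200 frames x 30 animation parameters,
# built from 2 ground-truth non-negative components plus small noise.
t = np.linspace(0, 1, 200)
temporal = np.stack([np.sin(np.pi * t), t ** 2]).T        # 200 x 2 activations
spatial = rng.random((2, 30))                             # 2 x 30 component maps
X = temporal @ spatial + 0.01 * rng.random((200, 30))     # 200 x 30 data matrix

# Factor X ~= W @ H with 2 components, mirroring the finding that
# two components sufficed to reconstruct the expressions.
model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)   # temporal activations, 200 x 2
H = model.components_        # spatial component maps, 2 x 30

reconstruction = W @ H
explained = 1 - np.linalg.norm(X - reconstruction) ** 2 / np.linalg.norm(X) ** 2
print(f"variance explained by 2 components: {explained:.3f}")
```

Because the synthetic matrix is nearly rank-2 and non-negative, the two-component factorization recovers almost all of its variance; stimuli with reduced information content could then be generated by zeroing or rescaling rows of H.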
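The Bayesian cue fusion idea can likewise be sketched in its simplest form (an assumption for illustration, not the authors' fitted model): two independent Gaussian cues, e.g. evidence about an expression carried by two movement components, combine into a precision-weighted average whose precision is the sum of the individual precisions.

```python
# Minimal Gaussian cue-fusion sketch; illustrative only, not the fitted model.

def fuse_gaussian_cues(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian cues given as (mean, variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2     # precisions of the two cues
    var = 1.0 / (w1 + w2)               # fused variance: precisions add
    mu = var * (w1 * mu1 + w2 * mu2)    # precision-weighted mean
    return mu, var

# Example: the more reliable cue (lower variance) dominates the estimate.
mu, var = fuse_gaussian_cues(1.0, 0.1, 3.0, 0.9)
print(mu, var)
```

The fused mean lands much closer to the low-variance cue, which is the qualitative behavior a cue fusion account predicts when one movement component carries more reliable expression information than another.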

Details

Language(s): -
Dates: 2014-08
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1167/14.10.205
BibTex Citekey: ChiovettoCEG2014
Degree: -

Event

Title: 14th Annual Meeting of the Vision Sciences Society (VSS 2014)
Place of Event: St. Pete Beach, FL, USA
Start-/End Date: 2014-05-16 - 2014-05-21

Source 1

Title: Journal of Vision
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: Charlottesville, VA : Scholar One, Inc.
Pages: -
Volume / Issue: 14 (10)
Sequence Number: -
Start / End Page: 205
Identifier: ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050