
Released

Poster

Quantifying Human Sensitivity to Spatio-Temporal Information in Dynamic Faces

MPS-Authors
Dobs, K
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bülthoff, I
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Breidt, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Curio, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource

Link (Any fulltext):
https://f1000research.com/posters/1094412

Citation

Dobs, K., Bülthoff, I., Breidt, M., Vuong, Q., Curio, C., & Schultz, J. (2013). Quantifying Human Sensitivity to Spatio-Temporal Information in Dynamic Faces. Poster presented at the 36th European Conference on Visual Perception (ECVP 2013), Bremen, Germany.


Cite as: https://hdl.handle.net/21.11116/0000-0001-4E6B-1
Abstract
A great deal of social information is conveyed by facial motion. However, understanding how observers use the natural timing and intensity information conveyed by facial motion is difficult because of the complexity of these motion cues. Here, we systematically manipulated animations of facial expressions to investigate observers' sensitivity to changes in facial motion. We filmed and motion-captured four facial expressions and decomposed each expression into time courses of semantically meaningful local facial actions (e.g., eyebrow raise). These time courses were used to animate a 3D head model with either the original time courses or approximations of them. We then tested observers' perceptual sensitivity to these changes using matching-to-sample tasks. When viewing two animations (original vs. approximation), observers chose the original animation as more similar to the video of the expression. In a second experiment, we used several measures of stimulus similarity to explain observers' choice of which of two different approximations was more similar to the original animation. We found that high-level cues about spatio-temporal characteristics of facial motion (e.g., onset and peak of an eyebrow raise) best explained observers' choices. Our results demonstrate the usefulness of our method and, importantly, reveal observers' sensitivity to natural facial dynamics.
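
To illustrate the kind of comparison the abstract describes, the following minimal sketch extracts high-level spatio-temporal features (onset and peak) from a single facial-action activation time course and compares an original with an approximation in that feature space, rather than frame by frame. This is only an illustration under the assumption of simple 1D activation curves; all function names, thresholds, and the feature set are hypothetical and are not the authors' actual analysis.

    # Hypothetical sketch: compare a facial-action time course (e.g., eyebrow
    # raise) with an approximation using high-level features (onset, peak)
    # instead of raw frame-by-frame differences. All names are illustrative.
    import numpy as np

    def onset_and_peak(tc, threshold=0.1):
        """Return (onset_frame, peak_frame, peak_value) of one activation curve."""
        peak_frame = int(np.argmax(tc))
        peak_value = float(tc[peak_frame])
        # Onset: first frame where activation exceeds a fraction of the peak.
        above = np.nonzero(tc >= threshold * peak_value)[0]
        onset_frame = int(above[0]) if above.size else 0
        return onset_frame, peak_frame, peak_value

    def highlevel_distance(tc_a, tc_b):
        """Distance between two curves in (onset, peak time, peak amplitude) space."""
        fa = np.array(onset_and_peak(tc_a), dtype=float)
        fb = np.array(onset_and_peak(tc_b), dtype=float)
        return float(np.linalg.norm(fa - fb))

    # Example: a sharp 'eyebrow raise' over 60 frames vs. a later, broader
    # approximation of it (both synthetic Gaussian activation profiles).
    t = np.linspace(0.0, 1.0, 60)
    original = np.exp(-((t - 0.4) / 0.1) ** 2)
    approximation = np.exp(-((t - 0.5) / 0.2) ** 2)
    print(highlevel_distance(original, approximation))

A similarity measure built on such features shifts the comparison toward when and how strongly a facial action unfolds, which is the kind of high-level spatio-temporal cue the abstract reports as best explaining observers' choices.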