  Quantifying Human Sensitivity to Spatio-Temporal Information in Dynamic Faces

Dobs, K., Bülthoff, I., Breidt, M., Vuong, Q., Curio, C., & Schultz, J. (2013). Quantifying Human Sensitivity to Spatio-Temporal Information in Dynamic Faces. Poster presented at the 36th European Conference on Visual Perception (ECVP 2013), Bremen, Germany.

Item Permalink: http://hdl.handle.net/21.11116/0000-0001-4E6B-1
Version Permalink: http://hdl.handle.net/21.11116/0000-0003-18CC-D
Genre: Poster



Creators

Creators:
Dobs, K. (1, 2, 3), Author
Bülthoff, I. (1, 2), Author
Breidt, M. (1, 2, 3), Author
Vuong, Q. C., Author
Curio, C. (1, 2, 3), Author
Schultz, J. W., Author

Affiliations:
1. Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2. Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794
3. Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_2528702

Content

Free keywords: -
Abstract: A great deal of social information is conveyed by facial motion. However, understanding how observers use the natural timing and intensity information conveyed by facial motion is difficult because of the complexity of these motion cues. Here, we systematically manipulated animations of facial expressions to investigate observers' sensitivity to changes in facial motion. We filmed and motion-captured four facial expressions and decomposed each expression into time courses of semantically meaningful local facial actions (e.g., eyebrow raise). These time courses were used to animate a 3D head model with either the original time courses or approximations of them. We then tested observers' perceptual sensitivity to these changes using matching-to-sample tasks. When viewing two animations (original vs. approximation), observers chose original animations as most similar to the video of the expression. In a second experiment, we used several measures of stimulus similarity to explain observers' choice of which approximation was most similar to the original animation when viewing two different approximations. We found that high-level cues about spatio-temporal characteristics of facial motion (e.g., onset and peak of eyebrow raise) best explained observers' choices. Our results demonstrate the usefulness of our method and, importantly, reveal observers' sensitivity to natural facial dynamics.

Details

Language(s):
 Dates: 2013-08
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Method: -
 Identifiers: DOI: 10.1177/03010066130420S101
BibTex Citekey: DobsBBVCS2013
 Degree: -

Event

Title: 36th European Conference on Visual Perception (ECVP 2013)
Place of Event: Bremen, Germany
Start-/End Date: -

Source 1

Title: Perception
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: London : Pion Ltd.
Pages: -
Volume / Issue: 42 (ECVP Abstract Supplement)
Sequence Number: -
Start / End Page: 197
Identifier: ISSN: 0301-0066
CoNE: https://pure.mpg.de/cone/journals/resource/954925509369