  Quantifying human sensitivity to spatio-temporal information in dynamic faces

Dobs, K., Bülthoff, I., Breidt, M., Vuong, Q., Curio, C., & Schultz, J. (2014). Quantifying human sensitivity to spatio-temporal information in dynamic faces. Vision Research, 100, 78-87. doi:10.1016/j.visres.2014.04.009.


Creators

Creators:
Dobs, K. (1, 2, 3), Author
Bülthoff, I. (1, 2), Author
Breidt, M. (1, 2, 3), Author
Vuong, Q. C., Author
Curio, C. (1, 2, 3), Author
Schultz, J. (1, 2), Author
Affiliations:
1. Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_1497797)
2. Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE (ou_1497794)
3. Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE (ou_2528702)

Content

Free keywords: -
Abstract: A great deal of perceptual and social information is conveyed by facial motion. Here, we investigated observers' sensitivity to the complex spatio-temporal information in facial expressions and what cues they use to judge the similarity of these movements. We motion-captured four facial expressions and decomposed them into time courses of semantically meaningful local facial actions (e.g., eyebrow raise). We then generated approximations of the time courses which differed in the amount of information about the natural facial motion they contained, and used these and the original time courses to animate an avatar head. Observers chose which of two animations based on approximations was more similar to the animation based on the original time course. We found that observers preferred animations containing more information about the natural facial motion dynamics. To explain observers' similarity judgments, we developed and used several measures of objective stimulus similarity. The time course of facial actions (e.g., onset and peak of eyebrow raise) explained observers' behavioral choices better than image-based measures (e.g., optic flow). Our results thus revealed observers' sensitivity to changes of natural facial dynamics. Importantly, our method allows a quantitative explanation of the perceived similarity of dynamic facial expressions, which suggests that sparse but meaningful spatio-temporal cues are used to process facial motion.
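
Note: The abstract refers to objective measures of stimulus similarity computed on facial-action time courses (e.g., onset and peak of an eyebrow raise) and compared against image-based measures such as optic flow. As a purely illustrative sketch in Python (not the authors' code), one such measure could be the root-mean-square error between an original and an approximated time course; the function name, array shapes, and example data below are assumptions for illustration only.

import numpy as np

def time_course_rmse(original, approximation):
    # Root-mean-square error between two facial-action time courses.
    # Both arrays are assumed to have shape (n_frames, n_actions), e.g. the
    # per-frame activation of local actions such as an eyebrow raise.
    original = np.asarray(original, dtype=float)
    approximation = np.asarray(approximation, dtype=float)
    return float(np.sqrt(np.mean((original - approximation) ** 2)))

# Hypothetical example: a noisy eyebrow-raise ramp vs. its smooth approximation.
rng = np.random.default_rng(0)
frames = np.linspace(0.0, 1.0, 100)[:, None]      # (100 frames, 1 action)
original = frames + 0.05 * rng.standard_normal(frames.shape)
approximation = frames
print(time_course_rmse(original, approximation))   # small value -> similar motion

In the study itself, several such objective measures were compared for how well they predicted observers' two-alternative choices; the snippet only illustrates the general idea of scoring similarity on action time courses.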

Details

Language(s):
 Dates: 2014-07
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.1016/j.visres.2014.04.009
BibTeX Citekey: DobsBBVCS2014
 Degree: -

Source 1

Title: Vision Research
Other: Vision Res.
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: Amsterdam : Pergamon
Pages: -
Volume / Issue: 100
Sequence Number: -
Start / End Page: 78 - 87
Identifier: ISSN: 0042-6989
CoNE: https://pure.mpg.de/cone/journals/resource/954925451842