
  Optimal integration of facial form and motion during face recognition

Dobs, K., Bülthoff, I., & Reddy, L. (2016). Optimal integration of facial form and motion during face recognition. Poster presented at 16th Annual Meeting of the Vision Sciences Society (VSS 2016), St. Pete Beach, FL, USA.

Basic
Item Permalink: http://hdl.handle.net/21.11116/0000-0000-7B2E-4
Version Permalink: http://hdl.handle.net/21.11116/0000-0005-9B0B-1
Genre: Poster

Files


Locators

Locator: Link (Any fulltext)
Description: -

Creators

Creators:
Dobs, K. (1, 2, 3), Author
Bülthoff, I. (1, 2, 3), Author
Reddy, L., Author
Affiliations:
1 Project group: Recognition & Categorization, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_2528707
2 Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
3 Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794

Content

Free keywords: -
Abstract: Integration of multiple sensory cues pertaining to the same object is essential for precise and accurate perception. The optimal strategy to estimate an object's property is to weight sensory cues proportional to their relative reliability (i.e., the inverse of the variance). Recent studies showed that human observers apply this strategy when integrating low-level unisensory and multisensory signals, but evidence for high-level perception remains scarce. Here we asked whether human observers optimally integrate high-level visual cues in a socially critical task, namely the recognition of a face. We therefore had subjects identify one of two previously learned synthetic facial identities ("Laura" and "Susan") using facial form and motion. Five subjects performed a 2AFC identification task (i.e., "Laura or Susan?") based on dynamic face stimuli that systematically varied in the amount of form and motion information they contained about each identity (10 morph steps from Laura to Susan). In single-cue conditions, one cue (e.g., form) was varied while the other (e.g., motion) was kept uninformative (50 morph). In the combined-cue condition, both cues varied by the same amount. To assess whether subjects weight facial form and motion proportional to their reliability, we also introduced cue-conflict conditions in which both cues were varied but separated by a small conflict (±10). We fitted psychometric functions to the proportion of "Susan" choices pooled across subjects (fixed-effects analysis) for each condition. As predicted by optimal cue integration, the empirical combined variance was lower than the single-cue variances (p < 0.001, bootstrap test) and did not differ from the optimal combined variance (p > 0.5). Moreover, no difference was found between empirical and optimal form and motion weights (p > 0.5). Our data thus suggest that humans integrate high-level visual cues, such as facial form and motion, proportional to their reliability to yield a coherent percept of a facial identity.
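The optimal-integration model the abstract tests is the standard maximum-likelihood cue-combination rule: each cue is weighted by its relative reliability (inverse variance), and the predicted combined variance is the product of the single-cue variances over their sum. A minimal sketch of that arithmetic, with hypothetical single-cue estimates (this is not the authors' analysis code):

```python
# Standard maximum-likelihood (inverse-variance-weighted) cue integration.
# The numeric inputs below are hypothetical illustrations, in morph units.

def optimal_integration(mu_form, var_form, mu_motion, var_motion):
    """Combine two cue estimates by weighting each with its relative reliability."""
    w_form = (1 / var_form) / (1 / var_form + 1 / var_motion)
    w_motion = 1 - w_form
    mu_combined = w_form * mu_form + w_motion * mu_motion
    # Predicted combined variance: never larger than the smaller single-cue variance.
    var_combined = (var_form * var_motion) / (var_form + var_motion)
    return mu_combined, var_combined, w_form, w_motion

# Form is the more reliable cue here (variance 1.0 vs. 3.0), so it gets weight 0.75.
mu, var, w_form, w_motion = optimal_integration(4.0, 1.0, 6.0, 3.0)
# -> mu = 4.5, var = 0.75, w_form = 0.75, w_motion = 0.25
```

The key empirical signatures checked in the poster follow directly from this rule: the combined variance (0.75 here) falls below both single-cue variances, and the cue-conflict conditions reveal whether subjects' empirical weights match the predicted inverse-variance weights.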

Details

Language(s): -
Dates: 2016-08
Publication Status: Published in print
Pages: -
Publishing info: -
Table of Contents: -
Rev. Method: -
Identifiers: DOI: 10.1167/16.12.925
BibTeX Citekey: DobsBR2016
Degree: -

Event

Title: 16th Annual Meeting of the Vision Sciences Society (VSS 2016)
Place of Event: St. Pete Beach, FL, USA
Start-/End Date: 2016-05-13 - 2016-05-18

Source 1

Title: Journal of Vision
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: Charlottesville, VA : Scholar One, Inc.
Pages: -
Volume / Issue: 16 (12)
Sequence Number: -
Start / End Page: 925
Identifier: ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050