
Released

Report

Face Recognition across Large Viewpoint Changes

MPS-Authors

O'Toole, AJ
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Troje, NF
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Vetter, T
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Fulltext (public)

MPIK-TR-9.pdf (Publisher version), 270 KB

Citation

O'Toole, A., Bülthoff, H., Troje, N., & Vetter, T. (1995). Face Recognition across Large Viewpoint Changes (9). Tübingen, Germany: Max Planck Institute for Biological Cybernetics.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-ECC6-B
Abstract
We describe a computational model of face recognition that makes use of the overlapping texture and shape information visible in different views of faces. The model operates on view-dependent data from three-dimensional laser scans of human heads, which provide three-dimensional surface data as well as surface image detail in the form of a texture map. View-dependent information from these surface and texture representations was registered onto separate three-dimensional head models. We used an auto-associative memory model as a pattern-completion device to fill in parts of the head from a learned view when a test view with partially overlapping information was used as a memory key. We show that the overlapping visible regions of heads, for both surface and texture data, can support accurate recognition even with pose differences of as much as 90 degrees (full-face to profile view) between the learning and test views.
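The pattern-completion idea in the abstract can be illustrated with a minimal linear auto-associative memory. This sketch is not the report's actual model or data: the stored "views" are random vectors standing in for the registered shape/texture data, and the memory is a simple linear projector, which is one standard way to realize auto-associative completion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned views: each column is one stored
# pattern vector.  In the report these would be registered surface or
# texture data from the laser scans; random vectors are used here
# purely for illustration.
n_dims, n_patterns = 50, 3
patterns = rng.standard_normal((n_dims, n_patterns))

# A linear auto-associative memory: W is the orthogonal projector onto
# the subspace spanned by the stored patterns, so W @ p recovers any
# stored pattern p exactly.
W = patterns @ np.linalg.pinv(patterns)

# Memory key: one stored pattern with half of its entries missing,
# mimicking a test view that only partially overlaps the learned view.
target = patterns[:, 0]
known = np.arange(n_dims) < n_dims // 2
cue = np.where(known, target, 0.0)

# Pattern completion: repeatedly project into the memory subspace while
# clamping the entries the memory key actually provides (alternating
# projections onto the memory subspace and the known-entry constraint).
x = cue.copy()
for _ in range(500):
    x = W @ x
    x[known] = cue[known]

completion_error = np.abs(x - target).max()
```

Because the clamped entries and the memory subspace are both satisfied only by the stored pattern (generically, for a few patterns in a high-dimensional space), the alternating-projection loop converges to it, filling in the missing half of the vector from the learned view.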