Talk

Pixel-based versus correspondence-based representations of human faces: Implications for sex discrimination

MPS-Authors

Troje, NF
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society


Vetter, T
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Troje, N., & Vetter, T. (1996). Pixel-based versus correspondence-based representations of human faces: Implications for sex discrimination. Talk presented at 19th European Conference on Visual Perception. Strasbourg, France.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-EB3E-5
Abstract
In human perception, as well as in machine vision, a crucial step in solving any object recognition task is an appropriate description of the object class under consideration. We emphasise this issue for the object class 'human faces'. We discuss different representations that can be characterised by the degree of alignment they establish between the images. The representations used span the whole range between a purely pixel-based image representation and a sophisticated model-based representation derived from the pixel-to-pixel correspondence between the faces [Vetter and Troje, 1995, in Mustererkennung Eds G Sagerer, S Posch, F Kummert (Berlin: Springer)]. The usefulness of these representations for sex classification was compared. This was done by first applying a Karhunen-Loève transformation to each representation to orthogonalise the data; a linear classifier was then trained by means of a gradient-descent procedure. The classification error in a completely cross-validated simulation ranged from 15% for the simplest version of the pixel-based representation to 2.5% for the correspondence-based representation. However, even with intermediate representations very good performance was achieved.
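
The pipeline described in the abstract (an orthogonalising Karhunen-Loève / PCA projection of vectorised face representations, a linear classifier trained by gradient descent, and fully cross-validated evaluation) can be illustrated in a few lines of code. The following is a minimal sketch, not the authors' implementation: it uses synthetic stand-in data, an assumed 10-dimensional KL subspace, and leave-one-out cross-validation as one possible "completely cross-validated" scheme.

```python
# Minimal illustrative sketch (not the authors' code): Karhunen-Loeve (PCA)
# projection of vectorised face data, a linear classifier trained by gradient
# descent, and leave-one-out cross-validation on synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for vectorised face representations: 40 "faces",
# 256-dimensional, with a weak label-dependent offset (labels 0/1 for sex).
n, d = 40, 256
labels = np.array([0] * 20 + [1] * 20)
faces = rng.normal(size=(n, d)) + 0.5 * labels[:, None] * rng.normal(size=d)

def kl_transform(X, k):
    """Project data onto the first k principal components (KL basis)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data yields the orthogonal KL/PCA basis vectors.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T, mean, vt[:k]

def train_linear(X, y, lr=0.1, epochs=500):
    """Train a linear (logistic) classifier by plain gradient descent."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # sigmoid outputs
        w -= lr * Xb.T @ (p - y) / len(y)          # gradient of the log-loss
    return w

# Leave-one-out cross-validation: the held-out face never enters the KL basis.
errors = 0
for i in range(n):
    train = np.delete(np.arange(n), i)
    Z, mean, basis = kl_transform(faces[train], k=10)
    w = train_linear(Z, labels[train])
    z_test = (faces[i] - mean) @ basis.T
    pred = (1.0 / (1.0 + np.exp(-(np.append(z_test, 1.0) @ w)))) > 0.5
    errors += int(pred != labels[i])

print(f"leave-one-out classification error: {100 * errors / n:.1f}%")
```

The reported comparison between representations corresponds, in this sketch, to swapping the input matrix (pixel-based, intermediate, or correspondence-based vectors) while keeping the KL projection, classifier, and cross-validation fixed.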