
Record


Released

Meeting Abstract

Pixel-based versus correspondence-based representations of human faces: Implications for sex discrimination

MPG Authors

Troje, NF
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Vetter, T
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Troje, N., & Vetter, T. (1996). Pixel-based versus correspondence-based representations of human faces: Implications for sex discrimination. Perception, 25(ECVP Abstract Supplement), 52.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-EB3E-5
Abstract
In human perception, as well as in machine vision, a crucial step in solving any object recognition task is an appropriate description of the object class under consideration. We emphasise this issue when considering the object class 'human faces'. We discuss different representations that can be characterised by the degree of alignment between the images they provide. The representations used span the whole range between a purely pixel-based image representation and a sophisticated model-based representation derived from the pixel-to-pixel correspondence between the faces [Vetter and Troje, 1995, in Mustererkennung Eds G Sagerer, S Posch, F Kummert (Berlin: Springer)]. The usefulness of these representations for sex classification was compared. This was done by first applying a Karhunen-Loève transformation to the representation to orthogonalise the data. A linear classifier was then trained by means of a gradient-descent procedure. The classification error in a completely cross-validated simulation ranged from 15% for the simplest version of the pixel-based representation to 2.5% for the correspondence-based representation. However, very good performance was achieved even with intermediate representations.
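As a rough sketch of the pipeline the abstract describes (Karhunen-Loève transformation to orthogonalise the data, then a linear classifier trained by gradient descent), the following may help. The data are synthetic stand-ins for face vectors, and the logistic loss is an illustrative assumption, since the abstract does not specify the training criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 40 flattened "face" vectors, 20 per class,
# with a small class-dependent offset (the original used real face images).
n, d = 40, 100
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)        # binary sex labels (assignment arbitrary)
X[y == 1] += 0.5

# Karhunen-Loeve transformation (PCA): project the centred data onto its
# principal components, yielding decorrelated (orthogonalised) coefficients.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt.T

# Linear classifier trained by gradient descent
# (logistic regression here, as an assumed choice of loss).
w = np.zeros(Z.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # sigmoid outputs
    grad_w = Z.T @ (p - y) / n               # gradient of mean log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

err = np.mean(((Z @ w + b) > 0).astype(int) != y)
print(f"training error: {err:.2%}")
```

The study itself reported fully cross-validated error; a faithful replication would hold out each face in turn rather than evaluate on the training set as this sketch does.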