
Released

Poster

Classifying faces by sex is more accurate with 3D shape information than with texture

MPS-Authors
O'Toole, AJ
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Vetter, T
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Troje, NF
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)

ARVO-1996-OToole.pdf
(Any fulltext), 24KB

Citation

O'Toole, A., Vetter, T., Troje, N., & Bülthoff, H. (1996). Classifying faces by sex is more accurate with 3D shape information than with texture. Poster presented at Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1996), Fort Lauderdale, FL, USA.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-EB94-3
Abstract
Purpose: We compared the quality of information available in 3D surface models versus texture maps for classifying human faces by sex.

Methods: 3D surface models and texture maps from laser scans of 130 human heads (65 male, 65 female) were analyzed with separate principal component analyses (PCAs). Individual principal components (PCs) from the 3D head data characterized complex structural differences between male and female heads. Likewise, individual PCs in the texture analysis contrasted characteristically male vs. female texture patterns (e.g., presence/absence of facial hair shadowing). More formally, representing faces by only their projection coefficients onto the PCs, and varying the subspace from 1 to 50 dimensions, we trained a series of perceptrons to predict the sex of the faces using either the 3D or texture data. A "leave-one-out" technique was applied to measure the generalizability of the perceptrons' sex predictions.

Results: While very good sex generalization performance was obtained for both representations, even with very low-dimensional subspaces (e.g., 76.1% correct with only one 3D projection coefficient), the 3D data supported more accurate sex classification across nearly the entire range of subspaces tested. For texture, 93.8% correct sex generalization was achieved with a minimum subspace of 20 projection coefficients. For the 3D data, 96.9% correct generalization was achieved with 17 projection coefficients.

Conclusions: These data highlight the importance of considering the kinds of information available in different face representations with respect to the task demands.
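
As a rough illustration of the pipeline the abstract describes (a PCA-based face representation, a perceptron classifier, and leave-one-out evaluation over subspaces of varying dimensionality), the Python sketch below uses scikit-learn stand-ins. The synthetic faces array, its dimensionality, the 0/1 label encoding, and the specific PCA and Perceptron implementations are illustrative assumptions, not the original analysis code.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Perceptron
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_faces = 130
# Stand-in for flattened 3D surface or texture-map data from the scans.
faces = rng.normal(size=(n_faces, 4096))
sex = np.repeat([0, 1], n_faces // 2)  # 65 male, 65 female labels

def loo_accuracy(X, y, k):
    """Leave-one-out sex-classification accuracy in a k-dimensional PC subspace."""
    correct = 0.0
    for train, test in LeaveOneOut().split(X):
        # Fit the PCA on the 129 training faces only, so the held-out
        # face never influences the subspace it is projected into.
        pca = PCA(n_components=k).fit(X[train])
        clf = Perceptron().fit(pca.transform(X[train]), y[train])
        correct += clf.score(pca.transform(X[test]), y[test])
    return correct / len(y)

# Vary the subspace dimensionality, as in the poster (1 to 50 PCs).
for k in (1, 17, 20, 50):
    print(f"{k:2d} PCs: {loo_accuracy(faces, sex, k):.1%} correct")

With the random stand-in data, accuracy will hover near chance; the reported figures (e.g., 96.9% correct with 17 three-dimensional projection coefficients) come from the real laser-scan data. Fitting the PCA inside each leave-one-out fold, rather than once on all 130 faces, is what keeps the generalization estimate honest.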