
Released

Poster

Spatio-temporal Caricatures of Facial Motion

MPG Authors

Knappmeyer,  B
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources on record
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Knappmeyer, B., Giese, M., Ilg, W., & Bülthoff, H. (2003). Spatio-temporal Caricatures of Facial Motion. Poster presented at 6. Tübinger Wahrnehmungskonferenz (TWK 2003), Tübingen, Germany.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-DD20-5
Abstract
It is well established that there is a recognition advantage for slightly caricatured versions of static pictures of faces (e.g., Rhodes et al., 1987, Cognitive Psychology, 473-497; Benson & Perrett, 1994, Perception, 75-93). Recently, similar caricature effects have been shown using temporal or spatial exaggerations of complex body movements (point-light displays) (Hill & Pollick, 2000, Psychological Science, 223-228; Pollick et al., 2001, Perception, 323-338). Here, we generated spatio-temporal caricatures of facial movements using a motion morphing technique developed by Giese & Poggio (2000, International Journal of Computer Vision, 59-73) to investigate whether identification from facial motion can be improved by caricaturing. The motion caricaturing was accomplished using hierarchical spatio-temporal morphable models (HSTMM). This technique represents complex motion sequences by linear combinations of learned prototypical movement elements. Facial motion trajectories of 72 reflecting markers were obtained using a commercial 3D motion capture system (VICON). These original trajectories and the morphed or exaggerated versions were applied to photo-realistic head models (Blanz & Vetter, 1999, SIGGRAPH: 187-194) using a commercial face animation software (famous3D Pty. Ltd.). In a first experiment, which employed motion data captured from 2D videos, we tested the quality of this linear combination technique. Naturalness ratings from 7 observers were obtained. They had to rate an average-shaped head model, which was animated with three classes of motion trajectories: 1) original motion capture data, 2) approximations of the trajectories by the linear combination model, and 3) morphs between facial movement sequences of two different individuals. We found that the approximations were perceived to be as natural as the originals. Unexpectedly, the morphs were perceived as even more natural (t(6)=4.6, p<.01) than the original trajectories and their approximations. This might reflect the fact that the morphs tend to average out extreme movements. In a second experiment, 14 observers had to distinguish between characteristic facial movements of two individuals applied to a face with average shape. The movements were presented with three different caricature levels (100%, 125%, 150%). We found a significant caricature effect: 150% caricatures were recognized better than the non-caricatured patterns (t(13)=2.5, p<.05). This result suggests that spatio-temporal exaggeration improves the recognition of identity from facial movements. We are currently investigating whether this result generalizes to the 3D motion data and to different types of facial motion (e.g., rigid head motion versus non-rigid deformation of the face).
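
The caricature levels mentioned in the abstract (100%, 125%, 150%) can be read as linear extrapolation of an individual's motion away from a prototype. The following is a minimal sketch of that idea only; it assumes marker trajectories stored as NumPy arrays of shape (frames, markers, 3), and the function name, the simple average prototype, and the random stand-in data are illustrative assumptions, not the HSTMM technique of Giese & Poggio (2000) used in the poster.

# Illustrative sketch: a spatio-temporal caricature as linear extrapolation
# away from a prototype trajectory. Caricature level 1.0 returns the
# original motion; 1.25 and 1.5 exaggerate the individual's deviation.
import numpy as np

def caricature(individual, prototype, level):
    """Exaggerate 'individual' motion relative to 'prototype'.

    individual, prototype : arrays of shape (frames, markers, 3),
                            assumed to be time-aligned marker trajectories
    level                 : caricature level, e.g. 1.0, 1.25, or 1.5
    """
    return prototype + level * (individual - prototype)

# Hypothetical usage with random stand-in data (200 frames, 72 markers)
rng = np.random.default_rng(0)
prototype = rng.normal(size=(200, 72, 3))                       # average motion
individual = prototype + rng.normal(scale=0.1, size=(200, 72, 3))

original = caricature(individual, prototype, 1.00)   # 100% (unchanged)
moderate = caricature(individual, prototype, 1.25)   # 125%
strong   = caricature(individual, prototype, 1.50)   # 150%

In this simplified form, the prototype would stand in for the learned prototypical movement elements of the HSTMM; the actual technique represents a sequence as a linear combination of several such prototypes after spatio-temporal alignment.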