Record


Released

Journal Article

Navigating through a virtual city: Using virtual reality technology to study human action and perception.

MPG Authors
/persons/resource/persons84273

van Veen, HAHC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83888

Distler, H
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83828

Braun, S
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
There are currently no full texts released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

van Veen, H., Distler, H., Braun, S., & Bülthoff, H. (1998). Navigating through a virtual city: Using virtual reality technology to study human action and perception. Future Generation Computer Systems, 14(3-4), 231-242. doi:10.1016/S0167-739X(98)00027-2.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-E805-A
Abstract
Images formed by a human face change with viewpoint. A new technique is described for synthesizing images of faces from new viewpoints, when only a single 2D image is available. A novel 2D image of a face can be computed without explicitly computing the 3D structure of the head. The technique draws on a single generic 3D model of a human head and on prior knowledge of faces based on example images of other faces seen in different poses. The example images are used to "learn" a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses. The proposed method is interesting for view-independent face recognition tasks as well as for image synthesis problems in areas like teleconferencing and virtualized reality.