
Released

Journal Article

Combining 3D Flow Fields with Silhouette-based Human Motion Capture for Immersive Video

MPG Authors

Theobalt,  Christian
Computer Graphics, MPI for Informatics, Max Planck Society;
Programming Logics, MPI for Informatics, Max Planck Society;


Carranza,  Joel
Computer Graphics, MPI for Informatics, Max Planck Society;
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;


Magnor,  Marcus
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society;


Seidel,  Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
No external resources are stored for this record
Full texts (restricted access)
There are currently no full texts released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Theobalt, C., Carranza, J., Magnor, M., & Seidel, H.-P. (2004). Combining 3D Flow Fields with Silhouette-based Human Motion Capture for Immersive Video. Graphical Models, 66, 333-351.


Citation link: https://hdl.handle.net/11858/00-001M-0000-000F-29CE-7
Abstract
In recent years, the convergence of Computer Vision and Computer Graphics has put forth a new field of research that focuses on the reconstruction of real-world scenes from video streams. To make immersive 3D video a reality, the whole pipeline, spanning from scene acquisition over 3D video reconstruction to real-time rendering, needs to be researched. In this paper, we describe the latest advancements of our system to record, reconstruct, and render free-viewpoint videos of human actors. We apply a silhouette-based, non-intrusive motion capture algorithm that makes use of a 3D human body model to estimate the actor's parameters of motion from multi-view video streams. A renderer plays back the acquired motion sequence in real time from any arbitrary perspective. Photo-realistic physical appearance of the moving actor is obtained by generating time-varying multi-view textures from video. This work shows how the motion capture sub-system can be enhanced by incorporating texture information from the input video streams into the tracking process. 3D motion fields are reconstructed from optical flow and used in combination with silhouette matching to estimate pose parameters. We demonstrate that high visual quality can be achieved with the proposed approach and validate the enhancements contributed by the motion field step.
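To illustrate the core step mentioned in the abstract, reconstructing a 3D motion field from 2D optical flow across calibrated views, the following minimal NumPy sketch triangulates one surface point at times t and t+1 from two views and takes the difference as its 3D motion vector. This is not the paper's implementation: the actual system uses many camera views, a body model, and combines the resulting motion field with silhouette matching during pose estimation. The function names (triangulate, motion_vector_3d) and the toy camera setup are hypothetical and serve only to show the flow-to-3D step under simplified assumptions.

import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one 3D point from two calibrated views.
    # P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null-space vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

def motion_vector_3d(P1, P2, x1, x2, flow1, flow2):
    # 3D motion vector for one point: triangulate its position at time t and
    # at t+1 (pixel shifted by the optical flow), then take the difference.
    X_t  = triangulate(P1, P2, x1, x2)
    X_t1 = triangulate(P1, P2, x1 + flow1, x2 + flow2)
    return X_t1 - X_t

# Toy setup: two cameras with identity intrinsics, the second shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X0 = np.array([0.2, 0.1, 4.0])   # surface point at time t (hypothetical data)
X1 = np.array([0.25, 0.1, 4.0])  # the same point at time t+1

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic "optical flow": image-space displacement of the point in each view.
flow1 = project(P1, X1) - project(P1, X0)
flow2 = project(P2, X1) - project(P2, X0)

print(motion_vector_3d(P1, P2, project(P1, X0), project(P2, X0), flow1, flow2))
# prints approximately [0.05 0. 0.], i.e. the true 3D displacement X1 - X0

In a full system, such motion vectors would be computed densely for surface points of the body model and fed, together with silhouette overlap, into the pose-parameter optimization; the sketch above covers only the reconstruction of the 3D motion field from optical flow.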