
Released

Paper

EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract)

MPS-Authors

Rhodin, Helge
Computer Graphics, MPI for Informatics, Max Planck Society

Richardt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Casas, Dan
Computer Graphics, MPI for Informatics, Max Planck Society

Insafutdinov, Eldar
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Shafiei, Mohammad
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

Schiele, Bernt
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:1701.00142.pdf
(Preprint), 3MB

Citation

Rhodin, H., Richardt, C., Casas, D., Insafutdinov, E., Shafiei, M., Seidel, H.-P., et al. (2016). EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract). Retrieved from http://arxiv.org/abs/1701.00142.


Cite as: https://hdl.handle.net/21.11116/0000-0000-3B3D-B
Abstract
Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on its center. Marker suits can cause discomfort, and the recording volume is severely restricted, often confined to indoor scenes with controlled backgrounds. We therefore propose a new method for real-time, marker-less, egocentric motion capture that estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual-reality headset. It combines the strengths of a new generative pose-estimation framework for fisheye views with a ConvNet-based body-part detector trained on a new, automatically annotated and augmented dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, including crowded scenes.