  Mo2Cap2: Real-time Mobile 3D Motion Capture with a Cap-mounted Fisheye Camera

Xu, W., Chatterjee, A., Zollhöfer, M., Rhodin, H., Fua, P., Seidel, H.-P., et al. (2018). Mo2Cap2: Real-time Mobile 3D Motion Capture with a Cap-mounted Fisheye Camera. Retrieved from http://arxiv.org/abs/1803.05959.

Basic
Genre: Paper
Latex : {Mo2Cap2}: Real-time Mobile {3D} Motion Capture with a Cap-mounted Fisheye Camera

Files

arXiv:1803.05959.pdf (Preprint), 7 MB
Name:
arXiv:1803.05959.pdf
Description:
File downloaded from arXiv at 2018-05-02 10:03. Submission to ECCV 2018.
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-


Creators

Creators:
Xu, Weipeng¹, Author
Chatterjee, Avishek¹, Author
Zollhöfer, Michael¹, Author
Rhodin, Helge², Author
Fua, Pascal², Author
Seidel, Hans-Peter¹, Author
Theobalt, Christian¹, Author
Affiliations:
¹ Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
² External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We propose the first real-time approach for the egocentric estimation of 3D human body pose in a wide range of unconstrained everyday activities. This setting has a unique set of challenges, such as mobility of the hardware setup, and robustness to long capture sessions with fast recovery from tracking failures. We tackle these challenges based on a novel lightweight setup that converts a standard baseball cap to a device for high-quality pose estimation based on a single cap-mounted fisheye camera. From the captured egocentric live stream, our CNN-based 3D pose estimation approach runs at 60 Hz on a consumer-level GPU. In addition to the novel hardware setup, our other main contributions are: 1) a large ground truth training corpus of top-down fisheye images and 2) a novel disentangled 3D pose estimation approach that takes the unique properties of the egocentric viewpoint into account. As shown by our evaluation, we achieve lower 3D joint error as well as better 2D overlay than the existing baselines.

Details

Language(s): eng - English
Dates: 2018-03-15, 2018
Publication Status: Published online
Pages: 18 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1803.05959
URI: http://arxiv.org/abs/1803.05959
BibTex Citekey: Xu_arXiv1803.05959
Degree: -
