Paper

Learning to Reconstruct People in Clothing from a Single RGB Camera

MPS-Authors

Alldieck, Thiemo
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Bhatnagar, Bharat Lal
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:1903.05885.pdf (Preprint), 8 MB

Citation

Alldieck, T., Magnor, M. A., Bhatnagar, B. L., Theobalt, C., & Pons-Moll, G. (2019). Learning to Reconstruct People in Clothing from a Single RGB Camera. Retrieved from http://arxiv.org/abs/1903.05885.


Cite as: http://hdl.handle.net/21.11116/0000-0003-FE01-E
Abstract
We present a learning-based model that infers the personalized 3D shape of a person from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5 mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once trained, the model accepts a variable number of frames as input and can reconstruct shapes even from a single image, with an accuracy of 6 mm. Results on three different datasets demonstrate the efficacy and accuracy of our approach.
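
To make the first design choice concrete, below is a minimal PyTorch sketch of the high-level pipeline the abstract describes: per-frame images are encoded into pose-invariant latent codes, the codes are fused across a variable number of frames, and the fused code is decoded into body-model shape parameters plus per-vertex displacements in canonical T-pose. All module names, dimensions, and the mean-pooling fusion here are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch only: a toy encoder, mean-pooling fusion, and linear decoders.
# The paper's actual network, backbone, and fusion scheme may differ.
import torch
import torch.nn as nn

NUM_SHAPE_PARAMS = 10   # shape coefficients of a statistical body model (assumption)
NUM_VERTICES = 6890     # SMPL template mesh size (assumption)

class CanonicalShapeNet(nn.Module):
    """Encode F frames into pose-invariant codes, fuse them, and decode
    body-model shape parameters plus per-vertex T-pose displacements."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # Per-frame image encoder (a real system would use a CNN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Decoders operating on the fused, pose-invariant code.
        self.shape_head = nn.Linear(latent_dim, NUM_SHAPE_PARAMS)
        self.disp_head = nn.Linear(latent_dim, NUM_VERTICES * 3)

    def forward(self, frames: torch.Tensor):
        # frames: (F, 3, H, W) -- a variable number of frames of one person.
        codes = self.encoder(frames)       # (F, latent_dim), one code per frame
        fused = codes.mean(dim=0)          # fuse information across frames
        betas = self.shape_head(fused)     # body-model shape parameters
        displacements = self.disp_head(fused).view(NUM_VERTICES, 3)
        return betas, displacements       # both defined in canonical T-pose

# Works with any number of input frames, e.g. 1 to 8:
model = CanonicalShapeNet()
betas, disp = model(torch.randn(4, 3, 128, 128))
```

Because the latent codes are pose-invariant, a simple symmetric pooling over frames (mean here) suffices to fuse them regardless of how many frames are given, which is what lets the same model handle anywhere from one to eight input images.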