  Learning to Reconstruct People in Clothing from a Single RGB Camera

Alldieck, T., Magnor, M. A., Bhatnagar, B. L., Theobalt, C., & Pons-Moll, G. (2019). Learning to Reconstruct People in Clothing from a Single RGB Camera. Retrieved from http://arxiv.org/abs/1903.05885.


Files

arXiv:1903.05885.pdf (Preprint), 8 MB
Name: arXiv:1903.05885.pdf
Description: File downloaded from arXiv at 2019-07-09 10:04
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -


Creators

Creators:
Alldieck, Thiemo (1), Author
Magnor, Marcus A. (2), Author
Bhatnagar, Bharat Lal (1), Author
Theobalt, Christian (3), Author
Pons-Moll, Gerard (1), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_persistent22
(2) External Organizations, ou_persistent22
(3) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5 mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input and is able to reconstruct shapes even from a single image with an accuracy of 6 mm. Results on three different datasets demonstrate the efficacy and accuracy of our approach.
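
The two design choices above can be made concrete with a short sketch. Below is a minimal PyTorch illustration, not the authors' code: per-frame images are encoded into latent codes, the codes are fused by averaging (a permutation-invariant choice, one simple way to let the same network accept anywhere from 1 to 8 frames), and the fused code is decoded into statistical-body-model shape parameters plus free-form per-vertex displacements that add clothing and hair. The encoder architecture, layer sizes, 64x64 input resolution, and mean-pooling fusion are illustrative assumptions; the vertex and shape-coefficient counts follow the SMPL body model.

import torch
import torch.nn as nn

N_VERTS = 6890   # vertex count of the SMPL template mesh
N_BETAS = 10     # number of SMPL shape coefficients

class ShapeFromFrames(nn.Module):
    """Toy stand-in for the pipeline: encode frames, fuse codes, decode shape."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # Stand-in image encoder (assumption); the real model uses a CNN.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, feat_dim),
            nn.ReLU(),
        )
        self.to_betas = nn.Linear(feat_dim, N_BETAS)     # body-model shape params
        self.to_disp = nn.Linear(feat_dim, N_VERTS * 3)  # clothing/hair offsets

    def forward(self, frames):            # frames: (F, 3, 64, 64), F = 1..8
        codes = self.encoder(frames)      # one latent code per frame
        fused = codes.mean(dim=0)         # fusion: invariant to frame order/count
        betas = self.to_betas(fused)
        displacements = self.to_disp(fused).view(N_VERTS, 3)
        # The canonical T-pose shape would be the body-model template deformed
        # by betas plus these displacements; posing and rendering are omitted.
        return betas, displacements

model = ShapeFromFrames()
for n_frames in (1, 4, 8):               # variable number of input frames
    betas, disp = model(torch.randn(n_frames, 3, 64, 64))
    print(n_frames, tuple(betas.shape), tuple(disp.shape))

Because the fused code has the same size regardless of how many frames go in, a single trained model covers the whole 1-8 frame range described in the abstract, plausibly explaining why accuracy degrades only slightly (5 mm to 6 mm) when the input drops to a single image.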

Details

Language(s): eng - English
Dates: 2019-03-14, 2019-04-08, 2019
 Publication Status: Published online
 Pages: 12 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1903.05885
URI: http://arxiv.org/abs/1903.05885
BibTex Citekey: Alldieck_arXiv1903.05885
 Degree: -
