
Released

Research Paper

Learning to Reconstruct People in Clothing from a Single RGB Camera

MPG Authors
/persons/resource/persons221911
Alldieck, Thiemo
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

/persons/resource/persons221909
Bhatnagar, Bharat Lal
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

/persons/resource/persons45610
Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

/persons/resource/persons118756
Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

External Resources
No external resources are associated with this record
Full Texts (restricted access)
No full texts are currently released for your IP range.
Full Texts (freely accessible)

arXiv:1903.05885.pdf
(Preprint), 8MB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Alldieck, T., Magnor, M. A., Bhatnagar, B. L., Theobalt, C., & Pons-Moll, G. (2019). Learning to Reconstruct People in Clothing from a Single RGB Camera. Retrieved from http://arxiv.org/abs/1903.05885.


Citation link: https://hdl.handle.net/21.11116/0000-0003-FE01-E
Abstract
We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input and is able to reconstruct shapes even from a single image with an accuracy of 6mm. Results on three different datasets demonstrate the efficacy and accuracy of our approach.
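The first design choice translates naturally into a small network sketch: per-frame encoders produce pose-invariant latent codes, the codes are fused across however many frames are available, and decoder heads emit the body-model shape parameters plus per-vertex displacements in the canonical T-pose. The following PyTorch sketch is illustrative only, not the authors' implementation: the layer sizes, the mean-fusion, and the class name ShapeFromFrames are assumptions, it omits the paper's top-down refinement stream, and only the SMPL constants (10 shape coefficients, 6890 template vertices) are standard.

# A minimal sketch (not the authors' code) of the core idea: encode a
# variable number of frames into pose-invariant latent codes, fuse them,
# and predict statistical body-model shape parameters plus per-vertex
# displacements in a canonical T-pose. Layer sizes and names are
# hypothetical; the SMPL constants below are standard.
import torch
import torch.nn as nn

NUM_BETAS = 10        # SMPL shape coefficients
NUM_VERTICES = 6890   # SMPL template vertex count

class ShapeFromFrames(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        # Per-frame encoder: maps one RGB frame to a pose-invariant code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        # Heads decode the fused code into body-model parameters and
        # free-form displacements that add clothing and hair detail.
        self.betas_head = nn.Linear(latent_dim, NUM_BETAS)
        self.displacement_head = nn.Linear(latent_dim, NUM_VERTICES * 3)

    def forward(self, frames):
        # frames: (N, 3, H, W) with N in 1..8 frames of one person.
        codes = self.encoder(frames)          # (N, latent_dim)
        fused = codes.mean(dim=0)             # fuse across frames
        betas = self.betas_head(fused)        # (NUM_BETAS,)
        disp = self.displacement_head(fused).view(NUM_VERTICES, 3)
        return betas, disp                    # canonical T-pose shape

# Usage: any number of frames (here 4) yields one T-posed shape estimate.
model = ShapeFromFrames()
betas, displacements = model(torch.randn(4, 3, 128, 128))

Because the shape is decoded in T-pose space, averaging the latent codes across frames is meaningful: the codes carry no per-frame pose, so each additional frame refines the same canonical shape estimate rather than a differently posed one.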