
Released

Conference Paper

Tex2Shape: Detailed Full Human Body Geometry from a Single Image

MPS-Authors

Pons-Moll,  Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;


Theobalt,  Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

Citation

Alldieck, T., Pons-Moll, G., Theobalt, C., & Magnor, M. A. (2019). Tex2Shape: Detailed Full Human Body Geometry from a Single Image. In International Conference on Computer Vision (pp. 2293-2303). Piscataway, NJ: IEEE. doi:10.1109/ICCV.2019.00238.


Cite as: http://hdl.handle.net/21.11116/0000-0003-ECBE-E
Abstract
We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods. From this partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely on synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
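The final step of the pipeline described in the abstract — applying a predicted vector displacement map to a smooth base body mesh — can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the function name, nearest-neighbour UV lookup (bilinear sampling would be used in practice), and map resolution are all assumptions.

```python
import numpy as np

def apply_displacement(vertices, uvs, disp_map):
    """Offset each vertex by the displacement stored at its UV coordinate.

    vertices : (N, 3) base-mesh vertex positions
    uvs      : (N, 2) per-vertex UV coordinates in [0, 1]
    disp_map : (H, W, 3) vector displacement map (e.g. a network output)
    """
    h, w, _ = disp_map.shape
    # Nearest-neighbour lookup into the UV map; a real pipeline would
    # interpolate bilinearly for smooth results.
    px = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return vertices + disp_map[py, px]

# Toy example: four vertices of a flat quad, uniformly displaced
# by 0.1 along z via a constant displacement map.
verts = np.zeros((4, 3))
uvs = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
dmap = np.full((8, 8, 3), [0.0, 0.0, 0.1])
out = apply_displacement(verts, uvs, dmap)
```

The same lookup applies to the predicted normal map, which refines shading rather than geometry.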