Conference Paper

Tex2Shape: Detailed Full Human Body Geometry from a Single Image


Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public): Preprint, 8 MB

Alldieck, T., Pons-Moll, G., Theobalt, C., & Magnor, M. A. (2019). Tex2Shape: Detailed Full Human Body Geometry from a Single Image. In International Conference on Computer Vision (pp. 2293-2303). Piscataway, NJ: IEEE. doi:10.1109/ICCV.2019.00238.

Cite as: http://hdl.handle.net/21.11116/0000-0003-ECBE-E
Abstract

We present a simple yet effective method for inferring detailed full human body shape from a single photograph. Our model infers full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates, and recovers detail even on body parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region, obtained with off-the-shelf methods. From this partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely on synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
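The abstract's final step — applying a regressed vector displacement map to a low-resolution smooth body model — can be sketched in a few lines of numpy. This is not the authors' code: it assumes per-vertex UV coordinates and nearest-neighbour texture sampling, both simplifications of what a full pipeline would do.

```python
import numpy as np

def apply_displacement(vertices, uvs, disp_map):
    """Offset each vertex of a smooth body mesh by the vector
    displacement sampled at its UV coordinate.

    vertices: (N, 3) base mesh vertex positions
    uvs:      (N, 2) per-vertex UV coordinates in [0, 1]
    disp_map: (H, W, 3) vector displacement map, e.g. regressed by an
              image-to-image network from a partial texture map
    """
    h, w, _ = disp_map.shape
    # Map UVs to pixel indices; v is flipped because image row 0 is the top.
    cols = np.clip(np.round(uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(np.round((1.0 - uvs[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    # Displacements are added in model space; a production pipeline would
    # instead interpolate bilinearly and work in a tangent frame.
    return vertices + disp_map[rows, cols]
```

A usage example: displacing a flat patch by a constant map simply translates it, while a spatially varying map adds wrinkle-like geometry on top of the smooth base shape.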