
Item Details


Released

Report

Tex2Shape: Detailed Full Human Body Geometry From a Single Image

MPS-Authors
/persons/resource/persons221911

Alldieck, Thiemo
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons118756

Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resource
There are no locators available
Fulltext (public)

arXiv:1904.08645.pdf
(Preprint), 6MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Alldieck, T., Pons-Moll, G., Theobalt, C., & Magnor, M. A. (2019). Tex2Shape: Detailed Full Human Body Geometry From a Single Image. Retrieved from http://arxiv.org/abs/1904.08645.


Cite as: https://hdl.handle.net/21.11116/0000-0005-7CF6-B
Abstract
We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods. From a partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
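The final step the abstract describes — applying a UV-space vector displacement map to a low-resolution smooth body model — can be illustrated with a minimal sketch. This is not the authors' implementation; the function, array shapes, and the nearest-neighbour UV lookup are illustrative assumptions (the paper's pipeline operates on a full body model and network-predicted maps).

```python
import numpy as np

def apply_displacement(vertices, uvs, disp_map):
    """Displace mesh vertices by sampling a UV-space vector displacement map.

    vertices: (N, 3) smooth body-model vertex positions (hypothetical input).
    uvs:      (N, 2) per-vertex UV coordinates in [0, 1].
    disp_map: (H, W, 3) vector displacement map (network output in the
              paper; here just a plain array).
    """
    h, w, _ = disp_map.shape
    # Nearest-neighbour UV lookup; bilinear sampling would be smoother.
    px = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return vertices + disp_map[py, px]

# Toy example: a flat 4-vertex "body patch" pushed outward along z.
verts = np.zeros((4, 3))
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dmap = np.zeros((8, 8, 3))
dmap[..., 2] = 0.05          # constant 5 cm offset along z
out = apply_displacement(verts, uvs, dmap)
# Every vertex is moved 0.05 along z; xy stays unchanged.
```

Because the displacement lives in the model's UV space, the same map adds detail even to body regions occluded in the input photograph, which is what makes the image-to-image translation formulation attractive.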