
Released

Paper

Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction

MPS-Authors

Bhatnagar, Bharat Lal
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)

arXiv:2007.11432.pdf
(Preprint), 10MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Bhatnagar, B. L., Sminchisescu, C., Theobalt, C., & Pons-Moll, G. (2020). Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction. Retrieved from https://arxiv.org/abs/2007.11432.


Cite as: https://hdl.handle.net/21.11116/0000-0007-E8A0-E
Abstract
Implicit functions represented as deep learning approximations are powerful
for reconstructing 3D surfaces. However, they can only produce static surfaces
that are not controllable, which provides limited ability to modify the
resulting model by editing its pose or shape parameters. Nevertheless, such
features are essential in building flexible models for both computer graphics
and computer vision. In this work, we present methodology that combines
detail-rich implicit functions and parametric representations in order to
reconstruct 3D models of people that remain controllable and accurate even in
the presence of clothing. Given sparse 3D point clouds sampled on the surface
of a dressed person, we use an Implicit Part Network (IP-Net) to jointly predict
the outer 3D surface of the dressed person, the inner body surface, and the
semantic correspondences to a parametric body model. We subsequently use
correspondences to fit the body model to our inner surface and then non-rigidly
deform it (under a parametric body + displacement model) to the outer surface
in order to capture garment, face and hair detail. In quantitative and
qualitative experiments with both full body data and hand scans, we show that
the proposed methodology generalizes, and is effective even given incomplete
point clouds collected from single-view depth images. Our models and code can
be downloaded from http://virtualhumans.mpi-inf.mpg.de/ipnet.
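
The abstract describes a three-stage pipeline: joint implicit prediction, parametric body fitting via the predicted correspondences, and non-rigid registration of a body + displacement model to the outer surface. The following Python sketch mirrors those stages under stated assumptions; every function is a hypothetical stand-in (the authors' actual models and code are at http://virtualhumans.mpi-inf.mpg.de/ipnet), and the 14-part labeling and SMPL-style parameter/vertex counts are illustrative assumptions, not details taken from the abstract.

import numpy as np

def ipnet_predict(points):
    # Hypothetical stand-in for the Implicit Part Network forward pass.
    # From a sparse point cloud (N, 3), IP-Net jointly predicts the outer
    # (clothed) surface, the inner body surface, and per-point semantic
    # correspondences to a parametric body model. Placeholder outputs only.
    n = len(points)
    outer = points + 0.01 * np.random.randn(n, 3)  # dummy outer-surface points
    inner = points - 0.01 * np.random.randn(n, 3)  # dummy inner-surface points
    parts = np.random.randint(0, 14, size=n)       # part labels (14 parts assumed)
    return outer, inner, parts

def fit_body_model(inner, parts):
    # Stand-in for fitting the parametric body model (e.g. SMPL) to the
    # predicted inner surface, guided by the semantic correspondences.
    pose = np.zeros(72)   # SMPL-style pose parameters (assumed layout)
    shape = np.zeros(10)  # SMPL-style shape parameters (assumed layout)
    return pose, shape

def register_displacements(pose, shape, outer):
    # Stand-in for the non-rigid body + displacement registration that
    # deforms the fitted body to the outer surface, capturing garment,
    # face, and hair detail as per-vertex offsets.
    return np.zeros((6890, 3))  # 6890 = SMPL vertex count

if __name__ == "__main__":
    scan = np.random.rand(5000, 3)                         # sparse input point cloud
    outer, inner, parts = ipnet_predict(scan)              # stage 1: joint prediction
    pose, shape = fit_body_model(inner, parts)             # stage 2: parametric fit
    offsets = register_displacements(pose, shape, outer)   # stage 3: detail capture
    print(pose.shape, shape.shape, offsets.shape)          # controllable outputs

Because the final output is a parametric body plus displacements rather than a static implicit surface, pose and shape remain editable after reconstruction, which is the controllability the abstract emphasizes.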