
Released

Paper

i3DMM: Deep Implicit 3D Morphable Model of Human Heads

MPS-Authors

Tewari, Ayush
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

Elgharib, Mohamed
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

External Resource
No external resources are shared
Fulltext (public)

arXiv:2011.14143.pdf (Preprint), 30MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Yenamandra, T., Tewari, A., Bernard, F., Seidel, H.-P., Elgharib, M., Cremers, D., et al. (2020). i3DMM: Deep Implicit 3D Morphable Model of Human Heads. Retrieved from https://arxiv.org/abs/2011.14143.


Cite as: https://hdl.handle.net/21.11116/0000-0007-B702-8
Abstract
We present the first deep implicit 3D morphable model (i3DMM) of full heads.
Unlike earlier morphable face models, it not only captures identity-specific
geometry, texture, and expressions of the frontal face, but also models the
entire head, including hair. We collect a new dataset consisting of 64 people
with different expressions and hairstyles to train i3DMM. Our approach has the
following favorable properties: (i) It is the first full head morphable model
that includes hair. (ii) In contrast to mesh-based models, it can be trained on
merely rigidly aligned scans, without requiring difficult non-rigid
registration. (iii) We design a novel architecture to decouple the shape model
into an implicit reference shape and a deformation of this reference shape.
With that, dense correspondences between shapes can be learned implicitly. (iv)
This architecture allows us to semantically disentangle the geometry and color
components, as color is learned in the reference space. Geometry is further
disentangled as identity, expressions, and hairstyle, while color is
disentangled as identity and hairstyle components. We show the merits of i3DMM
using ablation studies, comparisons to state-of-the-art models, and
applications such as semantic head editing and texture transfer. We will make
our model publicly available.
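The decomposition described in point (iii) — an implicit reference shape plus a latent-conditioned deformation, with color defined in reference space — can be illustrated with a minimal sketch. This is not the authors' code: the reference SDF here is a hand-written sphere and the deformation is a toy latent translation, whereas i3DMM learns both components with neural networks; all function names below are illustrative assumptions.

```python
import math

def reference_sdf(p):
    """Signed distance to a unit sphere at the origin (toy reference shape).
    In i3DMM this would be a learned implicit function."""
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - 1.0

def deformation(p, latent):
    """Toy deformation: translate the query point by the latent code.
    In i3DMM this is a learned field mapping (point, latent) -> reference-space
    point, which is what yields implicit dense correspondences."""
    return tuple(pi + li for pi, li in zip(p, latent))

def shape_sdf(p, latent):
    """Deformed shape: evaluate the reference SDF at the deformed point."""
    return reference_sdf(deformation(p, latent))

def color(p, latent):
    """Color lookup in reference space: warp the query point back to the
    reference shape first, then sample a (toy) color there."""
    q = deformation(p, latent)
    # Toy color: shade red channel by height in reference space.
    r = 0.5 + 0.5 * max(-1.0, min(1.0, q[1]))
    return (r, 0.3, 0.3)
```

Because every query point is first mapped into the shared reference space, two shapes evaluated with different latents still index the same color field, which is how the model can disentangle geometry from color and transfer texture between heads.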