Free keywords:
Computer Science: Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Learning (cs.LG)
Abstract:
We present the first deep implicit 3D morphable model (i3DMM) of full heads.
Unlike earlier morphable face models, it not only captures identity-specific
geometry, texture, and expressions of the frontal face, but also models the
entire head, including hair. We collect a new dataset consisting of 64 people
with different expressions and hairstyles to train i3DMM. Our approach has the
following favorable properties: (i) It is the first full head morphable model
that includes hair. (ii) In contrast to mesh-based models, it can be trained on
merely rigidly aligned scans, without requiring difficult non-rigid
registration. (iii) We design a novel architecture to decouple the shape model
into an implicit reference shape and a deformation of this reference shape.
With that, dense correspondences between shapes can be learned implicitly. (iv)
This architecture allows us to semantically disentangle the geometry and color
components, as color is learned in the reference space. Geometry is further
disentangled as identity, expressions, and hairstyle, while color is
disentangled as identity and hairstyle components. We show the merits of i3DMM
using ablation studies, comparisons to state-of-the-art models, and
applications such as semantic head editing and texture transfer. We will make
our model publicly available.
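
The abstract describes a shape model decoupled into an implicit reference shape and a deformation field, with color predicted in the reference space. Below is a minimal, illustrative sketch of such a decomposition in PyTorch; it is not the authors' implementation, and all module names, latent-code dimensions, and layer widths are assumptions made for illustration only.

```python
# Illustrative sketch of a decoupled implicit head model (NOT the authors' code):
# geometry = a reference SDF evaluated at points warped by a learned deformation
# field; color is predicted in the shared reference space. Latent-code sizes and
# network widths below are assumed, not taken from the paper.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256, layers=4):
    """Small fully connected network used for each component."""
    mods, d = [], in_dim
    for _ in range(layers):
        mods += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)


class ImplicitHeadModel(nn.Module):
    def __init__(self, dim_id=64, dim_expr=32, dim_hair=32, dim_col=64):
        super().__init__()
        # Deformation field: maps a query point plus identity, expression, and
        # hairstyle geometry codes to an offset into the reference space,
        # yielding dense correspondences between shapes implicitly.
        self.deform = mlp(3 + dim_id + dim_expr + dim_hair, 3)
        # Reference shape: a signed distance field in the shared reference space.
        self.ref_sdf = mlp(3, 1)
        # Color field: RGB predicted at the reference-space point, conditioned
        # on identity and hairstyle color codes (disentangled from geometry).
        self.color = mlp(3 + dim_col + dim_hair, 3)

    def forward(self, x, z_id, z_expr, z_hair_geo, z_id_col, z_hair_col):
        # x: (N, 3) query points; latent codes are assumed already expanded
        # to shape (N, dim) so they can be concatenated per point.
        offset = self.deform(torch.cat([x, z_id, z_expr, z_hair_geo], dim=-1))
        x_ref = x + offset                       # warp into the reference space
        sdf = self.ref_sdf(x_ref)                # geometry from the reference SDF
        rgb = self.color(torch.cat([x_ref, z_id_col, z_hair_col], dim=-1))
        return sdf, rgb
```

In this sketch, editing only the expression code changes the deformation (and thus the geometry) while leaving the reference-space color untouched, which mirrors the semantic disentanglement of geometry and color described in the abstract.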