Free keywords:
Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Graphics, cs.GR
Abstract:
We present a new, effective way to capture the performance of deforming meshes
with fine-scale, time-varying surface detail from multi-view video. Our method
builds on coarse 4D surface reconstructions, as obtained with commonly used
template-based methods. Since these capture only coarse-to-medium-scale
detail, fine-scale deformation detail is typically recovered in a second pass using
stereo constraints, features, or shading-based refinement. In this paper, we
propose a new, effective, and stable solution to this second step. Our framework
creates an implicit representation of the deformable mesh using a dense
collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians
for the images. The fine-scale deformation of all mesh vertices that maximizes
photo-consistency can be found efficiently by densely optimizing a new
model-to-image consistency energy over all vertex positions. A principal
advantage is that our problem formulation yields a smooth, closed-form energy
with implicit occlusion handling and analytic derivatives. Neither error-prone
correspondence finding nor discrete sampling of surface displacement values is
needed. We show several reconstructions of human subjects wearing
loose clothing, and we demonstrate qualitatively and quantitatively that we robustly
capture more detail than related methods.
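
For illustration only, the following is a minimal sketch of how a sum-of-Gaussians model-to-image consistency energy of this general kind can be evaluated, assuming isotropic 2D Gaussians for both the projected surface Gaussians and the image Gaussians. The function names (gaussian_overlap, consistency_energy) and the color_sim weighting matrix are hypothetical and are not taken from the paper; the sketch only shows why a pairwise Gaussian overlap gives a smooth, closed-form energy with analytic derivatives with respect to the projected Gaussian centers.

    import numpy as np

    def gaussian_overlap(mu_p, sigma_p, mu_q, sigma_q):
        # Closed-form integral over the image plane of the product of two
        # isotropic 2D Gaussians with centers mu_p, mu_q and std devs
        # sigma_p, sigma_q. Smooth in mu_p, so gradients w.r.t. projected
        # vertex/Gaussian positions are available analytically.
        s2 = sigma_p**2 + sigma_q**2
        d2 = np.sum((np.asarray(mu_p) - np.asarray(mu_q))**2)
        return 2.0 * np.pi * (sigma_p**2 * sigma_q**2 / s2) * np.exp(-d2 / (2.0 * s2))

    def consistency_energy(model_gaussians, image_gaussians, color_sim):
        # model_gaussians: list of (mu, sigma) for projected surface Gaussians
        # image_gaussians: list of (mu, sigma) for 2D image Gaussians
        # color_sim[i, j]: similarity weight between model Gaussian i and
        # image Gaussian j (hypothetical; e.g. based on color agreement)
        E = 0.0
        for i, (mu_p, sigma_p) in enumerate(model_gaussians):
            for j, (mu_q, sigma_q) in enumerate(image_gaussians):
                E += color_sim[i, j] * gaussian_overlap(mu_p, sigma_p, mu_q, sigma_q)
        return E

Because every term is a smooth exponential of the squared center distance, the total energy needs no discrete sampling of displacement values and can be maximized with standard gradient-based optimization over the vertex positions that determine the model Gaussian centers.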