Released

Conference Paper

Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model

MPS-Authors

Johnson, Erik Colin Manasie
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Habermann, Marc
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Shimada, Soshi
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Golyanik, Vladislav
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Theobalt, Christian
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2206.08368.pdf
(Preprint), 30MB

Citation

Johnson, E. C. M., Habermann, M., Shimada, S., Golyanik, V., & Theobalt, C. (in press). Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.


Cite as: https://hdl.handle.net/21.11116/0000-000B-9C7F-8
Abstract
Capturing general deforming scenes is crucial for many computer graphics and
vision applications, and it is especially challenging when only a monocular RGB
video of the scene is available. Competing methods assume dense point tracks,
3D templates, large-scale training datasets, or only capture small-scale
deformations. In contrast to those, our method, Ub4D, makes none of these
assumptions while outperforming the previous state of the art in challenging
scenarios. Our technique includes two components that are new in the context of
non-rigid 3D reconstruction: 1) a coordinate-based, implicit neural
representation for non-rigid scenes, which enables an unbiased reconstruction
of dynamic scenes, and 2) a novel dynamic scene flow loss, which enables the
reconstruction of larger deformations. Results on our new dataset, which will
be made publicly available, demonstrate the clear improvement over the state of
the art in terms of surface reconstruction accuracy and robustness to large
deformations. Visit the project page https://4dqv.mpi-inf.mpg.de/Ub4D/.
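The abstract's first component, a coordinate-based implicit neural representation, can be illustrated with a minimal sketch: a small MLP that maps a 3D point plus time, (x, y, z, t), to a signed distance value. The network sizes and plain-numpy implementation here are illustrative assumptions only and do not reproduce the actual Ub4D architecture or training.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a fully connected network (illustrative only)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def sdf(params, xyzt):
    """Evaluate the implicit field at points of shape (N, 4): (x, y, z, t)."""
    h = xyzt
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)   # ReLU hidden layers
    W, b = params[-1]
    return (h @ W + b).squeeze(-1)       # one signed-distance value per point

# A tiny network: 4 inputs (x, y, z, t), two hidden layers, scalar output.
params = init_mlp([4, 64, 64, 1])
pts = rng.standard_normal((5, 4))        # five random space-time samples
print(sdf(params, pts).shape)            # (5,)
```

Conditioning the field on time t is what makes the representation dynamic: the zero-level set of the signed distance function can deform from frame to frame, rather than being tied to a fixed template mesh.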