Released

Paper

Fast Non-Rigid Radiance Fields from Monocularized Data

MPS-Authors
/persons/resource/persons239654

Golyanik,  Vladislav
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt,  Christian
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

arXiv:2212.01368.pdf
(Preprint), 51MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Kappel, M., Golyanik, V., Castillo, S., Theobalt, C., & Magnor, M. A. (2022). Fast Non-Rigid Radiance Fields from Monocularized Data. Retrieved from https://arxiv.org/abs/2212.01368.


Cite as: https://hdl.handle.net/21.11116/0000-000C-161D-C
Abstract
3D reconstruction and novel view synthesis of dynamic scenes from collections
of single views have recently gained increased attention. Existing work shows
impressive results for synthetic setups and forward-facing real-world data, but
is severely limited in training speed and in the angular range for generating
novel views. This paper addresses these limitations and proposes a new method
for full 360° novel view synthesis of non-rigidly deforming scenes. At the
core of our method are: 1) An efficient deformation module that decouples the
processing of spatial and temporal information for acceleration at training and
inference time; and 2) A static module representing the canonical scene as a
fast hash-encoded neural radiance field. We evaluate the proposed approach on
the established synthetic D-NeRF benchmark, which enables efficient
reconstruction from a single monocular view per time frame, randomly sampled
from a full hemisphere. We refer to this form of input as monocularized data.
To prove its practicality for real-world scenarios, we recorded twelve
challenging sequences with human actors by sampling single frames from a
synchronized multi-view rig. In both cases, our method is trained significantly
faster than previous methods (minutes instead of days) while achieving higher
visual accuracy for generated novel views. Our source code and data are
available at our project page:
https://graphics.tu-bs.de/publications/kappel2022fast.
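
To make the two-module design described in the abstract concrete, here is a minimal PyTorch sketch: a deformation module with decoupled spatial and temporal branches, feeding a canonical scene represented by a (heavily simplified) multiresolution hash-encoded field. All module names, layer sizes, the per-frame temporal codes, and the late-fusion scheme are illustrative assumptions read off the abstract, not the authors' implementation.

    import torch
    import torch.nn as nn

    class HashEncoding(nn.Module):
        # Toy stand-in for a multiresolution hash grid (in the spirit of
        # Instant-NGP): nearest-voxel lookup, no trilinear interpolation.
        def __init__(self, n_levels=4, table_size=2**14, feat_dim=2, base_res=16):
            super().__init__()
            self.tables = nn.ModuleList(
                nn.Embedding(table_size, feat_dim) for _ in range(n_levels))
            self.table_size = table_size
            self.base_res = base_res

        def forward(self, x):  # x: (N, 3), coordinates in [0, 1]
            feats = []
            for level, table in enumerate(self.tables):
                idx = (x * self.base_res * 2 ** level).long()
                # Spatial hash: XOR of coordinates scaled by large primes.
                h = (idx[:, 0]
                     ^ (idx[:, 1] * 2654435761)
                     ^ (idx[:, 2] * 805459861)) % self.table_size
                feats.append(table(h))
            return torch.cat(feats, dim=-1)  # (N, n_levels * feat_dim)

    class DeformationModule(nn.Module):
        # Decoupled processing: a spatial MLP and a learned per-frame temporal
        # code are computed independently and only fused at the end to predict
        # a per-point offset into the canonical frame.
        def __init__(self, n_frames=100, t_dim=8, hidden=64):
            super().__init__()
            self.spatial = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            self.temporal = nn.Embedding(n_frames, t_dim)
            self.fuse = nn.Sequential(
                nn.Linear(hidden + t_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

        def forward(self, x, t_idx):  # x: (N, 3) points, t_idx: (N,) frame ids
            h = torch.cat([self.spatial(x), self.temporal(t_idx)], dim=-1)
            return x + self.fuse(h)  # warped (canonical) positions

    class CanonicalField(nn.Module):
        # Static module: hash encoding followed by a small MLP emitting
        # RGB + density for the canonical (undeformed) scene.
        def __init__(self, hidden=64):
            super().__init__()
            self.enc = HashEncoding()
            self.mlp = nn.Sequential(
                nn.Linear(8, hidden),  # 8 = n_levels * feat_dim
                nn.ReLU(), nn.Linear(hidden, 4))

        def forward(self, x_canonical):
            return self.mlp(self.enc(x_canonical))  # (N, 4): RGB + sigma

    # Usage: warp sampled ray points of a given frame, then query the field.
    deform, field = DeformationModule(), CanonicalField()
    x = torch.rand(1024, 3)                # sample points in the unit cube
    t = torch.randint(0, 100, (1024,))     # per-point frame indices
    rgb_sigma = field(deform(x, t))        # (1024, 4)

One plausible reading of the claimed speed-up is visible in this structure: the spatial branch and the per-frame temporal codes are evaluated independently, so only the small fusion head depends on both inputs at training and inference time.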
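Likewise, the "monocularized data" setup can be illustrated in a few lines: from a synchronized multi-view recording, exactly one camera view is kept per time step. The helper below is hypothetical; the paper's actual sampling (e.g. over a full hemisphere for the D-NeRF benchmark) may differ.

    import random

    def monocularize(frames_by_time, seed=0):
        # frames_by_time: list over time steps; each entry is the list of
        # synchronized per-camera frames captured at that instant.
        # Returns one randomly chosen view per time step.
        rng = random.Random(seed)
        return [rng.choice(views) for views in frames_by_time]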