  Fast Non-Rigid Radiance Fields from Monocularized Data

Kappel, M., Golyanik, V., Castillo, S., Theobalt, C., & Magnor, M. A. (2022). Fast Non-Rigid Radiance Fields from Monocularized Data. Retrieved from https://arxiv.org/abs/2212.01368.


Files

arXiv:2212.01368.pdf (Preprint), 51MB
Name: arXiv:2212.01368.pdf
Description: File downloaded from arXiv at 2022-12-28 10:53
OA-Status: Not specified
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -


Creators

Kappel, Moritz¹, Author
Golyanik, Vladislav², Author
Castillo, Susana¹, Author
Theobalt, Christian², Author
Magnor, Marcus A.¹, Author
Affiliations:
¹ External Organizations, ou_persistent22
² Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society, ou_3311330

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Graphics, cs.GR; Computer Science, Learning, cs.LG
Abstract: 3D reconstruction and novel view synthesis of dynamic scenes from collections of single views have recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in training speed and in the angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360° novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) an efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) a static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time frame, randomly sampled from a full hemisphere. We refer to this form of input as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method trains significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data are available at our project page: https://graphics.tu-bs.de/publications/kappel2022fast.
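
The abstract sketches a two-module architecture: a deformation module maps each sample point and its time stamp into a shared canonical frame, where a static, hash-encoded neural radiance field is evaluated. The following is a minimal PyTorch sketch of that query pattern, not the authors' implementation: all names and layer sizes are illustrative assumptions, and the multiresolution hash encoding is replaced by a plain MLP so the example stays self-contained. Note that the paper's deformation module decouples spatial and temporal processing for speed, whereas this sketch simply concatenates position and time for brevity.

import torch
import torch.nn as nn

class DeformationModule(nn.Module):
    # Illustrative stand-in: maps (position, time) to an offset into the
    # canonical frame. The paper's module additionally decouples spatial
    # and temporal processing; here we simply concatenate x and t.
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point offset
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, t], dim=-1))

class CanonicalField(nn.Module):
    # Static scene in canonical space: position -> (density, rgb).
    # Stand-in for the fast hash-encoded radiance field named in the
    # abstract (an Instant-NGP-style hash grid would replace this MLP).
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # sigma + rgb
        )

    def forward(self, x_canonical: torch.Tensor) -> torch.Tensor:
        out = self.net(x_canonical)
        sigma = torch.relu(out[..., :1])   # non-negative density
        rgb = torch.sigmoid(out[..., 1:])  # colors in [0, 1]
        return torch.cat([sigma, rgb], dim=-1)

# Query pattern: warp first, then evaluate the static canonical field.
deform, field = DeformationModule(), CanonicalField()
x = torch.rand(1024, 3)          # sample positions along camera rays
t = torch.rand(1024, 1)          # per-sample time stamps
x_canonical = x + deform(x, t)   # deform into the shared canonical frame
sigma_rgb = field(x_canonical)   # (1024, 4): density and color per sample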

Details

Language(s): eng - English
Dates: 2022-12-02, 2022
Publication Status: Published online
Pages: 17 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2212.01368
URI: https://arxiv.org/abs/2212.01368
BibTeX Citekey: Kappel2212.01368
Degree: -


Project information

Project name: 4DRepLy
Grant ID: 770784
Funding program: Horizon 2020 (H2020)
Funding organization: European Commission (EC)
