  Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video

Tretschk, E., Tewari, A., Golyanik, V., Zollhöfer, M., Lassner, C., & Theobalt, C. (2020). Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video. Retrieved from https://arxiv.org/abs/2012.12247.

Files

arXiv:2012.12247.pdf (Preprint), 4MB
Name:
arXiv:2012.12247.pdf
Description:
File downloaded from arXiv at 2021-02-08 14:11
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Tretschk, Edgar (1), Author
Tewari, Ayush (1), Author
Golyanik, Vladislav (1), Author
Zollhöfer, Michael (2), Author
Lassner, Christoph (2), Author
Theobalt, Christian (1), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Graphics, cs.GR
Abstract: In this tech report, we present the current state of our ongoing work on reconstructing Neural Radiance Fields (NeRF) of general non-rigid scenes via ray bending. Non-rigid NeRF (NR-NeRF) takes RGB images of a deforming object (e.g., from a monocular video) as input and then learns a geometry and appearance representation that not only allows reconstructing the input sequence but also re-rendering any time step into novel camera views with high fidelity. In particular, we show that a consumer-grade camera is sufficient to synthesize convincing bullet-time videos of short and simple scenes. In addition, the resulting representation enables correspondence estimation across views and time, and provides rigidity scores for each point in the scene. We urge the reader to watch the supplemental videos for qualitative results. We will release our code.
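
To make the ray-bending idea from the abstract concrete, the following is a minimal, hypothetical PyTorch-style sketch, not the authors' released code: all module and parameter names (BendingNetwork, CanonicalNeRF, latent_dim, etc.) are illustrative assumptions. Points sampled along a camera ray are bent by a deformation MLP, conditioned on a per-frame latent code, into a shared canonical volume; the same MLP also emits a per-point rigidity score, and the canonical radiance field is then queried at the bent positions.

import torch
import torch.nn as nn

class BendingNetwork(nn.Module):
    # Hypothetical deformation MLP: maps a 3D point plus a per-frame latent
    # code to an offset into the canonical volume and a rigidity score in [0, 1].
    def __init__(self, latent_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 offset components + 1 rigidity logit
        )

    def forward(self, points, latent):
        # points: (N, 3); latent: (latent_dim,), broadcast to every point
        latent = latent.expand(points.shape[0], -1)
        out = self.mlp(torch.cat([points, latent], dim=-1))
        offset = out[..., :3]
        rigidity = torch.sigmoid(out[..., 3:])  # near 1 here means "rigid, do not bend" (illustrative convention)
        bent = points + (1.0 - rigidity) * offset
        return bent, rigidity

class CanonicalNeRF(nn.Module):
    # Stand-in for the canonical radiance field: position -> (density, RGB).
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 color channels
        )

    def forward(self, points):
        out = self.mlp(points)
        return torch.relu(out[..., :1]), torch.sigmoid(out[..., 1:])

# Render one ray of one frame: sample points, bend them into the canonical
# volume, and query the canonical field (volume rendering compositing omitted).
bending, nerf = BendingNetwork(), CanonicalNeRF()
frame_latent = torch.zeros(32)                      # learned per-frame code (illustrative)
origin, direction = torch.zeros(3), torch.tensor([0.0, 0.0, 1.0])
t = torch.linspace(0.1, 4.0, steps=64).unsqueeze(-1)
samples = origin + t * direction                    # (64, 3) points along the ray
bent, rigidity = bending(samples, frame_latent)
density, color = nerf(bent)

In this sketch, novel-view synthesis at a fixed time step amounts to casting rays from a new camera pose while keeping the same per-frame latent code, and the rigidity score is what distinguishes static background from deforming foreground per point.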

Details

Language(s): eng - English
Dates: 2020-12-22, 2020-12-23, 2020
 Publication Status: Published online
 Pages: 9 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 2012.12247
URI: https://arxiv.org/abs/2012.12247
BibTex Citekey: Tretschk_2012.12247
 Degree: -
