  Neural Sparse Voxel Fields

Liu, L., Gu, J., Lin, K. Z., Chua, T.-S., & Theobalt, C. (2020). Neural Sparse Voxel Fields. Retrieved from https://arxiv.org/abs/2007.11571.


Files

arXiv:2007.11571.pdf (Preprint), 10MB
Name:
arXiv:2007.11571.pdf
Description:
File downloaded from arXiv at 2021-02-08 10:48
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Liu, Lingjie (1), Author
Gu, Jiatao (2), Author
Lin, Kyaw Zaw (2), Author
Chua, Tat-Seng (2), Author
Theobalt, Christian (1), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition (cs.CV); Computer Science, Graphics (cs.GR); Computer Science, Learning (cs.LG)
 Abstract: Photo-realistic free-viewpoint rendering of real-world scenes using classical
computer graphics techniques is challenging, because it requires the difficult
step of capturing detailed appearance and geometry models. Recent studies have
demonstrated promising results by learning scene representations that
implicitly encode both geometry and appearance without 3D supervision. However,
existing approaches in practice often show blurry renderings caused by the
limited network capacity or the difficulty in finding accurate intersections of
camera rays with the scene geometry. Synthesizing high-resolution imagery from
these representations often requires time-consuming optical ray marching. In
this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene
representation for fast and high-quality free-viewpoint rendering. NSVF defines
a set of voxel-bounded implicit fields organized in a sparse voxel octree to
model local properties in each cell. We progressively learn the underlying
voxel structures with a differentiable ray-marching operation from only a set
of posed RGB images. With the sparse voxel octree structure, rendering novel
views can be accelerated by skipping the voxels containing no relevant scene
content. Our method is typically over 10 times faster than the state of the art
(namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving
higher quality results. Furthermore, by utilizing an explicit sparse voxel
representation, our method can easily be applied to scene editing and scene
composition. We also demonstrate several challenging tasks, including
multi-scene learning, free-viewpoint rendering of a moving human, and
large-scale scene rendering. Code and data are available at our website:
https://github.com/facebookresearch/NSVF.
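
To make the sparse-voxel idea concrete, the following is a minimal, illustrative Python sketch of ray marching with voxel skipping: candidate samples are drawn along a ray, samples that fall in empty voxels are discarded before any field evaluation, and the survivors are alpha-composited with standard volume-rendering weights. This is not the authors' implementation (see the linked repository for that); the voxel size, the occupancy set, and the constant-fog field function are toy stand-ins for NSVF's pruned octree and learned per-voxel implicit fields.

import numpy as np

VOXEL_SIZE = 0.25
# Toy stand-in for the sparse voxel octree: a set of occupied voxel indices.
occupied = {(0, 0, 4), (0, 0, 5), (0, 0, 6)}

def field(points):
    # Stand-in for the learned implicit field, which in NSVF interpolates
    # per-voxel features and runs a small MLP. A constant fog with a fixed
    # color is enough to exercise the rendering path.
    sigma = np.full(len(points), 5.0)
    rgb = np.tile([0.8, 0.5, 0.3], (len(points), 1))
    return sigma, rgb

def render_ray(origin, direction, t_near=0.0, t_far=4.0, step=0.05):
    # Sample candidate points along the ray.
    ts = np.arange(t_near, t_far, step)
    points = origin + ts[:, None] * direction
    # Voxel skipping: evaluate the field only where the voxel is occupied.
    idx = np.floor(points / VOXEL_SIZE).astype(int)
    mask = np.array([tuple(i) in occupied for i in idx])
    if not mask.any():
        return np.zeros(3)  # the ray misses all scene content
    sigma, rgb = field(points[mask])
    # Standard volume-rendering compositing: alpha from density, then
    # transmittance-weighted accumulation of color.
    alpha = 1.0 - np.exp(-sigma * step)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))

Skipping the field evaluation for empty voxels is where the speedup over dense ray marching comes from: along a typical ray, most candidate samples never reach the network, and empty regions contribute no compute at all.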

Details

Language(s): eng - English
 Dates: 2020-07-22, 2021-01-06, 2020
 Publication Status: Published online
 Pages: 20 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 2007.11571
BibTeX Citekey: Liu_2007.11571
URI: https://arxiv.org/abs/2007.11571
 Degree: -
