Record

  Neural Sparse Voxel Fields

Liu, L., Gu, J., Lin, K. Z., Chua, T.-S., & Theobalt, C. (2020). Neural Sparse Voxel Fields. Retrieved from https://arxiv.org/abs/2007.11571.


Basic data

Genre: Research paper

Files

arXiv:2007.11571.pdf (Preprint), 10MB
Name: arXiv:2007.11571.pdf
Description: File downloaded from arXiv at 2021-02-08 10:48
OA status: -
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata: -
Copyright date: -
Copyright info: -

Creators

Creators:
Liu, Lingjie (1), Author
Gu, Jiatao (2), Author
Lin, Kyaw Zaw (2), Author
Chua, Tat-Seng (2), Author
Theobalt, Christian (1), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Graphics, cs.GR; Computer Science, Learning, cs.LG
Abstract: Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: https://github.com/facebookresearch/NSVF.
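
The voxel-skipping idea in the abstract can be illustrated with a minimal sketch. The Python fragment below is not taken from the NSVF repository; all names (voxel_size, occupied, field_fn, march_ray) are assumptions made for this example. It samples points along one camera ray and evaluates an implicit field only where a sample falls inside an occupied sparse voxel, skipping empty space.

# Illustrative sketch only -- not the authors' code. It mimics the idea from the
# abstract: evaluate the (expensive) implicit field only at ray samples that fall
# inside occupied sparse voxels, and skip empty space entirely.
import numpy as np

def voxel_index(point, voxel_size):
    """Map a 3D point to the integer index of the voxel containing it."""
    return tuple(np.floor(point / voxel_size).astype(int))

def march_ray(origin, direction, occupied, field_fn,
              voxel_size=0.25, near=0.0, far=4.0, step=0.01):
    """Accumulate colour along one ray, skipping samples in empty voxels.

    occupied : set of voxel indices that contain scene content
    field_fn : maps a 3D point to (rgb, density); stands in for the
               voxel-bounded implicit fields described in the paper
    """
    direction = direction / np.linalg.norm(direction)
    color = np.zeros(3)
    transmittance = 1.0
    for t in np.arange(near, far, step):
        p = origin + t * direction
        if voxel_index(p, voxel_size) not in occupied:
            continue                           # empty space: no field evaluation
        rgb, sigma = field_fn(p)
        alpha = 1.0 - np.exp(-sigma * step)    # standard volume-rendering weight
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:               # early termination once opaque
            break
    return color

if __name__ == "__main__":
    # Toy scene: a single occupied voxel near the origin with a constant field.
    occupied = {(0, 0, 0)}
    field_fn = lambda p: (np.array([1.0, 0.5, 0.2]), 5.0)
    c = march_ray(np.array([0.1, 0.1, -1.0]), np.array([0.0, 0.0, 1.0]),
                  occupied, field_fn)
    print("rendered colour:", c)

The actual method differs from this sketch: the paper organizes voxels in a sparse octree and intersects rays with voxel bounds to decide where to sample, rather than testing every fixed-step sample, and the per-voxel fields are learned networks rather than a constant function.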

Details

Language(s): eng - English
Date: 2020-07-22, 2021-01-06, 2020
Publication status: Published online
Pages: 20 p.
Place, publisher, edition: -
Table of contents: -
Review method: -
Identifiers: arXiv: 2007.11571
BibTeX cite key: Liu_2007.11571
URI: https://arxiv.org/abs/2007.11571
Degree: -
