  Fast Non-Rigid Radiance Fields from Monocularized Data

Kappel, M., Golyanik, V., Castillo, S., Theobalt, C., & Magnor, M. A. (2022). Fast Non-Rigid Radiance Fields from Monocularized Data. Retrieved from https://arxiv.org/abs/2212.01368.

Basic data

Genre: Research paper

Files

arXiv:2212.01368.pdf (Preprint), 51MB
Name: arXiv:2212.01368.pdf
Description: File downloaded from arXiv at 2022-12-28 10:53
OA status: Not specified
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata:
Copyright date: -
Copyright info: -

Creators

Creators:
Kappel, Moritz¹, Author
Golyanik, Vladislav², Author
Castillo, Susana¹, Author
Theobalt, Christian², Author
Magnor, Marcus A.¹, Author
Affiliations:
¹ External Organizations, ou_persistent22
² Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society, ou_3311330

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Graphics, cs.GR; Computer Science, Learning, cs.LG
Abstract: 3D reconstruction and novel view synthesis of dynamic scenes from collections of single views recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in the training speed and angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360° novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) an efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) a static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time frame, randomly sampled from a full hemisphere. We refer to this form of input as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method trains significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data are available at our project page: https://graphics.tu-bs.de/publications/kappel2022fast.
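The two-module pipeline named in the abstract (a deformation module that warps observed points into a canonical space, followed by a static hash-encoded radiance field queried there) can be sketched conceptually as follows. This is a toy illustration under loose assumptions, not the authors' implementation: both `deformation_module` and `canonical_field` stand in for learned networks, and their bodies are hypothetical closed-form placeholders.

```python
# Conceptual sketch (NOT the authors' code) of the pipeline the abstract
# describes: a deformation step warps an observed point at time t into a
# canonical space, where a static radiance field is queried. In the paper
# both stages are learned networks; here they are fixed toy functions.
import numpy as np

def deformation_module(x, t):
    """Hypothetical stand-in for the learned deformation module.

    The paper's module decouples spatial and temporal processing for
    speed; this placeholder just applies a simple time-dependent warp.
    """
    return x + 0.1 * np.sin(t) * x  # toy warp, illustration only

def canonical_field(x_canonical):
    """Hypothetical stand-in for the static hash-encoded NeRF.

    Returns an (rgb, density) pair for a canonical-space point; a real
    radiance field would evaluate a hash-encoded MLP here.
    """
    density = float(np.exp(-np.sum(x_canonical ** 2)))
    rgb = np.clip(0.5 + 0.5 * x_canonical, 0.0, 1.0)
    return rgb, density

def query(x, t):
    # Warp the observed point into canonical space, then query the
    # static field -- temporal and spatial processing stay separated,
    # which is the structural point the abstract makes.
    x_canonical = deformation_module(np.asarray(x, dtype=float), t)
    return canonical_field(x_canonical)

rgb, sigma = query([0.2, -0.1, 0.3], t=0.5)
```

The separation matters for speed: only the cheap deformation stage depends on time, so the expensive canonical representation can be a fast static structure.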

Details

Language(s): eng - English
Date: 2022-12-02
Publication status: Published online
Pages: 17 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 2212.01368
URI: https://arxiv.org/abs/2212.01368
BibTeX citekey: Kappel2212.01368
Degree type: -


Project information

Project name: 4DRepLy
Grant ID: 770784
Funding programme: Horizon 2020 (H2020)
Funding organisation: European Commission (EC)
