  IGNOR: Image-guided Neural Object Rendering

Thies, J., Zollhöfer, M., Theobalt, C., Stamminger, M., & Nießner, M. (2018). IGNOR: Image-guided Neural Object Rendering. Retrieved from http://arxiv.org/abs/1811.10720.


Basic data

Genre: Research paper
LaTeX: {IGNOR}: {Image-guided Neural Object Rendering}

Files

arXiv:1811.10720.pdf (Preprint), 5MB
Name:
arXiv:1811.10720.pdf
Description:
File downloaded from arXiv at 2019-02-11 13:04. Video: https://youtu.be/s79HG9yn7QM
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

External references

External reference:
https://youtu.be/s79HG9yn7QM (Supplementary material)
Description:
Video
OA status:

Creators

Creators:
Thies, Justus 1, Author
Zollhöfer, Michael 1, Author
Theobalt, Christian 2, Author
Stamminger, Marc 1, Author
Nießner, Matthias 1, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We propose a new learning-based novel view synthesis approach for scanned
objects that is trained based on a set of multi-view images. Instead of using
texture mapping or hand-designed image-based rendering, we directly train a
deep neural network to synthesize a view-dependent image of an object. First,
we employ a coverage-based nearest neighbour look-up to retrieve a set of
reference frames that are explicitly warped to a given target view using
cross-projection. Our network then learns to best composite the warped images.
This enables us to generate photo-realistic results, while not having to
allocate capacity on `remembering' object appearance. Instead, the multi-view
images can be reused. While this works well for diffuse objects,
cross-projection does not generalize to view-dependent effects. Therefore, we
propose a decomposition network that extracts view-dependent effects and that
is trained in a self-supervised manner. After decomposition, the diffuse
shading is cross-projected, while the view-dependent layer of the target view
is regressed. We show the effectiveness of our approach both qualitatively and
quantitatively on real as well as synthetic data.
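The first step the abstract describes, retrieving a set of reference frames for a given target view, can be illustrated with a small sketch. The paper's actual criterion is coverage-based and is not spelled out here, so the following uses plain angular proximity of viewing directions as a stand-in proxy; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def select_reference_views(target_dir, ref_dirs, k=4):
    """Pick the k reference views whose viewing directions are closest
    (by angle) to the target view.  NOTE: this is only a proxy for the
    paper's coverage-based look-up, which also accounts for how well
    the references cover the target view's surface."""
    target = target_dir / np.linalg.norm(target_dir)
    refs = ref_dirs / np.linalg.norm(ref_dirs, axis=1, keepdims=True)
    cos_sim = refs @ target              # cosine of angle to the target direction
    return np.argsort(-cos_sim)[:k]      # indices of the k most similar views

# Toy usage: target looks along +z; four candidate reference directions.
target = np.array([0.0, 0.0, 1.0])
references = np.array([
    [0.0, 0.1, 1.0],   # near the target view
    [1.0, 0.0, 0.0],   # orthogonal
    [0.0, 0.0, -1.0],  # opposite side
    [0.1, 0.0, 1.0],   # near the target view
])
print(select_reference_views(target, references, k=2))
```

The selected frames would then be warped to the target view via cross-projection and composited by the network, as the abstract outlines.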

Details

Language(s): eng - English
Date: 2018-11-26
Publication status: Published online
Pages: 10 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1811.10720
URI: http://arxiv.org/abs/1811.10720
BibTeX citekey: Thies2018IGNORIN
Degree: -

Event

Decision

Project information

Source