  IGNOR: Image-guided Neural Object Rendering

Thies, J., Zollhöfer, M., Theobalt, C., Stamminger, M., & Nießner, M. (2018). IGNOR: Image-guided Neural Object Rendering. Retrieved from http://arxiv.org/abs/1811.10720.

Basic

Genre: Paper
LaTeX: {IGNOR}: {Image-guided Neural Object Rendering}

Files

arXiv:1811.10720.pdf (Preprint), 5MB
Name: arXiv:1811.10720.pdf
Description: File downloaded from arXiv at 2019-02-11 13:04. Video: https://youtu.be/s79HG9yn7QM
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Locators

Locator: https://youtu.be/s79HG9yn7QM (Supplementary material)
Description: Video
OA-Status: -

Creators

Creators:
Thies, Justus [1], Author
Zollhöfer, Michael [1], Author
Theobalt, Christian [2], Author
Stamminger, Marc [1], Author
Nießner, Matthias [1], Author
Affiliations:
[1] External Organizations, ou_persistent22
[2] Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We propose a new learning-based novel view synthesis approach for scanned objects that is trained based on a set of multi-view images. Instead of using texture mapping or hand-designed image-based rendering, we directly train a deep neural network to synthesize a view-dependent image of an object. First, we employ a coverage-based nearest neighbour look-up to retrieve a set of reference frames that are explicitly warped to a given target view using cross-projection. Our network then learns to best composite the warped images. This enables us to generate photo-realistic results, while not having to allocate capacity on 'remembering' object appearance. Instead, the multi-view images can be reused. While this works well for diffuse objects, cross-projection does not generalize to view-dependent effects. Therefore, we propose a decomposition network that extracts view-dependent effects and that is trained in a self-supervised manner. After decomposition, the diffuse shading is cross-projected, while the view-dependent layer of the target view is regressed. We show the effectiveness of our approach both qualitatively and quantitatively on real as well as synthetic data.
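
The cross-projection step named in the abstract is not spelled out in this record. As a rough, non-authoritative sketch of the general idea, the following NumPy snippet forward-warps a reference frame into a target camera using per-pixel depth and pinhole projection; the function name, the z-buffered splatting, and all parameters are illustrative assumptions, not the authors' implementation.

import numpy as np

def cross_project(ref_img, ref_depth, K_ref, E_ref, K_tgt, E_tgt):
    # Illustrative sketch, not the paper's code.
    # ref_img:     (H, W, 3) reference colors
    # ref_depth:   (H, W) depth along the reference camera's z-axis
    # K_ref/K_tgt: (3, 3) pinhole intrinsics
    # E_ref/E_tgt: (4, 4) world-to-camera extrinsics
    H, W = ref_depth.shape
    v, u = np.mgrid[0:H, 0:W]

    # Unproject every reference pixel into camera space, then into world space.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)
    cam = np.linalg.inv(K_ref) @ pix * ref_depth.reshape(1, -1)
    world = np.linalg.inv(E_ref) @ np.vstack([cam, np.ones((1, cam.shape[1]))])

    # Re-project the world points into the target camera.
    cam_t = (E_tgt @ world)[:3]
    z = cam_t[2]
    z_safe = np.where(z > 1e-8, z, 1.0)  # avoid dividing by zero for points behind the camera
    proj = K_tgt @ cam_t
    ut = np.round(proj[0] / z_safe).astype(int)
    vt = np.round(proj[1] / z_safe).astype(int)

    # Z-buffered nearest-pixel splat: the closest surface wins where pixels collide.
    out = np.zeros_like(ref_img)
    zbuf = np.full((H, W), np.inf)
    colors = ref_img.reshape(-1, 3)
    valid = (z > 1e-8) & (ut >= 0) & (ut < W) & (vt >= 0) & (vt < H)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[vt[i], ut[i]]:
            zbuf[vt[i], ut[i]] = z[i]
            out[vt[i], ut[i]] = colors[i]
    return out

Per the abstract, several reference frames warped this way are composited by a network, and the view-dependent layer of the target view is regressed rather than warped; the sketch above covers only the geometric warp.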

Details

Language(s): eng - English
Dates: 2018-11-26, 2018
 Publication Status: Published online
 Pages: 10 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1811.10720
URI: http://arxiv.org/abs/1811.10720
BibTeX Citekey: Thies2018IGNORIN
 Degree: -
