  Neural View-Interpolation for Sparse Light Field Video

Bemana, M., Myszkowski, K., Seidel, H.-P., & Ritschel, T. (2019). Neural View-Interpolation for Sparse Light Field Video. Retrieved from http://arxiv.org/abs/1910.13921.

Files

arXiv:1910.13921.pdf (Preprint), 4MB
 
File Permalink:
-
Name:
arXiv:1910.13921.pdf
Description:
File downloaded from arXiv at 2020-01-15 12:26
OA-Status:
Visibility:
Private (embargoed till 2020-07-31)
MIME-Type / Checksum:
application/pdf
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Bemana, Mojtaba (1), Author
Myszkowski, Karol (1), Author
Seidel, Hans-Peter (1), Author
Ritschel, Tobias (2), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Graphics (cs.GR); Computer Science, Computer Vision and Pattern Recognition (cs.CV); Computer Science, Learning (cs.LG); Image and Video Processing (eess.IV)
 Abstract: We suggest representing light field (LF) videos as "one-off" neural networks
(NN), i.e., a learned mapping from view-plus-time coordinates to
high-resolution color values, trained on sparse views. Initially, this sounds
like a bad idea for three main reasons: First, a NN LF will likely have lower
quality than a same-sized pixel-basis representation. Second, only little training
data, e.g., nine exemplars per frame, is available for sparse LF videos. Third,
there is no generalization across LFs, but across view and time instead;
consequently, a network needs to be trained for each LF video. Surprisingly,
these problems can turn into substantial advantages: Unlike the linear
pixel basis, a NN has to come up with a compact, non-linear, i.e., more
intelligent, explanation of color, conditioned on the sparse view and time
coordinates. As observed for many NNs, however, this representation is now
interpolatable: if the image output is plausible for the sparse view coordinates,
it is plausible for all intermediate, continuous coordinates as well. Our specific
network architecture involves a differentiable occlusion-aware warping step,
which leads to a compact set of trainable parameters and consequently fast
learning and fast execution.
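The abstract's core idea, a "one-off" NN as a learned mapping from view-plus-time coordinates to color, can be sketched as a tiny coordinate MLP. This is illustrative only: the paper's actual architecture additionally involves a differentiable occlusion-aware warping step, and the layer sizes and random weights below are hypothetical stand-ins for parameters that would be fit to one LF video's sparse views.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny coordinate MLP: (u, v, t) view-plus-time coordinates -> RGB color.
# Weights are random here; in the paper's setting they would be trained
# on the sparse views of a single light-field video ("one-off" NN).
W1 = rng.normal(0.0, 0.5, (3, 64))
b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.5, (64, 3))
b2 = np.zeros(3)

def color(coords):
    """Map an (N, 3) array of (u, v, t) coordinates to (N, 3) RGB in (0, 1)."""
    h = np.tanh(coords @ W1 + b1)                 # compact non-linear hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps colors in range

# Sparse training views would sit on a coarse (u, v) grid per frame
# (e.g., nine exemplars); because the learned mapping is continuous,
# it can also be queried at any intermediate view coordinate.
sparse = color(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]))
interp = color(np.array([[0.5, 0.0, 0.0]]))  # in-between view, same frame
```

The design point this illustrates is the interpolation property: nothing in the mapping distinguishes the sparse training coordinates from intermediate ones, so plausible output at the former tends to carry over to the latter.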

Details

Language(s): eng - English
 Dates: 2019-10-30, 2019-11-06, 2019
 Publication Status: Published online
 Pages: 11 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1910.13921
URI: http://arxiv.org/abs/1910.13921
BibTex Citekey: Bemana_arXiv1910.13921
 Degree: -
