
Released

Paper

Video Frame Interpolation for High Dynamic Range Sequences Captured with Dual-exposure Sensors

MPS-Authors
Çoğalan, Uğur
Computer Graphics, MPI for Informatics, Max Planck Society;

Bemana, Mojtaba
Computer Graphics, MPI for Informatics, Max Planck Society;

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Myszkowski, Karol
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

2206.09485.pdf
(Preprint), 39MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Çoğalan, U., Bemana, M., Seidel, H.-P., & Myszkowski, K. (2022). Video Frame Interpolation for High Dynamic Range Sequences Captured with Dual-exposure Sensors. Retrieved from https://arxiv.org/abs/2206.09485.


Cite as: https://hdl.handle.net/21.11116/0000-000C-16E8-6
Abstract
Video frame interpolation (VFI) enables many important applications that may involve the temporal domain, such as slow-motion playback, or the spatial domain, such as stop-motion sequences. We focus on the former task, where one of the key challenges is handling high dynamic range (HDR) scenes in the presence of complex motion. To this end, we explore the possible advantages of dual-exposure sensors, which readily provide sharp short and blurry long exposures that are spatially registered and whose ends are temporally aligned. This way, motion blur registers temporally continuous information on the scene motion that, combined with the sharp reference, enables more precise motion sampling within a single camera shot. We demonstrate that this facilitates more complex motion reconstruction in the VFI task, as well as HDR frame reconstruction, which so far has been considered only for the originally captured frames, not for in-between interpolated frames. We design a neural network trained for these tasks that clearly outperforms existing solutions. We also propose a metric for scene motion complexity that provides important insights into the performance of VFI methods at test time.