
Released

Conference Paper

Modeling Blurred Video with Layers

MPS-Authors

Wulff, Jonas (Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society)

Black, Michael J. (Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society)

Citation

Wulff, J., & Black, M. J. (2014). Modeling Blurred Video with Layers. In D. Fleet, T. Pajdla, B. Schiele, & T. Tuytelaars (Eds.), Computer Vision - ECCV 2014. 13th European Conference. Proceedings, Part VI (pp. 236-252). Cham et al.: Springer International Publishing. doi:10.1007/978-3-319-10599-4_16.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0024-E29F-E
Abstract
Videos contain complex spatially-varying motion blur due to the combination of object motion, camera motion, and depth variation with finite shutter speeds. Existing methods to estimate optical flow, deblur the images, and segment the scene fail in such cases. In particular, boundaries between differently moving objects cause problems, because there the blurred image is a combination of the blurred appearances of multiple surfaces. We address this with a novel layered model of scenes in motion. From a motion-blurred video sequence, we jointly estimate the layer segmentation and each layer's appearance and motion. Since the blur is a function of the layer motion and segmentation, it is completely determined by our generative model. Given a video, we formulate the optimization problem as minimizing the pixel error between the blurred frames and images synthesized from the model, and solve it using gradient descent. We demonstrate our approach on synthetic and real sequences.
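The generative model in the abstract can be illustrated with a toy sketch: a two-layer scene (foreground with a soft segmentation mask, plus background), where each layer undergoes a pure translation during the shutter interval. The blurred frame is the time average of the composited, warped layers, and the objective is the pixel error against an observed frame. This is a simplified, hedged reconstruction, not the paper's actual implementation: the function names are invented, the motion model is translation-only, and the joint optimization over appearance, motion, and segmentation is omitted.

```python
import numpy as np
from scipy.ndimage import shift

def synthesize_blurred_frame(fg, bg, mask, v_fg, v_bg, n_samples=16):
    """Render one motion-blurred frame from a toy two-layer scene.

    The blurred image is the time average, over the shutter interval
    t in [0, 1], of the composite of each layer translated along its
    own motion. The mask moves with the foreground layer, so the blur
    at layer boundaries mixes the appearances of both surfaces.
    """
    acc = np.zeros_like(fg)
    for t in np.linspace(0.0, 1.0, n_samples):
        fg_t = shift(fg, t * np.asarray(v_fg), order=1, mode='nearest')
        bg_t = shift(bg, t * np.asarray(v_bg), order=1, mode='nearest')
        m_t = shift(mask, t * np.asarray(v_fg), order=1, mode='nearest')
        acc += m_t * fg_t + (1.0 - m_t) * bg_t
    return acc / n_samples

def pixel_error(observed, fg, bg, mask, v_fg, v_bg):
    """Objective: mean squared pixel error between the observed blurred
    frame and the frame synthesized from the layered model."""
    synth = synthesize_blurred_frame(fg, bg, mask, v_fg, v_bg)
    return np.mean((observed - synth) ** 2)
```

In the paper this error is minimized with gradient descent over the layer appearances, motions, and segmentation jointly; the sketch above only evaluates the forward model and the objective for fixed parameters.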