
Released

Conference Paper

Intrinsic Video

MPS-Authors

Kong, Naejin (/persons/resource/persons85102)
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society

Gehler, Peter (/persons/resource/persons44483)
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society

Black, Michael J. (/persons/resource/persons75293)
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society

Citation

Kong, N., Gehler, P., & Black, M. J. (2014). Intrinsic Video. In D. Fleet, T. Pajdla, B. Schiele, & T. Tuytelaars (Eds.), Computer Vision - ECCV 2014. Proceedings, Part II (pp. 360-375). Cham et al.: Springer International Publishing.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0024-C6DE-D
Abstract
Intrinsic images such as albedo and shading are valuable for later stages of visual processing. Previous methods for extracting albedo and shading use either single images or images together with depth data. Instead, we define intrinsic video estimation as the problem of extracting temporally coherent albedo and shading from video alone. Our approach exploits the assumption that albedo is constant over time while shading changes slowly. Optical flow aids in the accurate estimation of intrinsic video by providing temporal continuity as well as putative surface boundaries. Additionally, we find that the estimated albedo sequence can be used to improve optical flow accuracy in sequences with changing illumination. The approach makes only weak assumptions about the scene and we show that it substantially outperforms existing single-frame intrinsic image methods. We evaluate this quantitatively on synthetic sequences as well as on challenging natural sequences with complex geometry, motion, and illumination.
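
The core idea in the abstract can be made concrete with a small sketch. The Python fragment below is not the authors' implementation; it only illustrates, under simplifying assumptions (grayscale frames, hand-picked weights, a crude gradient-descent solver, nearest-neighbour warping), how a per-frame decomposition I = A * S (log I = log A + log S) can be coupled with an optical-flow-based constraint that keeps albedo constant over time while pushing shading to vary smoothly. All function and parameter names here are illustrative.

import numpy as np

def warp(img, flow):
    # Backward-warp a 2D image by a dense flow field (nearest neighbour, for brevity).
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xq = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yq = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[yq, xq]

def intrinsic_video(frames, flows, iters=200, lr=0.1,
                    w_albedo_const=1.0, w_shading_smooth=0.1):
    # frames: list of T grayscale images with values in (0, 1].
    # flows: list of T-1 flow fields mapping frame t to frame t+1, shape (h, w, 2).
    # Returns per-frame log-albedo and log-shading estimates.
    logI = [np.log(f) for f in frames]
    # Initialise by splitting each log-frame evenly between albedo and shading.
    logA = [0.5 * li.copy() for li in logI]
    logS = [0.5 * li.copy() for li in logI]
    for _ in range(iters):
        for t in range(len(frames)):
            # Data term: log A_t + log S_t should reproduce log I_t.
            r_data = (logA[t] + logS[t]) - logI[t]
            g_A = r_data.copy()
            g_S = r_data.copy()
            # Temporal term: push albedo toward the next frame's albedo
            # warped back via optical flow (one-sided approximation).
            if t + 1 < len(frames):
                g_A += w_albedo_const * (logA[t] - warp(logA[t + 1], flows[t]))
            # Spatial term: crude push toward spatially smooth shading
            # (shading is assumed to change slowly).
            gx = np.diff(logS[t], axis=1, append=logS[t][:, -1:])
            gy = np.diff(logS[t], axis=0, append=logS[t][-1:, :])
            g_S += w_shading_smooth * (gx + gy)
            logA[t] -= lr * g_A
            logS[t] -= lr * g_S
    return logA, logS

# Example usage on synthetic data (zero flow, three random frames):
# frames = [np.random.rand(32, 32) * 0.9 + 0.1 for _ in range(3)]
# flows = [np.zeros((32, 32, 2)) for _ in range(2)]
# logA, logS = intrinsic_video(frames, flows)

A real system would pose this as a robust energy over the whole sequence and solve it with an appropriate optimizer; the sketch only makes the temporal albedo-constancy and slow-shading assumptions concrete.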