Paper

Occlusion-Aware Depth Estimation with Adaptive Normal Constraints

MPS-Authors

Liu, Lingjie
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

External Resource
No external resources are shared
Fulltext (public)

arXiv:2004.00845.pdf
(Preprint), 9KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Long, X., Liu, L., Theobalt, C., & Wang, W. (2020). Occlusion-Aware Depth Estimation with Adaptive Normal Constraints. In ECCV 2020, Lecture Notes in Computer Science, Vol. 12354. Springer, Cham. Retrieved from https://arxiv.org/abs/2004.00845.


Cite as: http://hdl.handle.net/21.11116/0000-0007-E0E9-5
Abstract
We present a new learning-based method for multi-frame depth estimation from a color video, a fundamental problem in scene understanding, robot navigation, and handheld 3D reconstruction. While recent learning-based methods estimate depth with high accuracy, the 3D point clouds exported from their depth maps often fail to preserve important geometric features (e.g., corners, edges, planes) of man-made scenes. Widely used pixel-wise depth errors do not specifically penalize inconsistency on these features, and the resulting inaccuracies become particularly severe when depth maps from successive frames are accumulated to scan a full environment containing man-made objects with such features. Our depth estimation algorithm therefore introduces a Combined Normal Map (CNM) constraint, designed to better preserve high-curvature features and global planar regions. To further improve depth estimation accuracy, we introduce a new occlusion-aware strategy that aggregates initial depth predictions from multiple adjacent views into one final depth map and one occlusion probability map for the current reference view. Our method outperforms the state of the art in depth estimation accuracy, and preserves the essential geometric features of man-made indoor scenes much better than other algorithms.
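The abstract describes the two key components, the CNM constraint and the occlusion-aware aggregation, only at this high level; the following is a minimal NumPy sketch of the underlying ideas, not the authors' implementation. All names (depth_to_normals, normal_consistency_loss, occlusion_aware_fusion) and the simplified pinhole-camera model are assumptions made here for illustration: normals are derived from a predicted depth map by finite differences and compared against a reference normal map (standing in for the Combined Normal Map) via cosine similarity, and per-view depth predictions are fused with weights of (1 - occlusion probability).

```python
import numpy as np

def depth_to_normals(depth, fx, fy):
    """Derive per-pixel surface normals from a depth map of shape (H, W).

    Back-projects pixels to camera space with a simplified pinhole model
    (the principal point is dropped; it only shifts the grid and cancels
    in the finite differences) and crosses the two tangent vectors.
    Returns an (H-1, W-1, 3) array of unit normals.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w, dtype=np.float64),
                       np.arange(h, dtype=np.float64))
    pts = np.stack([u * depth / fx, v * depth / fy, depth], axis=-1)
    dx = pts[:, 1:, :] - pts[:, :-1, :]          # tangent along image x
    dy = pts[1:, :, :] - pts[:-1, :, :]          # tangent along image y
    n = np.cross(dx[:-1, :, :], dy[:, :-1, :])   # common (H-1, W-1) grid
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)

def normal_consistency_loss(pred_depth, ref_normals, fx, fy):
    """Mean (1 - cosine similarity) between normals computed from the
    predicted depth and a reference normal map of shape (H-1, W-1, 3),
    which here stands in for the paper's Combined Normal Map."""
    pred_n = depth_to_normals(pred_depth, fx, fy)
    cos = np.sum(pred_n * ref_normals, axis=-1)
    return float(np.mean(1.0 - cos))

def occlusion_aware_fusion(depths, occ_probs):
    """Fuse per-view depth predictions (a list of (H, W) arrays) into one
    depth map, down-weighting pixels that are likely occluded: a simple
    stand-in for the paper's occlusion-aware aggregation step."""
    weights = 1.0 - np.stack(occ_probs)   # (V, H, W), large where visible
    fused = np.sum(weights * np.stack(depths), axis=0)
    return fused / (np.sum(weights, axis=0) + 1e-8)

# Toy usage with synthetic data and made-up intrinsics (fx = fy = 525):
# a fronto-parallel plane at 2 m yields the constant normal (0, 0, 1)
# under the cross-product orientation chosen above.
if __name__ == "__main__":
    depth = np.full((480, 640), 2.0)
    normals = depth_to_normals(depth, fx=525.0, fy=525.0)
    print(normals[240, 320])
```

The cosine-similarity form penalizes angular deviation of surface orientation directly, which is why a normal-based constraint preserves corners, edges, and planes better than a pixel-wise depth error: two depth maps can have identical mean depth error while one of them rounds off every crease.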