
Released

Research Paper

Occlusion-Aware Depth Estimation with Adaptive Normal Constraints

MPG Authors

Liu, Lingjie
Computer Graphics, MPI for Informatics, Max Planck Society;


Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
No external resources have been provided.
Full Texts (restricted access)
No full texts are currently released for your IP range.
Full Texts (freely accessible)

arXiv:2004.00845.pdf
(Preprint), 9KB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Long, X., Liu, L., Theobalt, C., & Wang, W. (2020). Occlusion-Aware Depth Estimation with Adaptive Normal Constraints. ECCV 2020. Lecture Notes in Computer Science, vol 12354. Springer, Cham. Retrieved from https://arxiv.org/abs/2004.00845.


Citation link: https://hdl.handle.net/21.11116/0000-0007-E0E9-5
Abstract
We present a new learning-based method for multi-frame depth estimation from
a color video, a fundamental problem in scene understanding, robot navigation,
and handheld 3D reconstruction. While recent learning-based methods estimate
depth with high accuracy, 3D point clouds exported from their depth maps often
fail to preserve important geometric features (e.g., corners, edges, planes) of
man-made scenes. Widely used pixel-wise depth errors do not specifically
penalize inconsistency on these features. These inaccuracies are particularly
severe when subsequent depth reconstructions are accumulated in an attempt to
scan a full environment containing man-made objects with such features. Our
depth estimation algorithm therefore introduces a Combined Normal Map (CNM)
constraint, which is designed to better preserve high-curvature features and
global planar regions. To further improve depth estimation accuracy, we
introduce a new occlusion-aware strategy that aggregates initial depth
predictions from multiple adjacent views into one final depth map and one
occlusion probability map for the current reference view. Our method
outperforms the state of the art in depth estimation accuracy and preserves
essential geometric features of man-made indoor scenes much better than other
algorithms.
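The occlusion-aware aggregation described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's learned fusion network: the confidence-weighted averaging, the low-confidence occlusion heuristic, and all function and parameter names (`fuse_depth_maps`, `occlusion_threshold`) are illustrative assumptions for how per-view depth predictions might be combined into one depth map plus an occlusion probability map.

```python
import numpy as np

def fuse_depth_maps(depth_preds, confidences, occlusion_threshold=0.5):
    """Illustrative fusion of per-view depth predictions (an assumption,
    not the paper's method) into one depth map and one occlusion
    probability map for the reference view.

    depth_preds:  (N, H, W) initial depth maps from N adjacent views,
                  already warped into the reference view
    confidences:  (N, H, W) per-pixel confidence weights in [0, 1]
    """
    depth_preds = np.asarray(depth_preds, dtype=np.float64)
    confidences = np.asarray(confidences, dtype=np.float64)

    # Confidence-weighted average over the N source views.
    weight_sum = confidences.sum(axis=0)
    fused_depth = (depth_preds * confidences).sum(axis=0) / np.maximum(weight_sum, 1e-8)

    # Heuristic: pixels that no view explains confidently are likely
    # occluded, so low total confidence maps to high occlusion probability.
    occlusion_prob = 1.0 - weight_sum / depth_preds.shape[0]

    # Zero out depth at pixels whose occlusion probability exceeds the threshold.
    fused_depth = np.where(occlusion_prob > occlusion_threshold, 0.0, fused_depth)
    return fused_depth, occlusion_prob
```

For example, two 2x2 depth maps with values 1.0 and 3.0 and full confidence fuse to 2.0; where the second view's confidence drops to zero, only the first view's depth survives and the occlusion probability rises accordingly.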