
Released

Conference Paper

Monocular Heading Estimation in Non-stationary Urban Environment

MPG Authors
/persons/resource/persons83965

Herdtweck, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83871

Curio, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Herdtweck, C., & Curio, C. (2012). Monocular Heading Estimation in Non-stationary Urban Environment. In 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI) (pp. 244-250). Piscataway, NJ, USA: IEEE.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-B632-8
Abstract
Estimating heading information reliably from visual cues only is an important goal in human navigation research as well as in application areas ranging from robotics to automotive safety. The focus of expansion (FoE) is deemed to be important for this task. Yet, dynamic and unstructured environments like urban areas still pose an algorithmic challenge. We extend a robust learning framework that operates on optical flow and has at its center a continuous Latent Variable Model (LVM) [1]. It accounts for missing measurements, erroneous correspondences, and independent outlier motion in the visual field of view. The approach bypasses classical camera calibration through learning stages that only require monocular video footage and corresponding platform motion information. To estimate the FoE we present both a numerical method acting on inferred optical flow fields and regression mapping, e.g. Gaussian Process regression. We also present results for mapping to velocity, yaw, and even pitch and roll. Performance is demonstrated for car data recorded in non-stationary, urban environments.
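
The abstract mentions a numerical method that estimates the FoE from inferred optical flow fields. As a rough illustration of the kind of computation involved (not the authors' method, which additionally handles missing measurements and outlier motion through the latent variable model), here is a minimal least-squares sketch in Python, assuming a sparse flow field given as point/vector pairs; all function and variable names are hypothetical:

```python
import numpy as np

def estimate_foe_least_squares(points, flows):
    """Estimate the focus of expansion (FoE) from a sparse optical flow field.

    Under (approximately) pure forward translation, each flow vector points
    away from the FoE, so the FoE can be taken as the least-squares
    intersection of the lines through each point along its flow direction.

    points : (N, 2) array of image coordinates (x, y)
    flows  : (N, 2) array of flow vectors (u, v) at those points
    returns: (2,) array with the estimated FoE in image coordinates
    """
    points = np.asarray(points, dtype=float)
    flows = np.asarray(flows, dtype=float)

    # Line constraint: the FoE e satisfies  v x (e - p) = 0  for each point p
    # with flow v, i.e.  -v_y * e_x + v_x * e_y = v_x * p_y - v_y * p_x.
    A = np.column_stack([-flows[:, 1], flows[:, 0]])
    b = flows[:, 0] * points[:, 1] - flows[:, 1] * points[:, 0]

    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe


if __name__ == "__main__":
    # Synthetic check: radial flow expanding away from a known FoE plus noise.
    rng = np.random.default_rng(0)
    true_foe = np.array([320.0, 240.0])
    pts = rng.uniform([0, 0], [640, 480], size=(200, 2))
    flow = 0.05 * (pts - true_foe) + rng.normal(scale=0.2, size=(200, 2))
    print(estimate_foe_least_squares(pts, flow))  # approx. [320, 240]
```

This plain least-squares step is sensitive to outlier motion from independently moving objects, which is precisely the kind of robustness the LVM-based framework in the paper is designed to provide.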