
Record


Released

Conference Paper

Robust Pose Estimation with 3D Textured Models

MPG Authors
Gall, Jürgen
Computer Graphics, MPI for Informatics, Max Planck Society

Rosenhahn, Bodo
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

External Resources
There are no external resources stored for this record.
Full Texts (restricted access)
No full texts are currently released for your IP range.
Full Texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Gall, J., Rosenhahn, B., & Seidel, H.-P. (2006). Robust Pose Estimation with 3D Textured Models. In Advances in Image and Video Technology, First Pacific Rim Symposium, PSIVT 2006 (pp. 84-95). Berlin, Germany: Springer.


Citation link: https://hdl.handle.net/11858/00-001M-0000-000F-23E0-9
Abstract
Estimating the pose of a rigid body means determining the rigid-body motion in 3D space from 2D images. For this purpose, it is reasonable to make use of existing knowledge of the object. Our approach exploits the 3D shape and the texture of the tracked object, in the form of a 3D textured model, to establish 3D-2D correspondences for pose estimation. While the surface of the 3D free-form model is matched to the contour extracted by segmentation, additional reliable correspondences are obtained by matching local descriptors of interest points between the textured model and the images. The fusion of these complementary features provides robust pose estimation. Moreover, the initial pose is detected automatically, and the pose is predicted for each frame. Using the predicted pose as a shape prior makes the contour extraction more robust. The performance of our method is demonstrated by stereo tracking experiments.
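
The following is a minimal, illustrative sketch, not the authors' implementation, of the descriptor-matching part of such a pipeline: 3D points on a textured model are matched to 2D interest points in an image via local descriptors, and a rigid pose is recovered robustly from the resulting 3D-2D correspondences. It uses SIFT descriptors and OpenCV's generic PnP-RANSAC solver as stand-ins; the paper instead fuses these texture correspondences with contour correspondences in its own pose optimization. All function and variable names below are illustrative.

# Hedged sketch: pose from descriptor-based 3D-2D correspondences.
# Assumes SIFT descriptors were precomputed on the model's texture,
# each tied to a known 3D surface point; these are assumptions, not
# details taken from the paper.
import numpy as np
import cv2

def pose_from_descriptor_matches(model_points_3d, model_descriptors,
                                 image, camera_matrix, dist_coeffs=None):
    """Estimate a rigid pose (rotation, translation) of a textured model.

    model_points_3d  : (N, 3) float32 array of 3D points on the model surface
    model_descriptors: (N, 128) float32 SIFT descriptors from the texture
    image            : grayscale frame in which the object is tracked
    camera_matrix    : (3, 3) intrinsic calibration matrix
    """
    # Detect interest points in the image and compute their descriptors.
    sift = cv2.SIFT_create()
    keypoints, image_descriptors = sift.detectAndCompute(image, None)
    if image_descriptors is None:
        return None

    # Match model descriptors against image descriptors; Lowe's ratio
    # test discards ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(model_descriptors, image_descriptors, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 6:
        return None  # too few 3D-2D correspondences for a stable pose

    object_pts = np.float32([model_points_3d[m.queryIdx] for m in good])
    image_pts = np.float32([keypoints[m.trainIdx].pt for m in good])

    # RANSAC rejects remaining descriptor mismatches, giving a robust
    # pose; in the paper, robustness additionally comes from fusing
    # these matches with contour correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts, image_pts, camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    return rotation, tvec

In a tracking loop, the pose returned for one frame could serve as the prediction for the next, analogous to the paper's use of the predicted pose as a shape prior for segmentation.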