Abstract:
Action recognition by the brain is thought to combine the recognition of body configurations with some form of feature tracking over time (for a review see [1]). In computer vision, these two mechanisms have so far typically been addressed as separate problems: proposed solutions have been based either on independent recognition of learned body configurations [2,3] or on high-level, model-based stochastic tracking mechanisms [4–6] in both 2D and 3D space. A new system will be presented that integrates dynamic view-based pose configuration estimation
and model-based articulated tracking, resulting in increased robustness and improved generalization from a small set of training data. A combination of kernel-based nonlinear regression and competitive particle filtering robustly maps image features onto points of a smooth manifold of body postures (action space). This analysis-by-synthesis posture estimate is used as a prior for the automatic initialization and support of the tracking of a flexible articulated 2D model. It will be discussed how this computational approach
represents perceptual ambiguities and robustly solves the correspondence problem for the reconstruction of human body pose under strong self-occlusions.
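The combination described above, nonlinear regression from image features onto a low-dimensional posture manifold, with a particle filter that uses the regressed estimate as a prior, can be sketched roughly as follows. This is an illustrative toy sketch only: the class and function names, the RBF-kernel and kernel-ridge choices, and all parameter values are assumptions for illustration, not the authors' actual system.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) Gram matrix between the row vectors of A and B.
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

class KernelPoseRegressor:
    """Kernel ridge regression from image features to coordinates on a
    low-dimensional manifold of body postures ("action space")."""

    def __init__(self, gamma=1.0, lam=1e-4):
        self.gamma, self.lam = gamma, lam

    def fit(self, X, Y):
        # X: (n, d) image features; Y: (n, k) manifold coordinates.
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), Y)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.alpha

def particle_filter_step(particles, weights, pose_prior,
                         sigma_dyn=0.05, sigma_obs=0.1, rng=None):
    # One predict/weight/resample cycle: diffuse the particles on the
    # manifold, reweight them by agreement with the regressed pose
    # estimate (used here as the prior), then resample.
    if rng is None:
        rng = np.random.default_rng(0)
    particles = particles + rng.normal(0.0, sigma_dyn, particles.shape)
    d2 = ((particles - pose_prior)**2).sum(1)
    logw = -0.5 * d2 / sigma_obs**2
    logw -= logw.max()                      # numerical stability
    weights = weights * np.exp(logw)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

On a toy circular "action space" (poses parameterized by an angle), fitting the regressor on feature/pose pairs and running a few filter steps pulls the particle cloud toward the regressed estimate, which is the sense in which the posture estimate supports initialization and tracking.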