  • Research Article
  • Open access

Monocular 3D Tracking of Articulated Human Motion in Silhouette and Pose Manifolds

Abstract

This paper presents a robust computational framework for monocular 3D tracking of articulated human motion. The main innovation of the proposed framework is to exploit the underlying data structures of the body silhouette and pose spaces by constructing low-dimensional silhouette and pose manifolds, establishing inter-manifold mappings, and performing tracking in these manifolds using a particle filter. In addition, a novel vectorized silhouette descriptor is introduced to achieve a low-dimensional, noise-resilient silhouette representation. The proposed articulated motion tracker is view-independent, self-initializing, and capable of maintaining multiple kinematic trajectories. Because the learned mapping from the silhouette manifold to the pose manifold allows particle sampling to be informed by the current image observation, the tracker achieves improved sample efficiency. Promising tracking results have been obtained on both synthetic and real videos.
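The abstract does not spell out the algorithmic details, but the central idea, particle filtering on a learned low-dimensional pose manifold with a subset of samples drawn from the pose predicted by the silhouette-to-pose mapping, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the callables `silhouette_to_pose` and `likelihood`, the random-walk dynamics, and all parameter names are placeholders introduced here.

```python
import numpy as np

def particle_filter_step(particles, weights, silhouette_point,
                         silhouette_to_pose, likelihood,
                         n_informed=50, noise_std=0.05, rng=None):
    """One tracking step on a low-dimensional pose manifold (sketch).

    Most particles follow a simple random-walk dynamics model; a few are
    drawn near the pose predicted from the current silhouette via a learned
    inter-manifold mapping, which is how observation-informed sampling
    can improve sample efficiency and support self-initialization.
    """
    rng = rng or np.random.default_rng()
    n, d = particles.shape

    # Resample according to the current weights (multinomial resampling,
    # kept simple for the sketch).
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]

    # Propagate with Gaussian diffusion in the manifold coordinates.
    particles = particles + rng.normal(0.0, noise_std, size=(n, d))

    # Replace a subset with samples around the pose predicted from the
    # current silhouette observation; this also allows recovery after
    # tracking failure and keeps multiple kinematic hypotheses alive.
    predicted_pose = silhouette_to_pose(silhouette_point)
    particles[:n_informed] = predicted_pose + rng.normal(
        0.0, noise_std, size=(n_informed, d))

    # Reweight each hypothesis by its image likelihood.
    weights = np.array([likelihood(x) for x in particles])
    weights = weights / weights.sum()
    return particles, weights
```

In the actual framework, the likelihood would presumably compare the silhouette descriptor predicted from a hypothesized pose against the observed one; it is left abstract here.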

Publisher note

To access the full article, please see PDF.

Author information


Corresponding author

Correspondence to Gang Qian.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 Generic License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Guo, F., Qian, G. Monocular 3D Tracking of Articulated Human Motion in Silhouette and Pose Manifolds. J Image Video Proc 2008, 326896 (2008). https://doi.org/10.1155/2008/326896


  • DOI: https://doi.org/10.1155/2008/326896
