

  • Research Article
  • Open Access

Contextual Information and Covariance Descriptors for People Surveillance: An Application for Safety of Construction Workers

EURASIP Journal on Image and Video Processing 2011, 2011:684819

  • Received: 30 April 2010
  • Accepted: 10 December 2010


Abstract

In computer science, contextual information can be used both to reduce computation and to increase accuracy. This paper discusses how it can be exploited for people surveillance in environments that are highly cluttered both in perspective (addressed through weak scene calibration) and in the appearance of the objects of interest (addressed through relevance feedback on the training of a classifier). These techniques are applied to a pedestrian detector based on a LogitBoost classifier, appropriately modified to work with covariance descriptors, which lie on Riemannian manifolds. On each detected pedestrian, a similar classifier is employed to obtain a precise localization of the head. Two algorithmic novelties are proposed for this step: polar image transformations, which better exploit the circular appearance of the head, and multispectral image derivatives, which capture not only luminance but also chrominance variations. The complete approach has been tested on the surveillance of a construction site to detect workers who are not wearing hard hats: in such scenarios the complexity and dynamics are very high, making pedestrian detection a real challenge.
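To make the key ingredients concrete, the following is a minimal sketch (not the authors' implementation) of a covariance region descriptor in the style of Tuzel et al., here built from pixel coordinates plus per-channel ("multispectral") image derivatives, together with the standard affine-invariant Riemannian distance between two such descriptors. The exact feature set and any thresholds are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def covariance_descriptor(region):
    """Covariance descriptor of an image region (sketch only).

    region: H x W x C float array (e.g. an RGB crop). Each pixel is mapped
    to a feature vector [x, y, dI_1/dx, dI_1/dy, ..., dI_C/dx, dI_C/dy],
    using per-channel derivatives so that chrominance variations are
    captured alongside luminance, and the descriptor is the d x d
    covariance of these vectors over the region (d = 2 + 2C).
    """
    H, W, C = region.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)   # pixel coordinates
    feats = [xs, ys]
    for c in range(C):
        gy, gx = np.gradient(region[:, :, c])   # per-channel derivatives
        feats.extend([gx, gy])
    F = np.stack([f.ravel() for f in feats], axis=1)  # (H*W, d) samples
    return np.cov(F, rowvar=False)              # d x d symmetric matrix

def riemannian_distance(X, Y):
    """Affine-invariant distance between SPD matrices:
    sqrt(sum_i ln^2(lambda_i)), with lambda_i the generalized
    eigenvalues of (X, Y)."""
    lam = np.linalg.eigvals(np.linalg.solve(Y, X)).real
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

Because covariance matrices do not form a vector space, boosting frameworks such as the paper's modified LogitBoost compare them with a Riemannian metric like the one above rather than with Euclidean distance.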


  • Manifold
  • Riemannian Manifold
  • Contextual Information
  • Construction Site
  • Relevance Feedback

Publisher note

To access the full article, please see PDF.

Authors’ Affiliations

DII, University of Modena and Reggio Emilia, 41122 Modena, Italy
DISMI, University of Modena and Reggio Emilia, 42122 Reggio Emilia, Italy


© Giovanni Gualdi et al. 2011

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.