  • Research Article
  • Open access

Background Subtraction via Robust Dictionary Learning

Abstract

We propose a learning-based background subtraction approach built on the theory of sparse representation and dictionary learning. Our method makes two key assumptions: (1) the background of a scene has a sparse linear representation over a learned dictionary; (2) the foreground is "sparse" in the sense that the majority of pixels in each frame belong to the background. These two assumptions enable our method to handle both sudden and gradual background changes better than existing methods. As discussed in the paper, how the dictionary is learned is critical to the success of background modeling in our method. To build a correct background model when the training samples are not foreground-free, we propose a novel robust dictionary learning algorithm that automatically prunes foreground pixels as outliers during the learning stage. Qualitative and quantitative comparisons with competing methods demonstrate robustness against background changes and better foreground segmentation.
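The two assumptions in the abstract can be illustrated with a minimal sketch (this is not the authors' algorithm from the paper; the greedy sparse coder, the reweighting scheme, and all parameter values below are illustrative assumptions): frames are sparsely coded over a dictionary, high-residual pixels are down-weighted during dictionary updates so foreground contamination in the training set acts as outliers, and at test time pixels with large reconstruction residual are labeled foreground.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, y, k=3):
    """Greedy (OMP-style) sparse coding: pick up to k atoms of D for y."""
    residual, idx = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def robust_dictionary(frames, n_atoms=5, n_iter=10, tau=0.5):
    """Toy robust dictionary learning: alternate sparse coding with
    weighted atom updates, down-weighting high-residual (foreground-like)
    pixels so they are treated as outliers."""
    P, N = frames.shape                       # pixels x training frames
    D = frames[:, rng.choice(N, n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([sparse_code(D, frames[:, i]) for i in range(N)])
        R = np.abs(frames - D @ X)            # per-pixel residuals
        W = 1.0 / (1.0 + (R / tau) ** 2)      # robust reweighting (Cauchy-like)
        for a in range(n_atoms):              # one weighted update per atom
            users = X[a] != 0
            if not users.any():
                continue
            # Error without atom a's contribution, for frames that use it.
            E = (frames[:, users] - D @ X[:, users]
                 + np.outer(D[:, a], X[a, users]))
            num = (W[:, users] * E) @ X[a, users]
            D[:, a] = num / (np.linalg.norm(num) + 1e-12)
    return D

def foreground_mask(D, frame, k=3, thresh=0.5):
    """Label pixels whose reconstruction residual exceeds thresh."""
    x = sparse_code(D, frame, k)
    return np.abs(frame - D @ x) > thresh

# Synthetic demo: 100-pixel "frames" of a fixed background plus noise.
P = 100
bg = rng.random(P)
train = np.column_stack([bg + 0.02 * rng.standard_normal(P)
                         for _ in range(20)])
train[:10, :5] += 1.5          # contaminate a few training frames
D = robust_dictionary(train)
test = bg + 0.02 * rng.standard_normal(P)
test[40:50] += 2.0             # foreground object in the test frame
mask = foreground_mask(D, test)
```

The reweighting step is what distinguishes this sketch from plain dictionary learning: without it, the contaminated training frames would pull atoms toward the foreground patch, and those pixels would later be misclassified as background.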

Publisher note

To access the full article, please see PDF.

Author information


Corresponding author

Correspondence to Cong Zhao.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 Generic License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhao, C., Wang, X. & Cham, WK. Background Subtraction via Robust Dictionary Learning. J Image Video Proc. 2011, 972961 (2011). https://doi.org/10.1155/2011/972961
