- Open Access
Patch-based local histograms and contour estimation for static foreground classification
© Pereira et al.; licensee Springer. 2015
- Received: 19 September 2014
- Accepted: 29 January 2015
- Published: 25 February 2015
This paper presents an approach to classify static foreground blobs in surveillance scenarios. One possible application is the detection of abandoned and removed objects. In order to classify the blobs, we developed two novel features based on the assumption that the neighborhood of a removed object is fairly continuous. In other words, there is a continuity, in the input frame, ranging from inside the corresponding blob contour to its surrounding region. Conversely, it is usual to find a discontinuity, i.e., edges, surrounding an abandoned object. We combined the two features to provide a reliable classification. For the first feature, we use several local histograms as a measure of similarity, instead of the single histogram used in previous attempts. For the second, we developed an innovative method to quantify the ratio of the blob contour that corresponds to actual edges in the input image. A representative set of experiments shows that the proposed approach can outperform other equivalent techniques published recently.
- Abandoned and removed object detection
- Video surveillance
- Video segmentation
Video surveillance techniques for abandoned and removed object detection have received great attention in the last few years. Detecting suspicious objects is a central issue in the protection of public areas, such as airports, shopping malls, parks, and other mass-gathering areas.
In such applications, a sequence of computer vision methods is applied. Some approaches identify foreground blobs by applying background subtraction methods and then use an object tracker to determine whether the blob is static or not.
Other approaches avoid object tracking methods due to their flaws in crowded scenes. Some alternatives have been proposed. Bayona et al. performed a survey on stationary foreground detection and concluded that approaches based on sub-sampling schemes or on the accumulation of foreground masks yield the best results. One year later, Bayona et al. proposed a static foreground detection technique based on a sub-sampling scheme that outperformed the other efforts mentioned in their survey. A succession of improvements has been reported since. Although the stationary foreground detection issue is far from exhausted, the present research work is not concerned with the approach applied to identify stationary foreground. Instead, the focus is on the classification of static foreground blobs as either an abandoned or a removed object.
We use a well-known shared assumption described in the literature: when a background object is removed from the scene, it is reasonable to assume that the area thus vacated will exhibit a higher degree of agreement with its immediate surroundings than before.
Fitzsimons and Lu provided a brief literature review and categorized the main mechanisms used to distinguish abandoned from removed objects into four groups: edge detection, histogram comparison, image inpainting, and region growing.
The edge detection and histogram comparison approaches are of special interest to our research. An explanation of the other two categories can be found in that review.
The intuitive reasoning behind the edge detection approach is that placing an object in front of the background will introduce more edges to the scene around the object’s boundaries, provided that the background is not extremely cluttered.
Flaws in distinguishing abandoned from removed objects by their edges can occur when the hypothesis of a not extremely cluttered background does not hold. Grigorescu et al. showed that when the textures and the scale of objects are similar, a non-contextual edge detector, such as the traditional Canny operator, generates strong responses in the texture regions. Object contours can then be difficult to identify in the output of such an operator.
In our approach, the best results were achieved by combining the Sobel and SUSAN edges; the SUSAN detector is reported to be more invariant to scale changes than the Canny operator.
Henceforth, for simpler notation and unless otherwise specified, the neighborhood of a foreground blob means the corresponding neighboring region in the input frame, not in the foreground mask.
Color histogram comparison [15,22-24] is another intuitive way to discriminate between abandoned and removed objects. Researchers compare the color distributions of the interior and exterior neighborhoods of foreground blobs. It makes sense to assume that if the internal and external neighboring regions are similar in color, then no object is likely to be present. The inverse is also likely to be true.
We found that the accuracy of histogram-based features relies on the choice (shape and size) of the regions to compare. Usually, a bounding box delimits the external region. However, as we show next, the color distribution comparison of whole multi-colored objects often generates wrong results.
Although both edges and histogram categories present drawbacks, we show in our results that they are complementary and an appropriate combination can take the best of both.
All these approaches rely on a hidden assumption that the foreground blob correctly outlines the object’s contour when a real object is present. They then define internal and external regions and extract data to compare one to the other. If the assumption fails, which often occurs, the outcome is a misleading comparison. Using a bounding box that is smaller than the actual object and counting background pixels into the object’s color distribution are two of many possible mistakes.
Thus, the features we propose tolerate some degree of inaccuracy in the foreground blob. We argue that this care is essential to deal with several different video scenarios.
The first step is to pre-process each input image by filtering noise; we then evaluate the following two features in order to provide a reliable classification: F_h - patch-based local histogram similarity and F_c - contour sampling, detailed in Sections 2.2 and 2.3, respectively. The final classification using these features is detailed in Section 2.5.
The artificial edges created by image compression, with the quantization of 8×8 macroblocks, are not among the edges we aim to detect. Image noise, such as noise due to sensor quality, is not of interest to our present work either.
A common low-pass filter blurs edges while removing noise, which is inappropriate for our purpose. Tomasi and Manduchi proposed the bilateral filter, which smooths images while preserving edges by means of a nonlinear combination of nearby image values.
The bilateral filter uses two parameters: the geometric spread σ_d, where a larger σ_d blurs more because it combines values from more distant image locations, and the photometric spread σ_r, where pixels with values closer than σ_r to each other are mixed together and values more distant than σ_r are not.
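A minimal, self-contained sketch of the filter (pure NumPy, grayscale; the radius and the default spreads are illustrative values, not the paper's) shows how the two spreads interact:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=2.0, sigma_r=25.0):
    """Edge-preserving smoothing: each output pixel is a weighted mean of
    its neighbors, with weights falling off with both geometric distance
    (sigma_d) and photometric distance (sigma_r)."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    # Precompute the geometric (domain) kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    geo = np.exp(-(xs**2 + ys**2) / (2 * sigma_d**2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Photometric (range) kernel: similar values mix, distant ones don't.
            pho = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
            wgt = geo * pho
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

Across a strong step edge, the photometric kernel suppresses the contribution of the far side, so the edge survives while flat regions are averaged.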
The features presented next benefit from the smoothed input data, mainly the histogram-based feature, which computes the difference between the color distributions of two regions.
2.2 F_h - patch-based local histogram similarity
In Figure 2c, we note considerable similarity between the internal and external regions of the blob. In several instances of such removed objects, the color of the external neighborhood is similar to the color of the neighborhood inside the corresponding blob. We measured this similarity by comparing the histograms of the internal and external neighboring regions.
We used the multi-color observation model by Pérez et al., based on hue-saturation-value (HSV) color histograms. This color histogram is more accurate than a grayscale one. Our technique uses the Kolmogorov-Smirnov test as a metric to evaluate the similarity between the two histograms. Blobs corresponding to objects that differ from their neighboring region are unlikely to be classified as removed ones.
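The two building blocks can be sketched as follows. The 110-bin layout (a 10×10 joint hue-saturation grid plus 10 value bins) is one common reading of the Pérez et al. model, and the function names are ours:

```python
import numpy as np

def hsv_histogram(hsv_pixels, nb=10):
    """110-bin model (one reading of Perez et al.): a nb x nb joint
    hue-saturation grid plus nb value bins, flattened and concatenated.
    hsv_pixels is an (N, 3) array with H, S, V scaled to [0, 1]."""
    h, s, v = hsv_pixels[:, 0], hsv_pixels[:, 1], hsv_pixels[:, 2]
    hs = np.histogram2d(h, s, bins=nb, range=[[0, 1], [0, 1]])[0].ravel()
    vb = np.histogram(v, bins=nb, range=(0, 1))[0]
    return np.concatenate([hs, vb])

def ks_statistic(hist_a, hist_b):
    """Kolmogorov-Smirnov statistic between two histograms: the maximum
    absolute difference between their normalized cumulative distributions."""
    cdf_a = np.cumsum(hist_a) / max(hist_a.sum(), 1)
    cdf_b = np.cumsum(hist_b) / max(hist_b.sum(), 1)
    return float(np.abs(cdf_a - cdf_b).max())
```

The KS statistic lies in [0,1]: 0 for identical distributions, 1 for fully disjoint ones, which makes it a convenient difference metric between the internal and external regions.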
Up to this point, our proposed technique and previous ones are fairly equivalent. However, previous approaches did not tackle situations where the region behind a blob is not as homogeneous as in the example of Figure 2. For such situations, we propose a novel approach, inspired by previous work, to split the image into patches and to analyze whether each patch is homogeneous. This is discussed in Section 2.2.1.
2.2.1 Improving the similarity assessment
Color histograms can distinguish one object from another when their color distributions are distinct. However, color histograms do not differentiate objects with similar distributions but different color locations. For example, consider two 2×2 chessboards rotated 90° from each other. A simple color histogram comparison would conclude that they are the same. An appropriate approach is to divide the object into regions (patches) and consider their histograms in order to obtain a more precise observation model of the object.
Briefly, the overall color distributions of two images might be similar, while the comparison of color distributions taken from smaller-scale pieces might tell us that the images are different. Smaller-scale pieces provide more accurate data; the issue, therefore, is how to determine the scale and shape of the pieces. In the following, we explain our method to obtain local color distributions.
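The chessboard example can be checked directly: the global histograms agree, while the per-patch histograms (here, one pixel per patch) do not.

```python
import numpy as np

# Two 2x2 "chessboards", one being the 90-degree rotation of the other.
board_a = np.array([[0, 1],
                    [1, 0]])
board_b = np.rot90(board_a)  # [[1, 0], [0, 1]]

# Their global color histograms are identical...
global_equal = np.array_equal(np.bincount(board_a.ravel(), minlength=2),
                              np.bincount(board_b.ravel(), minlength=2))

# ...but the per-patch histograms (a 1x1 patch reduces to its pixel value)
# reveal that the boards differ.
patch_equal = all(board_a[i, j] == board_b[i, j]
                  for i in range(2) for j in range(2))

print(global_equal, patch_equal)  # True False
```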
Pérez et al. proposed a color histogram with 110 bins, and we use the same number of bins. The number of pixels in a patch must be representative in order to obtain plausible-quality histograms; a minimum target patch area of 300 pixels proved suitable.
We use a bounding box extended by 25% in area (50% in each dimension) compared to the tight bounding box of the blob. This is necessary to obtain enough pixels from the external blob neighborhood. From this point on, bounding box means the extended one.
We observed that the relative position of the patches with respect to the whole blob can affect the similarity measure. Therefore, we pool into a single set the patches from grids of sizes N−1, N, and N+1. Some of the patches are disregarded, as explained below.
The purpose here is to evaluate the color similarity in the neighborhood of the blob contour, so only patches that cover the blob contour are used. Each patch comprises two regions, internal and external. We disregard patches in which either of these regions has an area smaller than 15% of the patch area, since very few pixels cannot form a representative color distribution.
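A sketch of the patch selection under these rules; the function names, the linspace-based grid construction, and the pooling helper are our illustrative assumptions:

```python
import numpy as np

def select_patches(fg_mask, grid_n, min_frac=0.15):
    """Split the (extended) bounding box into a grid_n x grid_n grid and
    keep only patches that straddle the blob contour, i.e., whose internal
    (foreground) and external (background) parts each cover at least
    min_frac of the patch area."""
    h, w = fg_mask.shape
    ys = np.linspace(0, h, grid_n + 1, dtype=int)
    xs = np.linspace(0, w, grid_n + 1, dtype=int)
    kept = []
    for i in range(grid_n):
        for j in range(grid_n):
            patch = fg_mask[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            if patch.size == 0:
                continue
            inside = patch.mean()  # fraction of foreground pixels
            if min_frac <= inside <= 1 - min_frac:
                kept.append((ys[i], ys[i + 1], xs[j], xs[j + 1]))
    return kept

def pooled_patches(fg_mask, n):
    """Pool the surviving patches from grids of sizes N-1, N and N+1."""
    return [p for g in (n - 1, n, n + 1) for p in select_patches(fg_mask, g)]
```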
Next, for each patch, we compare the internal and external patch regions using the Kolmogorov-Smirnov test as a difference metric. To extract the overall similarity, we could take the mean of all the differences. However, this is more appropriately modeled as a voting problem. Each patch gives valuable information about its own area; no matter how close its similarity is to 100%, it must not contribute to the similarity of other patches, as it would if a simple average were computed. Figure 3 presents such an example: among 22 squared patches, five cover a wrongly segmented area (a homogeneous area covering the road), and the voting scheme is still able to correctly classify the blob. The following equations show the related calculations.
The feature value is the ratio of patches that have similar internal and external regions.
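The voting scheme thus reduces to counting the patches whose similarity reaches the threshold, rather than averaging similarities. A sketch, where the per-patch similarity measure (e.g., one minus the Kolmogorov-Smirnov distance) and the function name are assumptions:

```python
def f_h(similarities, tau_h=0.99):
    """F_h as a vote: the ratio of patches voting 'similar', where a patch
    votes similar when its internal/external similarity reaches tau_h.
    Each patch speaks only for its own area, so votes are counted instead
    of similarities being averaged."""
    if not similarities:
        return 0.0
    return sum(s >= tau_h for s in similarities) / len(similarities)
```

With a simple average, a few near-perfect patches could mask several dissimilar ones; with voting, each dissimilar patch costs exactly one vote.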
2.3 F_c - contour sampling
We developed a method to determine whether a blob region is surrounded by edges or not. The method detects the edges in the neighborhood of the blob border and evaluates the portion of the contour that is surrounded by edges. We consider that a closed/almost closed contour corresponds to an abandoned object.
We developed a monotonic function that quantifies the ratio of the object contour found by the edge detector in the neighborhood of the blob boundary. We call this function contour sampling.
The geometric operation of intersecting a straight line with the contour at several (possibly equally spaced) points can fulfill the monotonic requirement. The underlying procedure can be performed by tracing concentric straight lines from a point inside the contour, each line rotated from the previous one by a fixed angle.
Figure 5f shows the picked edges, the source point in green, the straight lines, and blue points representing the intersections. In this example, 60% of the lines intersected the edges.
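A sketch of the ray-tracing step for a single source point (pure Python; the ray length cap max_dist and the function name are assumptions of this sketch):

```python
import math

def ray_hits(edge_mask, src, n_lines=30, max_dist=100):
    """From a source point inside the blob, march n_lines equally spaced
    rays and record whether each one meets an edge pixel; only the first
    intersection along each ray is taken. Returns the hit ratio."""
    h, w = len(edge_mask), len(edge_mask[0])
    sy, sx = src
    hits = 0
    for k in range(n_lines):
        ang = 2 * math.pi * k / n_lines
        dy, dx = math.sin(ang), math.cos(ang)
        for r in range(1, max_dist):
            y = int(round(sy + r * dy))
            x = int(round(sx + r * dx))
            if not (0 <= y < h and 0 <= x < w):
                break  # left the bounding box without meeting an edge
            if edge_mask[y][x]:
                hits += 1
                break  # first intersection only
    return hits / n_lines
```

A fully enclosed source point (e.g., inside a closed contour) yields a ratio of 1.0; a blob with no surrounding edges yields 0.0.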
In the case of blobs with a complex shape, for example, a U-shaped blob, a single source point is not enough to sample the whole contour because, for simplicity, we take only the first intersection point along each line.
We therefore use several source points spread throughout the blob region. For this, we take N_S points from the Sobol sequence, a solution to the problem of filling an area uniformly with quasi-random points.
N_S is calculated with Expression 1, setting A_g to 25 pixels. Thus, a quasi-random point is likely to fall in each 5×5 piece of the bounding box.
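The source-point generation can be sketched as follows. A Halton sequence, another quasi-random area-filling family, stands in for the Sobol sequence here to keep the sketch dependency-free, and Expression 1 is read as the ceiling of the bounding box area over A_g (an assumption):

```python
def halton(i, base):
    """i-th element of the Halton low-discrepancy sequence in base `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def source_points(bbox_w, bbox_h, a_g=25):
    """Quasi-random source points: N_S = ceil(area / A_g), so roughly one
    point per 5x5 piece of the bounding box when A_g = 25."""
    n_s = -(-bbox_w * bbox_h // a_g)  # ceiling division
    return [(halton(i, 2) * bbox_w, halton(i, 3) * bbox_h)
            for i in range(1, n_s + 1)]
```

In production, a proper Sobol generator (e.g., following Bratley and Fox's Algorithm 659, cited below) would replace the Halton stand-in.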
As the number of traced lines increases, the value of F_c approaches the actual percentage of missing contour out of the 360°. We use L=30 lines for each source point, which yields a precise measurement.
We assume that all blobs have complex shapes and always use multiple source points. This feature can identify removed objects because it is extremely unusual to find edges around the whole contour of a blob that corresponds to a removed object.
2.4 Finding the edges
We extracted the edges of each RGB channel with the SUSAN detector and the edges of the luminance (grayscale) channel with the Sobel operator and combined their results into one edge mask.
We chose the SUSAN detector because it is more invariant to scale changes than other non-contextual edge detectors.
Using only the luminance Y (ITU-R BT.601), as in the original experiments with SUSAN, is not appropriate because many edges do not appear in the luminance channel but only in the chrominance channels. For example, two neighboring pixels with the same luminance but opposite extreme chrominance values show no edge in the luminance channel.
The SUSAN detector relies on a threshold t that determines the minimum contrast of the edges that will be picked up. We use a fixed threshold t=15, which sometimes causes some edge pixels to be missed.
In Equation 5, mean stands for the mean of the Sobel gradient and std_dev for the corresponding standard deviation. The lower bound of 10 is needed to avoid picking up near-dark Sobel pixels from the gradient of homogeneous images.
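The exact form of Equation 5 is not reproduced above, so the following sketch reads the dynamic threshold as the mean plus one standard deviation of the Sobel magnitude, clamped below at 10; treat that formula, like the function names, as an assumption:

```python
import numpy as np

def sobel_magnitude(gray):
    """Sobel gradient magnitude via explicit 3x3 correlations."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode='edge')
    gx = np.zeros(gray.shape, dtype=float)
    gy = np.zeros(gray.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            win = pad[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)

def sobel_edges(gray, floor=10.0):
    """Dynamic Sobel threshold in the spirit of Equation 5: mean plus one
    standard deviation of the gradient magnitude, clamped below at `floor`
    so that the near-zero gradients of homogeneous images are not picked
    up. (The exact form of Equation 5 may differ.)"""
    mag = sobel_magnitude(gray)
    thr = max(floor, mag.mean() + mag.std())
    return mag > thr
```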
2.5 Combining the two features
The target set (codomain) of both features is [0,1]. First, we evaluate each feature on the input frame (F_h(In) and F_c(In)) and on the background model (F_h(Bg) and F_c(Bg)). A high value of an input frame feature indicates that the blob is likely to correspond to a removed object, while a low value indicates an abandoned one. The inverse holds for the background features.
A negative value of a subtraction (Sub_h or Sub_c) indicates that an object is likely to exist in the background model but not in the input frame, i.e., the background model does not correspond to reality and the referred blob is a removed object. A positive value indicates that the object is likely to be present in the input frame but not in the background model, i.e., an abandoned object.
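A sketch of this sign convention; the subtraction order Sub = F(Bg) − F(In) is inferred from the text (a high input-frame value must drive the sum negative, i.e., toward "removed"), and the function name is ours:

```python
def classify_blob(fh_in, fc_in, fh_bg, fc_bg):
    """Combine the two features, reading Sub as F(Bg) - F(In): a high
    input-frame feature (removed object) drives the sum negative, while a
    high background feature (abandoned object) drives it positive. Per
    Equation 9, sum > 0 means abandoned and sum < 0 means removed."""
    sub_h = fh_bg - fh_in
    sub_c = fc_bg - fc_in
    return 'abandoned' if (sub_h + sub_c) > 0 else 'removed'
```

Because each Sub term lies in [−1,1], summing them lets the more confident feature dominate whenever the two disagree.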
One advantage of the proposed technique is that it is quite autonomous. It relies on two parameters, τ_h and t, one for each feature. The threshold τ_h is set to 0.99; in our experiments, lower values of τ_h produced undesirable false positives. Smith and Brady suggest a value between 10 and 20 for the SUSAN threshold t. We set it to 15.
The classifier uses three inputs: the input frames, a foreground mask, and the corresponding background model frame.
ASOD dataset description
We achieved 100% accuracy classifying the blobs from the annotated subset (second and third columns of Table 1) as either abandoned or removed. Fitzsimons and Lu also achieved 100% accuracy on the same subset. There are reasons for this flawless result. The dataset provides a canonical background frame and an annotated foreground mask. The background is a frame taken from the sequence where the only change is the presence or absence of the object under analysis, and the manually annotated foreground blobs tightly fit the borders of the objects. This is the best scenario in which to evaluate the features. Although simple, this experiment is useful for early validation.
Figure 7c presents the accumulated values of Sub_h and Sub_c and their corresponding best-fitted lines (in the least-squares sense). An ideal feature would approach the line y=x, since in the abandoned scenarios the subtractions Sub_h and Sub_c should always be 1. The slopes of these lines are 0.61 and 0.72 for Sub_h and Sub_c, respectively. The slope is a suitable way to compare the features, since it represents the trend of the feature plot. The conclusion here is that the feature F_c is more accurate than the feature F_h.
Finally, Figure 7d shows that the feature F_c can correctly classify the whole annotated subset. By Equation 9, any sum value (Sub_h+Sub_c) above zero is classified as abandoned and any value below zero as removed. The margin is approximately the range [-0.2,0.2].
Results on the real subset
Our result is 3.7% more accurate than previously reported results. We argue that this improvement is mainly due to: 1) the diversity of patch shapes, which makes the histogram feature take into consideration (most of the time) suitable regions; 2) the contour feature searching for edges in the internal neighborhood of a blob and in the external neighborhood of the blob's convex hull; 3) combining the SUSAN and Sobel edges in the contour feature; and 4) replacing fixed feature thresholds with dynamic ones.
Figure 8c presents the accumulated values of Sub_h and Sub_c and their corresponding best-fitted lines (in the least-squares sense). Here, we see that this classification problem is harder than the annotated one because the slopes of the fitted lines are lower: 0.53 and 0.62 for Sub_h and Sub_c, respectively. The feature F_c is again more accurate than the feature F_h.
Finally, Figure 8d shows that not even the combination of the features could correctly classify the whole real subset. The misclassifications, however, amounted to just 0.9% of the total, and the corresponding blobs barely resemble the annotated ones.
In the next experiment, we used the PETS2006 videos of camera 3 from scenarios 1 to 7. A single event in each of these videos has been used for accuracy evaluation in previous research [41-44]. All seven events are abandoned bags.
In this experiment, we used the foreground mask produced by the SuBSENSE segmenter. SuBSENSE does not maintain a single background model frame; instead, it manages a set of samples for each pixel. Therefore, for each pixel, we extracted a background frame by choosing the sample that best fits the corresponding pixel of the input frame and used it as a running-average background model. This procedure was repeated for each input frame.
Di Caterina reported the detection of the blobs that appeared after the removal of the purple bins. We also classified these blobs as removed objects.
We performed the experiments on a PC with an Intel(R) Core(TM) i5-3210M CPU @ 2.50 GHz. The performance on the PETS2006 dataset, with frames measuring 720×576, was 11 frames per second. The bilateral filter accounted for 75% of the per-frame processing time.
The main goal of the present research work is to develop a technique to classify static foreground blobs as abandoned or removed objects. The proposed technique, named the removed and abandoned blob classifier (RABC), is based on the widely used assumption that a removed region is similar to its neighborhood, while abandoned object regions usually have a discontinuity, i.e., edges, defining their borders.
The RABC technique combines two features, derived from the aforementioned properties: 1) patch-based local histogram similarity and 2) contour sampling.
Both features were designed considering that some degree of inaccuracy is present in the input data. We argue that this care is essential for the classifier to deal with several different video scenarios. For example, the combination of edge operators, the dynamic thresholds, the varied patch sizes, and the extended bounding boxes were all designed with this in mind.
The feature values are ratios in the range [0,1] and can thus be understood as confidence values. The final classification compares the feature values extracted from the background with those extracted from the input frame. If the feature outcomes agree (whether abandoned or removed), the final result is the agreed outcome; otherwise, the more confident outcome is chosen. This procedure avoids the infeasible task of defining suitable thresholds while achieving high accuracy.
The results showed that our proposed technique outperformed recent state-of-the-art techniques with the same purpose.
There is potential research that could build on our work and our findings. One possible future work would be to replace the squared patches with superpixels in the patch-based local histogram feature, since superpixels describe image regions more precisely. Such a change would require a metric like the earth mover's distance (EMD) to compare histograms, as the EMD is capable of comparing two distinct sets of image pieces.
The authors would like to thank Mr. Quanfu Fan, from the Thomas J. Watson Research Center, for sharing his ground truth data and the results of his research. The authors would also like to thank Mr. Jack Fitzsimons, from the University of Oxford, for confirming the actual number of annotated blobs in the ASOD dataset. This work has been supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
- F Porikli, Y Ivanov, T Haga, Robust abandoned object detection using dual foregrounds. EURASIP J. Adv. Signal Process. 2008, 30 (2008).
- Á Bayona, JC SanMiguel, JM Martínez, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). Comparative evaluation of stationary foreground object detection algorithms based on background subtraction techniques (Genoa, 2009), pp. 25–30.
- D Ortego, JC SanMiguel, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). Stationary foreground detection for video-surveillance based on foreground and motion history images (Krakòw, 2013), pp. 75–80.
- D Ortego, JC SanMiguel, in Proceedings of IEEE International Conference on Image Processing (ICIP). Multi-feature stationary foreground detection for crowded video-surveillance (Paris, 2014), pp. 2403–2407.
- A Singh, A Agrawal, in Proceedings of IEEE India Conference (INDICON). An interactive framework for abandoned and removed object detection in video (Bombay, 2013), pp. 1–6. doi:10.1109/INDCON.2013.6725905.
- JK Fitzsimons, TT Lu, in Proceedings of SPIE Conference on Applications of Digital Image Processing XXXVII, vol. 9217. Markov random fields for static foreground classification in surveillance systems (San Diego, 2014), pp. 92171O–92171O-10.
- J Kim, A Ramirez Rivera, B Ryu, K Ahn, O Chae, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). Unattended object detection based on edge-segment distributions (Seoul, 2014), pp. 283–288. doi:10.1109/AVSS.2014.6918682.
- T Kryjak, M Komorkiewicz, M Gorgon, Real-time foreground object detection combining the PBAS background modeling algorithm and feedback from scene analysis module. Int. J. Electron. Telecommun. 60(1), 53–64 (2014).
- HH Liao, JY Chang, LG Chen, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). A localized approach to abandoned luggage detection with foreground-mask sampling (Santa Fe, 2008), pp. 132–139.
- W Hassan, P Birch, R Young, C Chatwin, in Proceedings of SPIE Video Surveillance and Transportation Imaging Applications, vol. 8663. An improved background segmentation method for ghost removals (2013), pp. 86630W–86630W-6.
- I Huerta, A Amato, X Roca, J Gonzàlez, Exploiting multiple cues in motion segmentation based on background subtraction. Neurocomputing. 100, 183–196 (2013).
- M Magno, F Tombari, D Brunelli, L Di Stefano, L Benini, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). Multimodal abandoned/removed object detection for low power video surveillance systems (Genoa, 2009), pp. 188–193.
- YJ Chai, SW Khor, YH Tay, in Proceedings of SPIE International Conference on Digital Image Processing (ICDIP), vol. 8878. Object occlusion and object removal detection (Beijing, 2013), pp. 88782F–88782F-5.
- J SanMiguel, L Caro, J Martinez, Pixel-based colour contrast for abandoned and stolen object discrimination in video surveillance. Electron. Lett. 48(2), 86–87 (2012).
- L Caro Campos, JC SanMiguel, JM Martínez, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). Discrimination of abandoned and stolen object based on active contours (Klagenfurt, 2011), pp. 101–106.
- RH Evangelio, T Sikora, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). Complementary background models for the detection of static and moving objects in crowded environments (Klagenfurt, 2011), pp. 71–76. doi:10.1109/AVSS.2011.6027297.
- P Spagnolo, A Caroppo, M Leo, T Martiriggiano, T D’Orazio, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). An abandoned/removed objects detection algorithm and its evaluation on PETS datasets (Sydney, 2006), pp. 17–17.
- YL Tian, M Lu, A Hampapur, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1. Robust and efficient foreground analysis for real-time video surveillance (San Diego, 2005), pp. 1182–1187.
- J Connell, AW Senior, A Hampapur, YL Tian, L Brown, S Pankanti, in Proceedings of IEEE International Conference on Multimedia and Expo (ICME). Detection and tracking in the IBM PeopleVision system (Taipei, 2004), pp. 1403–1406.
- C Grigorescu, N Petkov, MA Westenberg, Contour and boundary detection improved by surround suppression of texture edges. Image Vision Comput. 22(8), 609–622 (2004).
- SM Smith, JM Brady, SUSAN - a new approach to low level image processing. Int. J. Comput. Vision. 23(1), 45–78 (1997).
- LF Tu, SD Zhong, Q Peng, Moving object detection method based on complementary multi resolution background models. J. Cent. S. Univ. 21, 2306–2314 (2014).
- B Hu, Y Li, Z Chen, G Xiong, F Zhu, in Proceedings of IEEE International Conference on Intelligent Transportation Systems (ITSC). Research on abandoned and removed objects detection based on embedded system (Qingdao, 2014), pp. 2968–2971.
- S Ferrando, G Gera, C Regazzoni, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). Classification of unattended and stolen objects in video-surveillance system (Sydney, 2006), pp. 21–21.
- C Tomasi, R Manduchi, in Proceedings of IEEE International Conference on Computer Vision (ICCV). Bilateral filtering for gray and color images (Bombay, 1998), pp. 839–846.
- N Goyette, P Jodoin, F Porikli, J Konrad, P Ishwar, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Changedetection.net: a new change detection benchmark dataset (Providence, 2012), pp. 1–8.
- IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), London (2007). http://www.eecs.qmul.ac.uk/~andrea/avss2007_d.html. Accessed September 2014.
- P Pérez, C Hue, J Vermaak, M Gangnet, in Proceedings of European Conference on Computer Vision (ECCV). Color-based probabilistic tracking (Springer, Copenhagen, 2002), pp. 661–675.
- A Adam, E Rivlin, I Shimshoni, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Robust fragments-based tracking using the integral histogram (New York, 2006), pp. 798–805.
- R Wang, F Bunyak, G Seetharaman, K Palaniappan, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Static and moving object detection using flux tensor with split Gaussian models (Columbus, 2014), pp. 414–418.
- J Gonzàlez, FX Roca, JJ Villanueva, in Proceedings of Conference on Computational Vision and Medical Image Processing (VipIMAGE). Hermes: a research project on human sequence evaluation (Porto, 2007).
- M De Berg, M Van Kreveld, M Overmars, O Cheong, Computational Geometry: Algorithms and Applications, 3rd ed. (Springer, Santa Clara, CA, USA, 2008).
- P Bratley, BL Fox, Algorithm 659: implementing Sobol's quasirandom sequence generator. ACM Trans. Math. Software (TOMS). 14(1), 88–100 (1988).
- JC San Miguel, JM Martínez, in Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS). Robust unattended and stolen object detection by fusing simple algorithms (Santa Fe, 2008), pp. 18–25.
- PETS 2006 Benchmark Data. http://www.cvg.reading.ac.uk/PETS2006/data.html. Accessed January 2015.
- PETS 2007 Benchmark Data. http://www.cvg.reading.ac.uk/PETS2007/data.html. Accessed January 2015.
- Chroma-based Video Segmentation Ground-truth (CVSG). http://ww-vpu.eps.uam.es/CVSG. Accessed January 2015.
- R Vezzani, R Cucchiara, Video surveillance online repository (ViSOR): an integrated framework. Multimedia Tools Appl. 50(2), 359–380 (2010).
- Candela - Surveillance. http://www.multitel.be/cantata. Accessed January 2015.
- WCAM - Surveillance. http://www.vpu.eps.uam.es/DS/ASODds/index.html. Accessed January 2015.
- A Lopez-Mendez, F Monay, JM Odobez, in Proceedings of International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP). Exploiting scene cues for dropped object detection (Lisbon, 2014).
- G Szwoch, Extraction of stable foreground image regions for unattended luggage detection. Multimedia Tools Appl., 1–26 (2014). doi:10.1007/s11042-014-2324-4.
- YL Tian, RS Feris, H Liu, A Hampapur, MT Sun, Robust detection of abandoned and removed objects in complex surveillance videos. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 41(5), 565–576 (2011). doi:10.1109/TSMCC.2010.2065803.
- Q Fan, P Gabbur, S Pankanti, in Proceedings of IEEE International Conference on Computer Vision (ICCV). Relative attributes for large-scale abandoned object detection (2013), pp. 2736–2743. doi:10.1109/ICCV.2013.340.
- PL St-Charles, GA Bilodeau, R Bergevin, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Flexible background subtraction with self-balanced local sensitivity (Columbus, 2014).
- G Di Caterina, Video analytics algorithms and distributed solutions for smart video surveillance. PhD thesis, University of Strathclyde (2013).
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.