
Shadow removal with background difference method based on shadow position and edges attributes

Abstract

This article presents a shadow removal algorithm that uses a background difference method based on shadow position and edge attributes. First, a novel background subtraction method is proposed to obtain moving objects. It consists of three parts: detecting the moving regions approximately by computing inter-frame differences of symmetric frames and counting a static index for each probable moving point; modeling the background from brightness statistics and updating this model with motion templates; and extracting the moving objects and their edges. Second, based on this processing, shadows are first suppressed in the HSV color space; the shadow direction is then determined from the shadow edges and positions combined with the horizontal and vertical projections of the edge image, the shadow position is located accurately by a proportion method, and the shadow is finally removed. Experimental results indicate that the proposed method is easy to implement, determines the shadow direction adaptively, and eliminates the shadow while extracting the whole moving object accurately, especially when the chrominance invariance principle fails.

1. Introduction

Video object segmentation is of fundamental importance in many advanced video applications, such as tracking and interpreting human behavior, surveillance, motor traffic analysis, and environmental monitoring [1–4]. Many segmentation approaches based on background subtraction have been proposed over the past decades [5–9], including parametric and non-parametric background density estimates and spatial correlation approaches [10]. However, the extracted video objects are usually unsatisfactory when shadows exist in the frames of a video sequence. Moving shadow detection is therefore critical for accurate object detection in video streams, since shadow points are often misclassified as object points, causing errors in segmentation and tracking [11].

Shadows have two important visual properties. First, they differ from the background so much that they may wrongly be extracted as foreground. Second, shadows and moving objects share the same motion features. Because of these two properties, shadows often distort the geometrical shape of moving objects and can even cause moving objects to be lost or merged. Detecting and removing shadows from object regions is therefore of great practical significance, and it remains a considerable challenge that has attracted much attention recently.

A rather novel method [12] detects shadows and moving objects efficiently based on sound physical models. In [13], four assumptions (e.g., that the light source causing a cast shadow has a certain extent) are used simultaneously to detect image region changes caused by moving cast shadows. Because chromaticity is unaffected by illumination changes in some cases, a shadow region can be detected as a region that is darker than its neighboring regions but has similar chromaticity; based on this illumination-invariant property of chromaticity, several efficient methods [14, 15] have been developed to detect shadows in color images. Tian et al. [16] use image information theory to derive a tricolor attenuation model, estimate its parameters from blackbody irradiance, and then detect shadows with the resulting model. Tsai [17] presents an automatic property-based approach for detecting and compensating shadow regions, preserving shape information in complex urban color aerial images, to solve problems caused by cast shadows in digital image mapping. Lu et al. [18] first sample shadow pixels based on an estimated shadow direction and then compute three shadow attributes from the sampled pixels, but this costs significant processing time. In [19], moving shadows are detected using their motion characteristics and the underlying physics after color segmentation; this method has poor accuracy because it cannot distinguish shadows from black objects. Cucchiara et al. [20] propose an approach that exploits color information for both background subtraction and shadow detection to improve object segmentation and background update.

Overall, existing shadow removal methods fall into two categories: model-based and feature-based. Model-based methods generally rely on prior knowledge, such as the scene, the object, or the illumination conditions, to match the edges, lines, and angles of moving objects and thereby detect shadows. Unfortunately, such prior knowledge is usually very difficult to obtain, and these methods also have long processing times, so they are rarely used in practice. Feature-based methods instead use the shadow's brightness, color, saturation, texture, and geometric features directly to detect shadows. These methods are commonly used, but they also have drawbacks. For example, with color-feature methods, when the object and shadow have similar colors, the invariant color features are no longer applicable and shadows cannot be differentiated.

To avoid the drawbacks of feature-based methods, we propose an approach for shadow detection and elimination based on shadow position and edge attributes, applied after HSV shadow removal. This approach, shown in Figure 1, is simple and easy to implement. It comprises two main parts, moving object extraction and shadow detection and removal, presented in Sections 2 and 3. Section 2 lays the foundation for the subsequent processing, while Section 3 is the core of the whole approach and is novel to some extent. The proposed method can locate the shadow direction adaptively and suppress the shadow exactly even when the invariant color features are not applicable, a circumstance under which most color-based removal methods fail.

Figure 1

Flowchart of the proposed shadow removal method.

2. Moving object extraction by background subtraction method

2.1. Generation of change detection mask and motion template

The process for generating the CDM (change detection mask) and the motion template is shown in Figure 2. Let f_k(x,y) be the k-th frame of the gray-valued video sequence, and let δ be the symmetric frame distance (generally one to two frames). The inter-frame difference for the current k-th frame is then defined as follows:

$$
d_k(x,y) = \begin{cases} 255 & \text{if } \left|f_{k+\delta}(x,y) - f_k(x,y)\right| > T_1 \ \text{or}\ \left|f_{k-\delta}(x,y) - f_k(x,y)\right| > T_2 \\ 0 & \text{otherwise} \end{cases} \tag{1}
$$
Figure 2

The flowchart of the proposed background difference method.

T_1 and T_2 denote frame-difference thresholds, which should be chosen according to the moving speed, range, and noise distribution of the video object motion; in the proposed algorithm they are determined experimentally and are set to about 10 for most of the test sequences. abs() denotes the absolute-value operation. Although the accumulated frame differences reflect the boundaries and regions of moving objects well, they also amplify noise, so the noise must be filtered.
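As an illustration, here is a minimal NumPy sketch of the symmetric inter-frame difference of Equation (1); the function name, the uint8 frame representation, and the default parameter values are our assumptions rather than part of the original implementation.

```python
import numpy as np

def frame_difference(frames, k, delta=1, t1=10, t2=10):
    """Symmetric inter-frame difference, Equation (1) (illustrative sketch).

    frames: sequence of grayscale frames (H x W uint8 arrays);
    delta:  symmetric frame distance (one to two frames in the text);
    returns a binary mask d_k (255 = probable motion, 0 = static).
    """
    fk = frames[k].astype(np.int16)  # widen to avoid uint8 wrap-around
    fwd = np.abs(frames[k + delta].astype(np.int16) - fk) > t1
    bwd = np.abs(frames[k - delta].astype(np.int16) - fk) > t2
    return np.where(fwd | bwd, 255, 0).astype(np.uint8)
```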

After noise filtering (using morphological processing), we propose a method that obtains the motion template from a static index. The main idea is as follows. First, select some frames of the denoised video sequence (possibly the whole sequence) according to the real-time requirement; let the total number of frames be n_NumFrame and the size of each frame be width × height. Since a background point remains still, i.e., d_k(x,y) = 0 throughout most of the sequence, such points can be regarded as static. Accordingly, process all frames and count the number of times the pixel (X,Y) is static over the n_NumFrame frames; this count is the static index n_Static(X,Y). Finally, if n_Static(X,Y) exceeds 0.93 × n_NumFrame (0.93 was determined experimentally), the pixel (X,Y) is a background point; otherwise it is a foreground point. The motion template PMask(X,Y) of the video sequence is generated as follows:

$$
\text{if } d_k(x,y) = 0:\ n\_\mathrm{Static}(X,Y) = n\_\mathrm{Static}(X,Y) + 1, \qquad x,X \in [0,\mathrm{Width}-1],\ y,Y \in [0,\mathrm{Height}-1],\ k \in [1,n\_\mathrm{NumFrame}] \tag{2}
$$

$$
\mathrm{PMask}(X,Y) = \begin{cases} 255 & \text{if } n\_\mathrm{Static}(X,Y) < 0.93 \times n\_\mathrm{NumFrame} \\ 0 & \text{otherwise} \end{cases} \tag{3}
$$
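Under the same assumptions as the previous sketch, the static index and motion template of Equations (2) and (3) can be written as follows; the 0.93 ratio is the experimentally determined value quoted above.

```python
import numpy as np

def motion_template(diff_masks, ratio=0.93):
    """Static index and motion template, Equations (2)-(3) (sketch).

    diff_masks: stack of binary difference masks d_k (N x H x W).
    A pixel that stays static (d_k == 0) in at least ratio * N frames
    is a background point (PMask = 0); the rest form the motion
    template (PMask = 255).
    """
    masks = np.asarray(diff_masks)
    n_static = np.sum(masks == 0, axis=0)  # static index n_Static(X,Y)
    is_background = n_static >= ratio * len(masks)
    return np.where(is_background, 0, 255).astype(np.uint8)
```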

2.2. Background modeling and updating

The simplest case for background modeling occurs when there is no moving object in the scene. However, this requirement is hard to satisfy in practical applications, so the background model must be created and updated adaptively even when a moving object is present. We use a statistical modeling approach to build the background model. The specific strategy is as follows: first, let G(X,Y) be the set of all pixel values taken by the pixel (X,Y) over the n_NumFrame frames, and count how often each value appears at that position; second, define the value that appears most frequently at (X,Y) as the background pixel value Bkground(X,Y).

$$
G(X,Y) = \left\{ G_1, G_2, \ldots, G_k \mid 0 < k \le n\_\mathrm{NumFrame} \right\} \tag{4}
$$

$$
\mathrm{Bkground}(X,Y) = G_i \ \big|\ \mathrm{Count}(G_i) \ge \mathrm{Count}(G_j), \quad 0 < i \le k,\ j = 1, 2, \ldots, i-1, i+1, \ldots, k \tag{5}
$$

Count() in (5) denotes the operation of counting the number of occurrences of a pixel value.

Combining the motion template PMask(X,Y) obtained in Section 2.1, the background pixels of the non-motion region are replaced by the corresponding pixels of the current frame, while the background pixels of the motion region keep the statistical value Bkground(X,Y). The background model at (x,y) of the n-th frame is then updated as follows:

$$
\mathrm{Bkground}_n(x,y) = \begin{cases} f_n(x,y) & \text{if } \mathrm{PMask}(X,Y) = 0 \\ \mathrm{Bkground}(X,Y) & \text{if } \mathrm{PMask}(X,Y) = 255 \end{cases} \tag{6}
$$
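The statistical background model of Equations (4) and (5) is the per-pixel mode of the gray values, and Equation (6) refreshes it under the motion template. A minimal sketch, assuming uint8 grayscale frames (the per-pixel histogram is written for clarity, not speed):

```python
import numpy as np

def background_model(frames):
    """Equations (4)-(5) (sketch): per-pixel mode, i.e. the gray value
    that appears most frequently at each position over the frames."""
    stack = np.asarray(frames)  # N x H x W, uint8
    hist = np.apply_along_axis(np.bincount, 0, stack, minlength=256)
    return np.argmax(hist, axis=0).astype(np.uint8)  # Bkground(X,Y)

def update_background(bkground, frame, pmask):
    """Equation (6) (sketch): refresh non-motion pixels (PMask == 0)
    with the current frame; keep the statistical value where the
    motion template marks motion (PMask == 255)."""
    return np.where(pmask == 0, frame, bkground)
```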

2.3. Motion object detection and edge extraction

Based on the above steps, the moving objects can be extracted by background subtraction.

$$
\mathrm{pVOPalpha}_n(x,y) = \left| f_n(x,y) - \mathrm{Bkground}_n(x,y) \right| \tag{7}
$$

Then the alpha template pVOPalpha_n(x,y) is obtained by thresholding: if pVOPalpha_n(x,y) is 255, the pixel (X,Y) is considered a foreground point; otherwise it is considered a background point.
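A sketch of Equation (7) plus the thresholding step described above; the threshold value is an assumed placeholder, since the paper determines it experimentally.

```python
import numpy as np

def alpha_template(frame, background, thresh=25):
    """Equation (7) with thresholding (sketch): |f_n - Bkground_n|
    binarized into the alpha template (255 = foreground)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)
```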

Figures 3 and 4c,f show extraction results obtained with the background subtraction method presented above. The first three images (Figure 3a) display the segmentation results for frames 57, 59, and 64 of the “mother–daughter” sequence; the last three (Figure 3b) are the segmentation results for frames 3, 14, and 47 of the “Akiyo” sequence, respectively. Figure 3 shows clearly that the new method works well on video sequences that contain no shadows.

Figure 3

Extraction results for video sequences containing no shadows, obtained by the proposed background subtraction method.

Figure 4

The extraction results in the presence of shadows. (a) The 1st frame of “Table”. (b) VOP of (a). (c) The extraction results of (a). (d) The 3rd frame of “Table”. (e) VOP of (d). (f) The extraction results of (d).

Figure 4 shows the extraction results for the 1st and 3rd frames of the video sequence “Table”. As can be seen, the existence of shadows seriously degrades the accuracy of extraction, even though the moving object itself remains intact. The shadows must therefore be eliminated to ensure accurate extraction results.

Before shadow removal, edge detection is performed on the foreground region obtained above. Figure 5 shows the binary Sobel edge detection results for (c) and (f) of Figure 4.
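For completeness, a sketch of the Sobel edge step restricted to the extracted foreground; it uses scipy.ndimage, and the gradient-magnitude threshold is an assumption, not a value given in the paper.

```python
import numpy as np
from scipy import ndimage

def foreground_edges(frame, alpha, thresh=100.0):
    """Sobel edge map of the extracted foreground region (sketch)."""
    fg = np.where(alpha == 255, frame, 0).astype(np.float32)
    gx = ndimage.sobel(fg, axis=1)  # horizontal gradient
    gy = ndimage.sobel(fg, axis=0)  # vertical gradient
    mag = np.hypot(gx, gy)
    return np.where(mag > thresh, 255, 0).astype(np.uint8)
```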

Figure 5

Sobel edge detection results of “Table”. (a) Sobel edge detection result of the 1st frame of “Table”. (b) Sobel edge detection result of the 3rd frame of “Table”.

A large number of experiments and observations show that, compared with the edges of video objects, shadow edges are much simpler: they are adjacent to object points and are distributed mainly on the outer contour. Based on these attributes of shadow edges, we first detect and eliminate shadow edges in the edge map (which contains both moving object edges and shadow edges), and then reconstruct the moving object from the remaining edges. This approach not only reduces the computational load greatly but also makes shadows easier to detect.

3. Shadow detection and removal

3.1. Shadow suppression in HSV

Existing shadow-suppression approaches based on color characteristics concentrate mainly on the RGB and HSV color spaces. In RGB space, perceived color differences correspond poorly to computed differences, and the correlation among the three components often makes detection less effective. These shortcomings are largely overcome in HSV space, which reflects intensity and color information better and has better perceptual color consistency. For shadow detection, relative to the background pixels, the V component drops noticeably, which makes it an important cue for distinguishing shadows from foreground regions; the S component is small, and its difference from the background is negative; the H component hardly varies. We therefore first eliminate shadows in HSV color space according to the rule in (8).

$$
S(x,y) = \begin{cases} 0 & \text{if } \alpha \le \dfrac{I_V(x,y)}{B_V(x,y)} \le \beta,\ \ I_S(x,y) - B_S(x,y) \le T_S,\ \text{and}\ \left| I_H(x,y) - B_H(x,y) \right| \le T_H \\ 255 & \text{otherwise} \end{cases} \tag{8}
$$

In (8), α and β bound the intensity ratio (0 < α < β < 1), while T_S and T_H are the saturation and hue thresholds, respectively. Figure 6 shows the foreground edge images of the 1st and 3rd frames of “Table” after HSV shadow suppression, with α = 0.1, β = 0.9, T_S = T_H = 0.2.
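A sketch of the HSV suppression rule of Equation (8), using OpenCV's HSV conversion; scaling H, S, and V to [0, 1] so that the quoted thresholds apply is our assumption about the intended normalization.

```python
import cv2
import numpy as np

def hsv_shadow_suppression(frame_bgr, bg_bgr, alpha=0.1, beta=0.9,
                           t_s=0.2, t_h=0.2):
    """Equation (8) (sketch): mark a pixel as shadow (S = 0) when its
    V ratio, S difference, and H difference against the background all
    fall within the thresholds; otherwise keep it as foreground (255)."""
    I = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    B = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    i_h, i_s, i_v = I[..., 0] / 180.0, I[..., 1] / 255.0, I[..., 2] / 255.0
    b_h, b_s, b_v = B[..., 0] / 180.0, B[..., 1] / 255.0, B[..., 2] / 255.0
    ratio = i_v / np.maximum(b_v, 1e-6)  # guard against division by zero
    shadow = ((alpha <= ratio) & (ratio <= beta)
              & (i_s - b_s <= t_s)
              & (np.abs(i_h - b_h) <= t_h))
    return np.where(shadow, 0, 255).astype(np.uint8)
```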

Figure 6

Results of Figures 4a and 4d after HSV suppression. (a) Result of Figure 4a after HSV suppression. (b) Result of Figure 4d after HSV suppression.

3.2. Shadow removal based on shadow position

In most cases, shadow suppression in the HSV color space is effective. However, it becomes unreliable when the background brightness is low or the background chrominance is similar to that of the foreground. When the background brightness is low, shadows falling on it change its brightness only slightly, so it is very difficult to distinguish all the shadows from the background, as shown in Figure 6. Meanwhile, some pixels inside the moving object may be wrongly eliminated as shadow points.

To overcome the shortcomings of HSV suppression, we propose an approach based on the combination of shadow position and edge attributes, applied after HSV suppression, which can be divided into five steps as shown in Figure 1.

As mentioned in Section 2, when the background texture is simple, most shadow regions have few inner edges and the shadow edges concentrate on the outer contour. After shadow suppression in HSV color space, the outer contour edges of the undetected shadow become much sparser. In addition, many experiments and observations show that the outer contour edges of shadows are usually adjacent to the moving object. These two observations constitute the shadow edge attribute and the shadow position feature, respectively.

Based on the shadow position feature, the initial shadow position is acquired by the proportion method; the precise shadow pixel positions are then determined from the shadow edge attribute, and the remaining shadow is eliminated. The specific steps are as follows.

Step 1: Distribution statistics after HSV suppression

To locate the shadow position, after shadows have been eliminated initially in HSV, each frame of the edge video sequence pVOPalpha_n(j,i) is projected onto the horizontal and vertical directions, as shown in Equation (9). Figure 7 presents the resulting distribution statistics, from which the number of foreground pixels in each row and column can be seen clearly.

$$
\text{if } \mathrm{pVOPalpha}_n(j,i) = 255:\quad \mathrm{Horizontal}_n[i] = \mathrm{Horizontal}_n[i] + 1;\ \ \mathrm{Vertical}_n[j] = \mathrm{Vertical}_n[j] + 1 \tag{9}
$$

where $i \in [0, \mathrm{Height}-1]$, $j \in [0, \mathrm{Width}-1]$, and $n \in [1, n\_\mathrm{NumFrame}]$.
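Equation (9) amounts to row and column counts of the edge mask; a one-line NumPy sketch:

```python
import numpy as np

def edge_projections(edge_mask):
    """Equation (9) (sketch): foreground-edge pixel counts per row
    (Horizontal_n[i]) and per column (Vertical_n[j])."""
    horizontal = np.sum(edge_mask == 255, axis=1)
    vertical = np.sum(edge_mask == 255, axis=0)
    return horizontal, vertical
```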
Figure 7

The horizontal and vertical projection results of Figures 6a and 6b. (a) The horizontal projection result of Figure 6a. (b) The vertical projection result of Figure 6a. (c) The horizontal projection result of Figure 6b. (d) The vertical projection result of Figure 6b.

Step 2: Estimate the approximate shadow position and determine the search direction

After the projection of Step 1, some key positions are found in preparation for determining the shadow position and search direction. These key positions are i_{n,min} and i_{n,max}, the smallest and largest i for which Horizontal_n[i] is nonzero, and j_{n,min} and j_{n,max}, the smallest and largest j for which Vertical_n[j] is nonzero; they are marked in red in Figure 8. The pixel counts are then computed as follows:

$$
N_3 = \sum_{j=j_{n,\min}}^{(j_{n,\min}+j_{n,\max})/2} \mathrm{Vertical}_n[j], \qquad N_4 = \sum_{j=(j_{n,\min}+j_{n,\max})/2}^{j_{n,\max}} \mathrm{Vertical}_n[j] \tag{10}
$$
Figure 8

Marks of key positions for horizontal search.

According to the shadow edge attribute, the approximate shadow position can be judged preliminarily to lie in the sparser part. The correspondence between the approximate shadow position and the search direction is given in Table 1, with an illustrative sketch below it.

Table 1 Shadow position and search direction
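Since Table 1's exact mapping is not reproduced here, the following sketch only illustrates Equation (10) for the vertical projection, under the assumption (from the shadow edge attribute above) that the sparser half of the occupied column range is the shadow side.

```python
import numpy as np

def shadow_side(vertical):
    """Equation (10) (sketch): edge mass in the left and right halves
    of [j_min, j_max]; the sparser half is taken as the shadow side
    (assumed reading of Table 1)."""
    cols = np.nonzero(vertical)[0]
    j_min, j_max = cols[0], cols[-1]
    mid = (j_min + j_max) // 2
    n3 = vertical[j_min:mid + 1].sum()   # left half
    n4 = vertical[mid:j_max + 1].sum()   # right half
    return "left" if n3 < n4 else "right"
```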

Step 3: Search for possible adjacent positions between shadows and moving objects

The shadow position feature states that shadow outer contour edges are usually adjacent to the moving object. Step 2 yields the search direction of each frame, marked by red arrows in Figure 7. The adjacent position can then be searched for with the following strategy:

$$
\begin{aligned} &\text{if } \mathrm{Horizontal}_n[i] \ne 0 \ \&\&\ \mathrm{Horizontal}_n[i+m] < 10\ (m = 1, 2, \ldots, 5):\ i_{\mathrm{HAdjacent}} = i \quad (0 \le i < \mathrm{Height}) \\ &\text{if } \mathrm{Vertical}_n[j] \ne 0 \ \&\&\ \mathrm{Vertical}_n[j+m] < 10\ (m = 1, 2, \ldots, 5):\ j_{\mathrm{VAdjacent}} = j \quad (0 \le j < \mathrm{Width}) \end{aligned} \tag{11}
$$

When a probable adjacent i or j appears, the search stops and Step 4 follows; otherwise the search continues.
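A sketch of the search rule of Equation (11) along one projection; the gap threshold of 10 and the run of m = 1, ..., 5 follow the equation, while the early-return convention is ours.

```python
import numpy as np

def find_adjacent(projection, gap=10, run=5):
    """Equation (11) (sketch): the first nonzero bin followed by `run`
    bins all below `gap` is a probable object/shadow adjacency."""
    for idx in range(len(projection) - run):
        if projection[idx] != 0 and np.all(projection[idx + 1:idx + run + 1] < gap):
            return idx
    return None  # no adjacent position found
```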

Step 4: Remove the false adjacent position

The adjacent positions i or j obtained in Step 3 by Equation (11) may well be cuspidal points of the moving object, i.e., false adjacent positions. Such points must be filtered out. The judgment rule, based on the proportion method, is as follows.

$$
\begin{cases} \text{true} & \text{if } \sum_{k=j}^{\mathrm{Width}-1} \mathrm{Vertical}_n[k] < T_3 \\ \text{false} & \text{if } \sum_{k=j}^{\mathrm{Width}-1} \mathrm{Vertical}_n[k] \ge T_3 \end{cases} \tag{12}
$$

If j in Equation (12) is a true adjacent position, it is retained; otherwise it is discarded. This case is just an example for the horizontal direction, assuming the shadow lies on the right; the other seven cases are judged by similar rules.
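A sketch of the proportion rule of Equation (12) for this right-hand case; T_3 is experimentally determined in the paper, so the parameter is left to the caller.

```python
import numpy as np

def is_true_adjacent(vertical, j, t3):
    """Equation (12) (sketch), shadow-on-the-right case: j is a true
    adjacent position only if the remaining edge mass to its right is
    small, i.e. it borders a sparse (shadow) region."""
    return vertical[j:].sum() < t3
```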

Step 5: Eliminate shadow edge points accurately

Once the shadow position has been located accurately, the pixel values of each row and column of the shadow region are counted in the horizontal and vertical directions. Within the shadow region, whenever the count of a row or column is smaller than an experimentally determined threshold, the whole row or column is removed by setting its values to zero in the edge image obtained in Section 2. This is the key to extracting the moving object completely. Figure 9 shows the shadow removal results without and with Step 4.

Figure 9

The shadow removal results of Figure 9a without Step 4 and with Step 4. (a) The 98th frame of “Table” after HSV suppression. (b) The shadow removal result of (a) without Step 4. (c) The shadow removal result of (a) with Step 4.

Finally, the remaining moving object edges are filled and processed with mathematical morphology, and the alpha plane is mapped to the moving object.

4. Experiment results and discussion

To verify the effectiveness of the proposed algorithm, we select two types of video sequences with shadows, as listed in Table 2, to conduct experiments.

Table 2 Selected video sequences

Figure 10 gives the shadow elimination result comparison between the algorithm in [5] (only considering HSV color space) and the proposed algorithm for the 1st and 3rd frames of video sequence “Table”. Figure 11 gives the shadow elimination result comparison between the algorithm in [5] and the proposed algorithm for the 10th and 13th frames of video sequence “Silent”.

Figure 10

Results of the 1st and 3rd frames of “Table” by using the algorithm in [5] and the proposed algorithm. (a) Result of the 1st frame of “Table” by using the algorithm in [5]. (b) Result of the 3rd frame of “Table” by using the algorithm in [5]. (c) Result of the 1st frame of “Table” by using the proposed algorithm. (d) Result of the 3rd frame of “Table” by using the proposed algorithm.

Figure 11

Results of the 10th and 13th frames of “Silent” by using the algorithm in [5] and the proposed algorithm. (a) The 10th frame of “Silent”. (b) Result of the 10th frame of “Silent” by using the algorithm in [5]. (c) Result of the 10th frame of “Silent” by using the proposed algorithm. (d) The 13th frame of “Silent”. (e) Result of the 13th frame of “Silent” by using the algorithm in [5]. (f) Result of the 13th frame of “Silent” by using the proposed algorithm.

In Figure 10, the background and foreground pixel values have high contrast, but the background varies little when the shadow is cast on it. If shadows are suppressed only in the HSV color space, some shadow edges are mistaken for moving object edges.

In Figure 11, the background brightness varies little when the shadow is cast on it; meanwhile, the background is complex and close to the foreground in color. This renders the color invariance ineffective, so some shadow edges are missed or wrongly detected, and some moving object edges are wrongly eliminated as shadow edges. The comparison results in Figures 10 and 11 show clearly that the proposed algorithm overcomes these problems and extracts the complete moving object accurately and robustly.

To further validate the adaptability of the proposed algorithm to the shadow direction, we also shot two video sequences with shadows in different directions. The experimental results are shown in Figure 12.

Figure 12

Results of “Men” and “Wait” by using the proposed algorithm. (a) The 4th frame of “Men”. (b) The 93rd frame of “Men”. (c) Result of the 4th frame of “Men” by using the proposed algorithm. (d) Result of the 93rd frame of “Men” by using the proposed algorithm. (e) The 54th frame of “Wait”. (f) The 58th frame of “Wait”. (g) Result of the 54th frame of “Wait” by using the proposed algorithm. (h) Result of the 58th frame of “Wait” by using the proposed algorithm.

The proposed algorithm can also be applied in more complex situations, such as the case of multiple objects. The experimental results for the video sequence “Men walking” are shown in Figure 13.

Figure 13

Results of “Men walking” by using the proposed algorithm. (a) The 37th frame of “Men walking”. (b) The 38th frame of “Men walking”. (c) Result of the 37th frame of “Men walking” by using the proposed algorithm. (d) Result of the 38th frame of “Men walking” by using the proposed algorithm.

To further test our algorithm in cases where shadows connect with other foreground objects and their shadows, we shot another two video sequences, “Lamp man” and “Car men”. Both sequences contain multiple objects and crossed shadows. The results, shown in Figures 14 and 15, demonstrate that the proposed method copes with such situations effectively.

Figure 14

Results of “Lamp man” by using the proposed algorithm. (a) The 17th frame of “Lamp man”. (b) The 18th frame of “Lamp man”. (c) The 19th frame of “Lamp man”. (d) Result of 17th frame of “Lamp man” before the process of shadow elimination. (e) Result of 18th frame of “Lamp man” before the process of shadow elimination. (f) Result of 19th frame of “Lamp man” before the process of shadow elimination. (g) Result of the 17th frame of “Lamp man” by using the proposed algorithm. (h) Result of the 18th frame of “Lamp man” by using the proposed algorithm. (i) Result of the 19th frame of “Lamp man” by using the proposed algorithm.

Figure 15

Results of “Car men” by using the proposed algorithm. (a) The 2nd frame of “Car men”. (b) The 3rd frame of “Car men”. (c) The 4th frame of “Car men”. (d) Result of 2nd frame of “Car men” before the process of shadow elimination. (e) Result of 3rd frame of “Car men” before the process of shadow elimination. (f) Result of 4th frame of “Car men” before the process of shadow elimination. (g) Result of the 2nd frame of “Car men” by using the proposed algorithm. (h) Result of the 3rd frame of “Car men” by using the proposed algorithm. (i) Result of the 4th frame of “Car men” by using the proposed algorithm.

To test the proposed algorithm's ability to deal with shadows in different directions, results for another three video sequences (“Ball man A”, “Ball man B”, and “Ball man C”) are also presented. These three sequences were shot at the same spot at different times of day: “Ball man A” and “Ball man B” at around 9 and 10 am, respectively, while “Ball man C” was shot at noon under an overcast sky, so its shadows are very weak and much smaller than the others. The proposed method can also be applied in such circumstances. Results are shown in Figures 16, 17, and 18.

Figure 16

Results of “Ball man A” by using the proposed algorithm. (a) The 5th frame of “Ball man A”. (b) The 6th frame of “Ball man A”. (c) The 7th frame of “Ball man A”. (d) Result of 5th frame of “Ball man A” before the process of shadow elimination. (e) Result of 6th frame of “Ball man A” before the process of shadow elimination. (f) Result of 7th frame of “Ball man A” before the process of shadow elimination. (g) Result of the 5th frame of “Ball man A” by using the proposed algorithm. (h) Result of the 6th frame of “Ball man A” by using the proposed algorithm. (i) Result of the 7th frame of “Ball man A” by using the proposed algorithm.

Figure 17

Results of “Ball man B” by using the proposed algorithm. (a) The 5th frame of “Ball man B”. (b) The 6th frame of “Ball man B”. (c) The 7th frame of “Ball man B”. (d) Result of 5th frame of “Ball man B” before the process of shadow elimination. (e) Result of 6th frame of “Ball man B” before the process of shadow elimination. (f) Result of 7th frame of “Ball man B” before the process of shadow elimination. (g) Result of the 5th frame of “Ball man B” by using the proposed algorithm. (h) Result of the 6th frame of “Ball man B” by using the proposed algorithm. (i) Result of the 7th frame of “Ball man B” by using the proposed algorithm.

Figure 18

Results of “Ball man C” by using the proposed algorithm. (a) The 21st frame of “Ball man C”. (b) The 22nd frame of “Ball man C”. (c) The 23rd frame of “Ball man C”. (d) Result of 21st frame of “Ball man C” before the process of shadow elimination. (e) Result of 22nd frame of “Ball man C” before the process of shadow elimination. (f) Result of 23rd frame of “Ball man C” before the process of shadow elimination. (g) Result of the 21st frame of “Ball man C” by using the proposed algorithm. (h) Result of the 22nd frame of “Ball man C” by using the proposed algorithm. (i) Result of the 23rd frame of “Ball man C” by using the proposed algorithm.

To evaluate the validity and effectiveness of the proposed algorithm objectively, we use the criterion proposed by Wollborn and Mech [21] to judge the extraction results. The video object mask spatial accuracy (SA) of each frame is defined in this criterion as follows:

$$
\mathrm{SA} = 1 - \frac{\sum_{(x,y)} A_t^{\mathrm{est}}(x,y) \oplus A_t^{\mathrm{ref}}(x,y)}{\sum_{(x,y)} A_t^{\mathrm{ref}}(x,y)} \tag{13}
$$

A_t^ref(x,y) and A_t^est(x,y) denote the video object mask of the reference frame and the VOP alpha mask we obtained, respectively, and ⊕ is the XOR operation. SA reflects the similarity between the reference mask and the obtained alpha mask in each frame: a higher SA indicates more accurate segmentation, while a lower SA indicates a poorer result.
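Equation (13) is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def spatial_accuracy(est_mask, ref_mask):
    """Equation (13) (sketch): SA = 1 - XOR area / reference area,
    for binary masks with 255 marking the object."""
    est = est_mask == 255
    ref = ref_mask == 255
    return 1.0 - np.logical_xor(est, ref).sum() / ref.sum()
```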

In this article, the reference video object masks are produced by hand. Figures 19 and 20 display the SA of the first 15 frames of the video sequences “Table” and “Silent” under two different shadow elimination methods.

Figure 19

The contrast SA of extracted VOP after using two different shadow elimination methods for “Table” from frame 1 to 15.

Figure 20

The contrast SA of extracted VOP after using two different shadow elimination methods for “Silent” from frame 1 to 15.

Figure 19 shows clearly that, with shadow suppression in HSV alone, the SA ranges only from 0.11 to 0.38, much lower than that of the proposed algorithm, which ranges from 0.62 to 0.80. In Figure 20, the SA of the proposed algorithm even stays above 0.90. All of these results indicate the validity and accuracy of the proposed algorithm.

Tables 3, 4, 5, 6, 7, 8, 9, and 10 list the SA of extracted results by using the proposed method for the first 14 frames of video sequences “Men”, “Wait”, “Men walking”, “Lamp man”, “Car men”, “Ball man A”, “Ball man B”, and “Ball man C”, respectively.

Table 3 SA of “Men” from frame 1 to 14
Table 4 SA of “Wait” from frame 1 to 14
Table 5 SA of “Men walking” from frame 1 to 14
Table 6 SA of “Lamp man” from frame 1 to 14
Table 7 SA of “Car men” from frame 1 to 14
Table 8 SA of “Ball man A” from frame 1 to 14
Table 9 SA of “Ball man B” from frame 1 to 14
Table 10 SA of “Ball man C” from frame 1 to 14

Considering that the parameters are set heuristically on the test sequences, machine learning techniques could strengthen the adaptability of parameter determination. Franek and Jiang [22] address the parameter selection problem in image segmentation and present a novel unsupervised framework for choosing parameters automatically. A supervised learning algorithm for quantum neural networks, based on a novel quantum neuron node implemented as a very simple quantum circuit, is proposed and investigated in [23]. Methods such as PSO, ML, and SVM also work well in overcoming the inadaptability of parameter selection [24–26]. Building on these machine learning references, we will adopt a revised SVM method in future work to realize automatic and adaptive parameter determination.

5. Conclusion

In this article, we propose an effective background-difference approach to shadow detection and suppression based on shadow position and edge attributes after HSV shadow removal. Compared with other methods, it can locate shadows in various positions and remove them accurately even when the chrominance invariance principle fails or the background texture color is similar to that of the moving objects. Meanwhile, the proposed method preserves the completeness of the extracted moving object in circumstances where most color-based shadow removal methods do not work well. The experimental results also demonstrate that the proposed method is simple and robust.

References

1. Jiang H, Ardö H, Öwall V: A hardware architecture for real-time video segmentation utilizing memory reduction techniques. IEEE Trans. Circuits Syst. Video Technol. 2009, 19(2):226-235.
2. Huang SS, Fu LC, Hsiao PY: Region-level motion-based foreground segmentation under a Bayesian network. IEEE Trans. Circuits Syst. Video Technol. 2009, 19(4):522-531.
3. Wang Y: Real-time moving vehicle detection with cast shadow removal in video based on conditional random field. IEEE Trans. Circuits Syst. Video Technol. 2009, 19(3):437-441.
4. Wren CR, Azarbayejani A, Darrell T, Pentland AP: Pfinder: real-time tracking of the human body. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19(7):780-785. 10.1109/34.598236
5. Yong H, Tian JW, Chu Y, Tang QL, Liu J: Spatiotemporal smooth models for moving object detection. IEEE Signal Process. Lett. 2008, 15:497-500.
6. McHugh JM, Konrad J, Saligrama V, Jodoin P-M: Foreground-adaptive background subtraction. IEEE Signal Process. Lett. 2009, 16(5):390-393.
7. Chien SY, Ma SY, Chen LG: Efficient moving object segmentation algorithm using background registration technique. IEEE Trans. Circuits Syst. Video Technol. 2002, 12(7):577-586. 10.1109/TCSVT.2002.800516
8. Haritaoglu I, Harwood D, Davis LS: W4: real-time surveillance of people and their activities. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22(8):809-830. 10.1109/34.868683
9. Tsai DM, Lai SC: Independent component analysis-based background subtraction for indoor surveillance. IEEE Trans. Image Process. 2009, 18(1):158-167.
10. Piccardi M: Background subtraction techniques: a review. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 4, 10-13 Oct 2004, 3099-3104.
11. Prati A, Mikic I, Trivedi MM, Cucchiara R: Detecting moving shadows: algorithms and evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25(7):918-923. 10.1109/TPAMI.2003.1206520
12. Nadimi S, Bhanu B: Physical models for moving shadow and object detection in video. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26(8):1079-1087. 10.1109/TPAMI.2004.51
13. Stauder J, Mech R, Ostermann J: Detection of moving cast shadows for object segmentation. IEEE Trans. Multimed. 1999, 1(1):65-76. 10.1109/6046.748172
14. Jodoin P-M, Mignotte M, Konrad J: Statistical background subtraction using spatial cues. IEEE Trans. Circuits Syst. Video Technol. 2007, 17(12):1758-1763.
15. Cucchiara R, Grana C, Piccardi M, Prati A, Sirotti S: Improving shadow suppression in moving object detection with HSV color information. In Proceedings of the IEEE Intelligent Transportation Systems Conference, Oakland, CA, 25-29 Aug 2001, 334-339.
16. Tian JD, Sun J, Tang YD: Tricolor attenuation model for shadow detection. IEEE Trans. Image Process. 2009, 18(10):2355-2363.
17. Tsai VJD: A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens. 2006, 44(6):1661-1671.
18. Lu YH, Xin HJ, Kong J, Li BB, Wang Y: Shadow removal based on shadow direction and shadow attributes. In Proceedings of the IEEE International Conference on Computational Intelligence for Modelling, Control and Automation, Sydney, NSW, 28 Nov-1 Dec 2006, 37-41.
19. Pan X: Moving shadow detection based on color information and edge features. J. Zhejiang Univ. (Engineering Science) 2004, 38(4):389-391.
20. Cucchiara R, Grana C, Piccardi M, Prati A: Detecting moving objects, ghosts and shadows in video streams. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25(10):1337-1342. 10.1109/TPAMI.2003.1233909
21. Wollborn M, Mech R: Refined procedure for objective evaluation of video object segmentation algorithms. 1998.
22. Franek L, Jiang X: Adaptive parameter selection for image segmentation based on similarity estimation of multiple segmenters. Lecture Notes Comput. Sci. 2011, 6493:697-708. 10.1007/978-3-642-19309-5_54
23. da Silva AJ, de Oliveira WR, Ludermir TB: Classical and superposed learning for quantum weightless neural networks. Neurocomputing 2012, 75(1):52-60. 10.1016/j.neucom.2011.03.055
24. de Carvalho AB, Pozo A: Measuring the convergence and diversity of CDAS multi-objective particle swarm optimization algorithms: a study of many-objective problems. Neurocomputing 2012, 75(1):43-51. 10.1016/j.neucom.2011.03.053
25. Lorena AC, Costa IG, Spolaor N, de Souto MCP: Analysis of complexity indices for classification problems: cancer gene expression data. Neurocomputing 2012, 75(1):33-42. 10.1016/j.neucom.2011.03.054
26. Gomes TAF, Prudencio RBC, Soares C, Rossi ALD, Carvalho A: Combining meta-learning and search techniques to select parameters for support vector machines. Neurocomputing 2012, 75(1):3-13. 10.1016/j.neucom.2011.07.005


Acknowledgments

The authors would like to express their appreciation to the anonymous reviewers for their insightful comments, which helped improve this article. The study was supported by the National Natural Science Foundation of China (NSFC) under Grants nos. 61075011 and 60675018, and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars from the State Education Ministry of China.

Author information

Correspondence to Shiping Zhu.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhu, S., Guo, Z. & Ma, L. Shadow removal with background difference method based on shadow position and edges attributes. J Image Video Proc 2012, 22 (2012). https://doi.org/10.1186/1687-5281-2012-22
