A dynamic and robust method for detecting and locating vehicles in video image sequences using image processing algorithms
Gholamreza Farahani^{1}
https://doi.org/10.1186/s13640-017-0230-1
© The Author(s). 2017
Received: 17 August 2017
Accepted: 19 November 2017
Published: 19 December 2017
Abstract
There are various methods for tracking moving objects in video images, each of which relies on specific features of the object. Among feature-based tracking methods, color-based algorithms can provide a precise description of the object and track it at high speed. One of the efficient color-based object tracking methods is the mean-shift algorithm. However, if the color of the moving object approaches that of the background, or the background has low contrast and brightness, color information alone is not sufficient for target tracking. In this paper, a new tracking method is proposed that combines object motion information with color information, so that it remains capable of tracking the object even when color information is insufficient. Using a background subtraction method based on a Gaussian mixture, a binary image containing motion information is fed into the mean-shift algorithm. The use of object motion information compensates for the lack of spatial information and increases the robustness of the algorithm, especially under complicated conditions. In addition, to make the algorithm robust against changes in the shape, size, and rotation of the object, the extended mean-shift algorithm is used. Results show the robustness of the proposed algorithm in object tracking, especially when the object color is the same as the background color, and better performance under low-contrast conditions compared with the mean-shift and extended mean-shift algorithms.
Keywords
1 Introduction
Object tracking in video images is a field of applied science whose difficulties have preoccupied many researchers. Although the history of tracking systems goes back to the design of radar systems and GPS positioning, tracking systems based on two-dimensional signal processing do not have a long history. The perennial human interest in robots for industrial or domestic work has contributed to the fast growth of digital image tracking systems. Many studies have been carried out on detection and tracking, and different methods have been proposed; these studies are still ongoing, because so far no method has been found that works well everywhere while remaining fast and robust. The factors that make detection and tracking hard are changes in image brightness, shape changes of the desired object, changes in the number of targets, non-Gaussian noise, and the occlusion problem.
Depending on their weaknesses and strengths, these algorithms are used in different applications such as medicine, the military, and industry. One of the most important drivers of progress in processing algorithms and machine vision techniques is the insertion of intelligent systems into everyday life, such that robots and other intelligent machines are now part of human life.
Image-based object tracking has its own difficulties and complexities. Movement of the object relative to the camera makes the object's image larger or smaller, or rotates it. Variations in environmental light, such as cloud cover or shadows cast during the day, cause changes in the object image. Despite these problems, object tracking has many applications.
Nowadays, traffic control is one of the most important and vital issues, and it requires skilled and experienced people. Traffic engineering is based on collecting and analyzing traffic information, such as the number of vehicles, their speed, and the traffic flow, for control and management. In this regard, different strategies have recently been proposed to control traffic automatically [1–3].
Different sensors, such as microwave vehicle detectors, inductive loop detectors (ILDs), video cameras, and optical and electronic sensors, have been created for the quantitative and qualitative analysis of traffic images [4].
Despite this diversity, these sensors have some disadvantages. For example, the installation and maintenance of ILDs are expensive. ILDs also cannot identify vehicles that are stopped or moving slowly and only count vehicles at one point. Furthermore, several ILD sensors are needed to measure traffic flow or vehicle speed [5].
Video cameras, by covering a wide area of the traffic scene, capture more information than other sensors. Camera installation also costs less than other sensors, and their service and maintenance do not require interrupting the traffic flow. Furthermore, by applying analytical and processing methods to the images received from the cameras, specialists can manage and control traffic at lower cost and more efficiently. However, camera-based systems must contend with several factors:

- Mobile or fixed camera
- Dynamic calibration of the camera
- Image quality, such as noise and video bit rate
- Day and night limitations
- Weather limitations
- Light reflection
- Vehicle speed
- Distance from the camera
- Busy routes and target occlusion
- Processor limitations for complex algorithms
Sochor [6] uses a background subtraction method to detect vehicles and a Kalman filter for tracking, then calculates the movement direction and vehicle speed. The precision of this method decreases with vehicle occlusion and for target detection at night and in rainy weather. Yang et al. [7] use dynamic background updating to detect vehicles and a spatial-temporal profile to calculate the traffic volume and vehicle type. This method loses precision under vehicle occlusion and dark vehicle shadows, but it is suitable for a mobile camera whose background changes very fast. Jazayeri et al. [8] use a hidden Markov model (HMM) to separate the background from vehicles and track them in daylight and at night. The images used by Jazayeri et al. [8] come from an in-car moving camera. This method loses precision for very fast or very slow vehicles and for distant vehicles in the images, and it is suitable for images from a mobile camera whose background changes rapidly.
Chiu et al. [9] use statistical background calculations to detect vehicles and visual features to track them. The precision of this method drops under vehicle occlusion, but it performs acceptably under varying light conditions and different weather.
Zhang et al. [10] detect vehicle headlights and, by pairing them, track vehicles at night. This method performs acceptably at different vehicle speeds, in rainy weather, and in heavy traffic. O'Malley et al. [11] used an HSV (hue, saturation, value) color space model for vehicle tracking at night, finding vehicle rear lamps and pairing them; a Kalman filter is used to improve performance. Salvi et al. [12] used an adaptive threshold method to find and pair vehicle headlights; for vehicle tracking and for separating cars from motorcycles, they analyze the spatial and temporal pattern of the emitted light. This method fails when a vehicle headlight is broken.
Another issue in motion estimation is the speed of estimation. Yan et al. [13] proposed a parallel framework for high-efficiency video coding (HEVC) motion estimation on a 64-core system which, compared with serial execution, achieves more than 30 and 40 times speedup for 1920 × 1080 and 2560 × 1600 video sequences, respectively. Yan et al. [14] also suggested a highly parallel framework for the HEVC coding unit partitioning tree decision on the Tile64 platform that achieves on average more than 11 and 16 times speedup for 1920 × 1080 and 2560 × 1600 video sequences, respectively, without any coding efficiency degradation.
2 Object detection
Object detection in different environments is one of the important subjects and challenges in machine vision. Many parameters, such as lighting conditions, size, and motion state (speed and acceleration), affect the detection results. Detection is the first step in automatic machine vision systems, and problems created in this step directly affect the later steps.
Detection systems usually include three main parts: object search and motion detection, feature extraction, and classification. The detection part uses modeling algorithms and background elimination. After the moving object is obtained, to confirm that it is the desired object, its features should be extracted and compared with the reference object features.
The algorithms used for feature extraction are highly varied, and each has different capabilities. One of the existing methods in image processing and machine vision is detection based on the color histogram. This method is fast, but it is not stable under conditions such as weak contrast, a background of the same color, and occlusion. Color histograms, in addition to their low computational cost and robustness against size variations and rotation, are also robust to light variations and shadow because they are computed in the HSV space [15]. In this section, some detection methods, including background modeling and motion detection, are reviewed.
2.1 Scene analysis system
In the scene segmentation block, scene pixels are classified into background and objects, and the features necessary for processing are then extracted from the objects. Segmentation can be based on spatial, temporal, or spatial-temporal methods. Spatial segmentation methods use edges, regions, texture, color, corners, and contours.
Temporal segmentation methods used in video rely on the subtraction of two or more frames or subtraction from the background. Spatial-temporal segmentation methods, such as optical flow [16] and inter-frame subtraction, use spatial and temporal information simultaneously.
As shown in Fig. 2, a spatial-temporal segmentation method includes the image acquisition system, background updating, change detection, object positioning, and object tracking. Some methods do not use a background, but the most common methods use a static or dynamic background [17].
One of the simplest methods of object detection is to use the distance between pixels and the reference image pixels [18]. In this method, the slightest change in appearance, such as size and speed, adversely affects the detection results. Therefore, geometric features such as edges and corners are added, which require a large amount of calculation, especially in complicated scenes.
Another simple method is thresholding of gray images. In this method, thresholding is carried out at different levels, and the object is then identified according to the changes in the image. Carrillo et al. [19] proposed a threshold-based sperm morphology identifier and used it to separate sperms.
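As a minimal sketch of gray-level thresholding (the image and threshold value here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical 20 x 20 gray image: a bright object on a dark background.
gray = np.zeros((20, 20), dtype=np.uint8)
gray[5:10, 5:10] = 180

# Pixels brighter than the threshold T are labeled as object (1).
T = 128
binary = (gray > T).astype(np.uint8)
```

In practice the threshold may be applied at several levels, or chosen adaptively from the image histogram, as in the multi-level schemes described above.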
Change detection is used to find objects. To determine the position of an object in the image using some of its features, the object must be searched for over the whole image. Different algorithms have been proposed for this, block matching being one of them. In these algorithms, the search space can be reduced using wavelet subspaces or background modeling.
2.2 Background modeling
A group of studies uses frame subtraction, that is, the subtraction of the input image from a reference image. In this case, the reference image changes according to the variations of the input images, which is called background learning [20]. One of the problems in background updating is objects that remain unchanged in the scene for a long time. One solution is to update the points recognized as background model pixels with a low forgetting coefficient and the points recognized as moving objects with a high forgetting coefficient [21]. Other problems are lighting conditions, variation of the scene lighting over time, and object shadow.
There are different background modeling algorithms, such as adaptive filtering, neural networks, and Gaussian models [22]. In some papers, the background model is obtained by averaging several consecutive frames, and a Gaussian distribution is considered for the background points. If a point falls outside the distribution, it is considered an object inside the scene; otherwise, it is considered background. Calculating the Gaussian model parameters requires the pixel background in the previous frame. Multi-Gaussian models are also used for background modeling [23].
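The single-Gaussian variant described above can be sketched as follows; the forgetting factor `alpha` and the threshold multiplier `k` are illustrative choices, not values from the paper:

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian background model.

    Pixels farther than k standard deviations from the background
    mean are labeled foreground (1); the background statistics are
    updated with forgetting factor alpha at background pixels.
    """
    diff = frame - mean
    fg = (np.abs(diff) > k * np.sqrt(var)).astype(np.uint8)
    bg = fg == 0                       # update only background-like pixels
    mean[bg] += alpha * diff[bg]
    var[bg] += alpha * (diff[bg] ** 2 - var[bg])
    return fg, mean, var

# Toy sequence: a static gray background, then a bright object enters.
rng = np.random.default_rng(0)
mean = np.full((40, 40), 100.0)
var = np.full((40, 40), 25.0)
for _ in range(20):                    # learn the background statistics
    frame = 100 + rng.normal(0, 2, (40, 40))
    _, mean, var = update_background(frame, mean, var)
frame = 100 + rng.normal(0, 2, (40, 40))
frame[10:20, 10:20] = 200.0            # moving object
fg, mean, var = update_background(frame, mean, var)
```

A mixture of several such Gaussians per pixel (as in the GMM used later in this paper) handles multimodal backgrounds such as swaying trees.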
3 Tracking algorithms
One of the growing problems in image processing and machine vision is object tracking, whose aim is to present the position changes of an object in video image sequences. In many applications, it is most important that the tracking method be real-time; therefore, methods should use algorithms with low time cost.
These methods can be classified into three categories: tracking based on region, tracking based on active contour, and tracking based on features and combined methods. In the rest of this section, each of these methods is explained briefly.
3.1 Tracking based on region
In this model, the video image processing system finds a connected region in the image, called a blob. For example, a blob is assigned to each vehicle, and this blob is then tracked over time using a cross-correlation measure. In this algorithm, the number of blobs increases with the diversity of object colors. A Gaussian function is considered for each blob, and during the search, the vehicles are recognized and identified using the Gaussian functions and the distances between blobs. This algorithm is mostly used for detecting large objects [24].
Although this algorithm works well on highways with few vehicles, it cannot manage vehicle occlusion reliably. It also cannot obtain the 3D shape of vehicles; therefore, it cannot meet surveillance requirements in busy scenes with multiple moving objects.
3.2 Tracking based on active contour
An alternative to region-based tracking is tracking based on active contours, or snakes. The main idea is to represent the bounding contour of the object and update it dynamically. Using a contour-based representation instead of a region-based one reduces the computational complexity.
Tracking algorithms based on active contours display the outline of moving objects as a contour that is updated dynamically in consecutive frames. These algorithms describe objects more effectively than region-based algorithms and have been applied to tracking more successfully.
The problem with these algorithms is that handling occlusion is hard, because tracking precision is limited by the weak precision of the contour position. According to the algorithm used to find the object border, active contour methods are classified into snakes and geodesic methods [25].
3.2.1 Snakes

Snake-based methods have two main limitations:

- The efficiency of snakes is low for sequences with complicated backgrounds, for example, backgrounds containing textures with strong boundaries close to the border of the moving object.
- When the intended moving objects are covered by fixed or mobile obstacles, these methods run into trouble.
These are the main reasons why researchers have presented other models that use active contours together with information based on movement, color, texture, randomization procedures, and more appropriate constraints [27].
3.2.2 Geodesic
This is a geometric method, offered as an alternative to the snake method, and it is based on energy minimization. It was used by Caselles et al. [28]. It has several advantages over the snake method; for example, it is not parametric. Its main disadvantage is nonlinearity. In this method, as with snakes, there is a closed curve that should lie within the bounds of the target object. This curve is denoted C(p), with 0 ≤ p ≤ 1.
3.3 Tracking based on feature
In these methods, identification and tracking are carried out based on the features of the object intended for tracking. These features are extracted in each frame of the video image sequence, and the tracking process is carried out by matching features in consecutive frames. This approach is used in different systems [29]. These algorithms adapt rapidly and are used successfully in real-time processing and multi-object tracking. Feature-based tracking algorithms can be broken down into three categories: algorithms based on global features, algorithms based on local features, and algorithms based on dependency graphs.
4 Tracking methods based on color feature
Tracking methods based on color features are robust against changes in camera view angle, object size, rotation, and partial occlusion. This section first introduces the use of color information in object description and describes target localization based on color features. Then the basic mean-shift algorithm and the extended mean-shift algorithm, a method robust to shape and size changes, are presented, and finally the deficiencies of the mean-shift algorithm are explained.
4.1 Usage of color information in object description
One object tracking approach is tracking based on color features. In image processing and machine vision, color is an important feature for object description, because on the one hand its use avoids the complexities of other methods, and on the other hand it offers good robustness. Therefore, with this approach, the amount of processed data can be reduced while maintaining robustness [30].
One criterion that captures the object's color information is the color histogram. This information can be used to decide the presence or absence of the object in the image. In fact, any location in the image with a histogram similar to that of the previous frame can be considered a new position of the object [31].
Tracking in this section is based on the color histogram. A histogram describes the share of different colors in the whole image. To obtain this share, after transforming the image color space into a discrete space, the number of occurrences of each color in the image is counted, and these values form the histogram [32].
An object description based on the color histogram is robust to size, angle, and rotation variations, and under these conditions it changes only gradually [33].
In describing an object with color, different color spaces can be used, such as RGB (red, green, blue) or HSI (hue, saturation, intensity). Each of these color spaces has three components. In the RGB color space, each image pixel is presented as a combination of the three main color components red (R), green (G), and blue (B). In the HSI color space, each pixel is split into hue (H), saturation (S), and intensity (I) components. Describing the object with a color histogram clearly specifies the location of the object in each frame of the image sequence.
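As an illustration, a quantized RGB histogram of an image patch can be computed as follows (the patch contents and the bin count of 8 per channel are illustrative assumptions):

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Quantized RGB histogram of an H x W x 3 uint8 patch.

    Each channel is reduced from 256 levels to `bins` levels, so the
    descriptor has bins**3 components, normalized to sum to 1.
    """
    q = (patch.astype(np.uint32) * bins) // 256          # 0..bins-1 per channel
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

# Hypothetical mostly-red patch
patch = np.zeros((16, 16, 3), dtype=np.uint8)
patch[..., 0] = 200
h = color_histogram(patch)
```

Because the descriptor depends only on which colors occur and how often, it changes little under rotation or moderate scaling of the patch, which is exactly the robustness claimed above.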
As shown in Fig. 4, the histograms of image areas at different locations of the object show no significant difference. This histogram similarity can be a worthy criterion for finding the object in consecutive frames of the image sequence.
4.2 Target localization based on color feature
Because color information does not change over consecutive frames of a video sequence, the color feature is a good criterion for target object localization. The next subsections describe object localization based on the color histogram.
4.2.1 Target reference model
In eq. (2), \( \overrightarrow{q} \) is the target reference model and q_u is the uth component of the vector \( \overrightarrow{q} \).
4.2.2 Target candidate model
The object model searched for in consecutive frames is introduced as the target candidate, whose PDF is presented as \( \overrightarrow{p}\left(\overrightarrow{y}\right) \), where \( \overrightarrow{y} \) indicates the position vector of the center of the target candidate window. To reduce the amount of calculation, each color component can be quantized from 256 levels to m levels. Therefore,
4.2.3 Similarity function
If only the spectral information of an object is considered for finding the target location in subsequent frames, the similarity function will vary widely and the spatial information of the target will be lost. There are different ideas for finding the maxima of the similarity function. Gradient-based optimization methods are generally more complex to implement. Another general method is a block search over the whole frame: a search window slides over the whole frame, the similarity function is calculated at each step, and the \( \overrightarrow{y} \) with the largest similarity function is chosen as the center of the target window. Because the pixels in the window are not equally important, a weighted histogram is used in the histogram calculation [34].
4.2.4 Bhattacharyya distance measure
The similarity function measures the distance between the target candidate and the target reference model. The closer the target candidate is to the target model, the closer the similarity function is to one. When the target candidate exactly matches the target, the similarity function attains its maximum value of one.
In statistics, the Bhattacharyya distance measures the similarity between two probability distributions, and the Bhattacharyya coefficient determines the amount of overlap between two statistical samples. For two distributions with components a_i and b_i, first introduced in 1930 at the Indian Statistical Institute [33], the coefficient is given by eq. (7).

Bhattacharyya coefficient \( =\sum \limits_{i=1}^n\sqrt{a_i\,b_i} \) (7)
With consideration of eq. (6), the similarity function can be expressed as the scalar product of the two vectors \( \left[\sqrt{p_0\left(\overrightarrow{y}\right)}\;\sqrt{p_1\left(\overrightarrow{y}\right)}\dots \sqrt{p_m\left(\overrightarrow{y}\right)}\right] \) and \( \left[\sqrt{q_0}\;\sqrt{q_1}\dots \sqrt{q_m}\right] \).
As the target candidate approaches the reference model, the similarity criterion increases and the defined distance decreases [35].
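The Bhattacharyya coefficient, together with the distance \( d=\sqrt{1-\rho} \) commonly derived from it in mean-shift tracking, can be sketched as:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient rho and distance d between two
    normalized histograms p and q (rho = 1 and d = 0 when p == q)."""
    rho = np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))
    d = np.sqrt(max(0.0, 1.0 - rho))
    return rho, d

# Identical distributions overlap completely.
p = np.array([0.25, 0.25, 0.25, 0.25])
q = np.array([0.25, 0.25, 0.25, 0.25])
rho, d = bhattacharyya(p, q)
```

The coefficient is exactly the scalar product of the square-root histogram vectors described above, so maximizing similarity is equivalent to minimizing the distance d.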
4.2.5 Target localization
To find the object location in the current frame, the distance criterion between \( \overrightarrow{y} \) and the target model should be minimized. The localization process begins from the initial \( \overrightarrow{y} \) of the previous frame, in a neighborhood of the current window, and the distance criterion is calculated at each step. Among all the obtained \( \overrightarrow{y} \), the one that minimizes the distance is chosen as the current target location. Clearly, if the object moves so fast that it leaves the window, the target location cannot be found. Implementing this method also incurs a high computational cost.
4.3 Mean-shift algorithm
1. The location and size of the window are determined manually by the operator.
2. The mean-shift vector is computed (eq. (19)).
3. The search window is moved along the mean-shift vector, and the center of the window is placed at the end of the vector.
4. At the new window location, the mean-shift vector is recalculated.
5. Step 3 is repeated until convergence is reached or the stop condition is met.
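The steps above can be sketched over a 2-D weight image, where each pixel's weight stands in for its similarity to the target model (the window half-size and stopping threshold are illustrative assumptions):

```python
import numpy as np

def mean_shift(weights, y, half=8, max_iter=20, eps=0.5):
    """Move the window center y toward the weighted centroid of
    `weights` inside the window until the shift falls below eps."""
    y = np.asarray(y, dtype=float)
    for _ in range(max_iter):
        r0 = max(int(y[0]) - half, 0)
        c0 = max(int(y[1]) - half, 0)
        win = weights[r0:int(y[0]) + half + 1, c0:int(y[1]) + half + 1]
        rows, cols = np.nonzero(win)
        if rows.size == 0:
            break
        w = win[rows, cols]
        new_y = np.array([r0 + np.average(rows, weights=w),
                          c0 + np.average(cols, weights=w)])
        shift = np.linalg.norm(new_y - y)
        y = new_y
        if shift < eps:                 # convergence reached
            break
    return y

# Weight image with a single dense blob centered at (30, 40)
w = np.zeros((60, 60))
w[27:34, 37:44] = 1.0
center = mean_shift(w, y=(24, 34))
```

Each iteration corresponds to steps 2-4 above: compute the shift toward the local mode, move the window, and recompute.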
The mean-shift algorithm is a nonparametric recursive algorithm for finding the local modes of a PDF. It uses a kernel density estimator (KDE) to estimate the PDF.
Generally, PDF estimation is carried out with parametric or nonparametric methods [37, 38]. One nonparametric method for PDF estimation is the kernel method. The idea of the kernel estimator was raised in 1956 by Rosenblatt [39] and in 1962 by Parzen [40], and study of extensions of the kernel estimator continues to this day.
The mean-shift algorithm determines the target location by finding the maximum of the similarity function. To optimize the gradient method, the samples of the target histogram are convolved with a mask, which is generally an isotropic kernel.
With m(x) defined as the kernel-weighted sample mean (in its standard form, \( m\left(\overrightarrow{x}\right)=\frac{\sum_i K\left({\overrightarrow{x}}_i-\overrightarrow{x}\right)\,{\overrightarrow{x}}_i}{\sum_i K\left({\overrightarrow{x}}_i-\overrightarrow{x}\right)} \)), eq. (18) becomes eq. (20).
4.3.1 Mean-shift calculations
As explained in subsection 4.3, the mean-shift algorithm uses the color feature to find the target location in subsequent frames. To use this feature, the color histogram of the object is calculated and weighted. Target localization finds the position that has the greatest similarity to the target reference model. Therefore, a similarity criterion \( \rho \left(\overrightarrow{p}\left(\overrightarrow{y}\right),\overrightarrow{q}\right) \) between the target reference model \( \overrightarrow{q} \) and the target candidate \( \overrightarrow{p}\left(\overrightarrow{y}\right) \) is defined and measured at each step. The maximum of this function marks the location of the target. To reduce calculation and processing time, the algorithm finds the maximum points of the function with a gradient-based method.
4.4 Extended mean-shift algorithm
The tracking algorithm proposed by Comaniciu et al. [36] does not adapt well to variations in object shape and size. To overcome this problem, the mean-shift algorithm is extended: instead of calculating only the local modes, it also estimates a covariance matrix that captures the shape of the local modes.
With the estimated parameters denoted V_k and θ_k, the two following essential steps are considered.
2) Given the values of the a_i coefficients, the expression g is maximized with respect to V^{(k)} and θ^{(k)}. Because the coefficients a_i are constant, this maximization is carried out by maximizing the following function.
4.5 Deficiencies of the mean-shift algorithm
Despite its significant advantages, the mean-shift algorithm loses its capability in some circumstances and target tracking stops [44, 45]. Because this method uses the color feature to describe the object, it loses its efficiency under complex conditions in which the object's color information cannot describe it, or does not give a precise description of the object's position. This usually occurs when the color of the target object is too close to the background color, or when the image contrast is so low that the color histogram cannot describe the object precisely. Moreover, the algorithm has no information about the movement or location of the object. Under these conditions, its accuracy drops sharply and target localization becomes difficult.
In this paper, a method is proposed that, by providing the mean-shift algorithm with spatial information about the object, is robust under complicated dynamic conditions and in situations where color information does not give a precise description of the object.
5 Methods/experimental
To present a robust tracking method and eliminate the limitations of the algorithm under complicated dynamic conditions, this section proposes a combined algorithm that provides object movement information to the mean-shift algorithm.
Using object movement information can compensate for the lack of spatial information in the mean-shift algorithm. With this information, the algorithm is also robust under conditions where the object's color information is not sufficient for tracking. In fact, color and object movement information can be used as complements of each other.
Object movement information is provided by a background subtraction method. The output of the Gaussian mixture model (GMM) background subtraction algorithm is a binary image in which moving points have value 1 and fixed points have value 0 [46]. The output image is not ideal and usually includes fake points (points that have wrongly been labeled 1). To obtain better results from the output image containing movement information, post-processing operations should be applied to it.
5.1 Postprocessing operation
5.1.1 Reduction of shade effect
The shade effect can cause a pixel to be labeled as moving and degrade the performance of the algorithm. Therefore, it is necessary to reduce the shade effect, where possible, before the subtraction operation. To overcome the shade effect, shaded pixels are determined using color intensity and brightness intensity simultaneously.
5.1.2 Noise reduction and connected component analysis
To eliminate image noise, a median filter is used in this paper. To eliminate small fake points and to fill holes and empty regions of the image, a series of morphological operations is applied: a morphological opening filter eliminates the fake points, and a morphological closing filter fills the holes. Then, connected component analysis is used to unify the separate points of the image [47].
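A sketch of this post-processing chain using SciPy (the filter and structuring-element sizes are illustrative choices, not values from the paper):

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask):
    """Post-process a binary motion mask: median filtering for
    impulse noise, opening to drop fake points, closing to fill
    holes, then connected-component labeling."""
    m = ndimage.median_filter(mask.astype(np.uint8), size=3)
    m = ndimage.binary_opening(m, structure=np.ones((3, 3)))
    m = ndimage.binary_closing(m, structure=np.ones((3, 3)))
    labels, n = ndimage.label(m)       # one label per moving blob
    return labels, n

# Toy mask: a true moving object with a hole, plus one fake point.
mask = np.zeros((40, 40), dtype=np.uint8)
mask[10:25, 10:25] = 1
mask[12, 15] = 0                       # hole inside the object
mask[35, 5] = 1                        # isolated fake point
labels, n = clean_mask(mask)
```

After cleaning, the fake point is gone, the hole is filled, and a single labeled component remains, which is the input the tracking stage expects.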
5.2 Description of proposed algorithm
To eliminate the shortcomings of the mean-shift algorithm and make it robust under complicated dynamic conditions, the simultaneous use of spatial information with the mean-shift algorithm is proposed (eq. (41)).
In the proposed eq. (45), the first term is the same as in the mean-shift equation, and the second term includes spatial information to compensate for the shortcomings of the mean-shift algorithm.
In eq. (45), the proposed method not only provides spatial information under conditions where color information is not enough and the mean-shift algorithm would lose the target, but also moves the object position vector toward the target.
5.3 Explanation of proposed algorithm operation
1. The mean-shift algorithm tracks the target correctly. In this condition, according to Fig. 12, the correction vector obtained from the binary image containing spatial information guides the algorithm toward the denser points (the target).
2. The mean-shift algorithm is not capable of tracking the target. As shown in Fig. 12, when the mean-shift algorithm tracks the target correctly, the proposed algorithm aids the convergence of the algorithm; but, as explained before, when the color histogram cannot give a discriminable description of the object, the mean-shift algorithm loses its performance and the mean-shift vector gradually moves away from the main target. Therefore, the tracking error increases significantly.
Also in this paper, to make the method robust against object size and shape variations, the mean-shift algorithm is replaced with the extended mean-shift algorithm explained in subsection 4.4.
1. The target reference model q_u is determined.
2. The value of the similarity function is evaluated, considering the search window position in the current frame, and the target candidate model p_u is calculated.
3. The GMM background subtraction algorithm is used to create a binary image containing the movement information, and the series of post-processing operations is applied to create an optimized image.
4. The weight values ω_i are calculated according to eq. (27).
5. Using ω_i, the a_i coefficients are calculated.
6. According to eq. (46), the target candidate position \( \overrightarrow{\theta} \) is determined.
7. Using \( \overrightarrow{\theta} \), the value V is calculated according to eq. (42).
8. If the convergence conditions hold, the algorithm stops; otherwise, with the substitution k ← k + 1, the algorithm repeats from step 2.
To avoid the algorithm falling into an endless loop, a stop condition is usually considered: if the number of repetitions exceeds a defined limit, the algorithm stops.
5.3.1 Tracking parameters
In this paper, the Epanechnikov kernel estimator function is used in a feature space based on the RGB color histogram. To reduce the amount of computation, the color space is quantized into an 8 × 8 × 8 space; therefore, the color histogram vector is reduced to a vector of length 512.
Also, to adapt the search window size to variations in object size, after the convergence condition is established, the similarity distance criterion is calculated for three window sizes: smaller, larger, and equal (window size variations are usually ±10% of the window size). Among them, the window size with the minimum similarity distance is chosen. In the proposed method, the convergence condition is that the similarity distance value at each step be less than the threshold value 0.01.
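This scale adaptation step can be sketched as follows; the quadratic distance profile is a hypothetical stand-in for the Bhattacharyya distance as a function of window size:

```python
import numpy as np

def adapt_window(dist_fn, size, step=0.10):
    """After convergence, evaluate the similarity distance for the
    current window size and sizes +/-10%, keeping the best one."""
    candidates = [size * (1 - step), size, size * (1 + step)]
    dists = [dist_fn(s) for s in candidates]
    return candidates[int(np.argmin(dists))]

# Hypothetical distance profile with its minimum near size 22:
# the larger candidate window wins.
best = adapt_window(lambda s: (s - 22.0) ** 2, size=20.0)
```

In the real tracker, `dist_fn` would recompute the target candidate histogram at each candidate size and return its Bhattacharyya distance to the reference model.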
In the following, implementation results of the proposed algorithm on a number of video image sequences, especially under complicated conditions, are presented; all experiments were carried out in the MATLAB environment.
6 Results and discussion
As shown in Figs. 16 and 18, the proposed algorithm based on mean-shift gives good target-tracking results in comparison to the original mean-shift algorithm. Evaluation of the proposed method on other image sequences revealed circumstances in which it made tracking errors, but never more than the mean-shift algorithm.
As shown in Fig. 21, the region of the histogram related to the object changes substantially as the object deforms; the extended mean-shift algorithm is therefore a better option for handling this situation. The significant reduction in tracking error of the proposed algorithm based on extended mean-shift in Fig. 20 confirms this.
As shown in Fig. 23, the object position is tracked correctly by the proposed method. When the color information of the target is the same as that of its background, the error of the mean-shift algorithm jumps significantly and it is unable to track the target, whereas the proposed algorithm has a smaller error and finds the target correctly.
In Fig. 24, the target is a vehicle, and tracking is based on a region whose color is distinct from the background.
Since the mean-shift algorithm is a color-based method, when the color information cannot describe the object precisely, or the object's color is close to the background color, the algorithm fails to track the object. To address this limitation of detection methods based on color histograms, several strategies, including hybrid algorithms, have been proposed.
Nummiaro et al. [49] used a particle filter to estimate the object location within the mean-shift algorithm, which makes the algorithm robust under occlusion. However, the problem of lost color information under complicated conditions remains, and the method is computationally very expensive. To reduce the computation related to the particle filter, Fa-Liang et al. [50] proposed using two different particle motion patterns to estimate the object location. Ning et al. [45] proposed combining color and texture information for precise object tracking based on the mean-shift algorithm; although this reinforces the color information, the loss of information during object movement was not studied.
Chen et al. [51] proposed using a Kalman filter in the mean-shift algorithm: the Kalman filter first estimates the object location, and the mean-shift algorithm then determines the exact target position. Although the Kalman filter supplies the object location, using only color information as the key feature reduces the accuracy of the algorithm under complicated image conditions. Zheng et al. [52] described the object with both color and texture features to avoid the problems caused by missing color information and brightness fluctuations. Xuguang et al. [53] first proposed a model based on oriented histograms and then a mean-shift-based tracking algorithm for gray images. Ju et al. [54] proposed a tracking algorithm based on a fuzzy histogram to reduce noise interference in the mean-shift algorithm; this method loses performance under low-contrast conditions and requires heavy computation.
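The general Kalman-plus-mean-shift scheme described above can be illustrated with a generic constant-velocity sketch; this is not the specific formulation of [51], and all names here are illustrative.

```python
import numpy as np

# Constant-velocity Kalman filter: state x = [px, py, vx, vy], dt = 1 frame.
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)    # state-transition matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only position is measured

def kalman_predict(x, P, Q):
    """Prediction step: propagate state and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z, R):
    """Correction step with measurement z (a 2-vector position)."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # correct predicted state
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In a scheme like that of [51], the measurement `z` would be the position returned by a mean-shift search started from the predicted location `H @ x`, so the filter supplies a good starting point and the color-based search supplies the refinement.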
7 Conclusions
In this paper, a method to make the mean-shift algorithm robust under dynamically complex conditions is proposed. Simulation results on video image sequences show that the proposed method tracks objects precisely. However, on sequences in which the shape of the object changes significantly across consecutive frames, the proposed method built on the standard mean-shift algorithm has low precision. To overcome this limitation, mean-shift should be replaced with a method that adapts to object deformation; the use of the extended mean-shift algorithm is therefore proposed. Under normal conditions, under complicated conditions in which color information is lost, and under low contrast, the proposed method with the extended mean-shift algorithm tracked vehicles properly and with lower tracking error. The proposed method can therefore be used as a precise tracking method in both normal and complicated conditions.
Declarations
Acknowledgements
Not applicable
About the author
Gholamreza Farahani received his BSc degree in electrical engineering from Sharif University of Technology, Tehran, Iran, in 1998 and MSc and PhD degrees in electrical engineering from Amirkabir University of Technology (Polytechnic), Tehran, Iran in 2000 and 2006, respectively. Currently, he is an assistant professor with the Institute of Electrical and Information Technology, Iranian Research Organization for Science and Technology (IROST), Iran. His research interest is signal processing especially image processing.
Availability of data and materials
The data used are from the standard video database CAVIAR.
Funding
This work was partly supported by an Iranian Research Organization for Science and Technology (IROST) grant.
Authors’ contributions
This paper makes two main contributions: (1) a robust method for object tracking under dynamically complex conditions (loss of color information and low contrast) and (2) an adaptive method for tracking deforming objects, based on the extended mean-shift algorithm.
Ethics approval and consent to participate
The author confirms the originality of this paper and consents to participate.
Consent for publication
The author fully consents to the publication of this paper.
Competing interests
The author declares that he has no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 H Cho, SY Hwang, High-performance on-road vehicle detection with non-biased cascade classifier by weight-balanced training. EURASIP J Image Video Process 16, 1 (2015). https://doi.org/10.1186/s13640-015-0074-5
 X Zhuang, W Kang, Q Wu, Real-time vehicle detection with foreground-based cascade classifier. IET Image Process. 10(4), 289–296 (2016)
 T Tang, S Zhou, Z Deng, H Zou, L Lei, Vehicle detection in aerial images based on region convolutional neural networks and hard negative example mining. Sensors 17(2), 336 (2017). https://doi.org/10.3390/s17020336
 AC Chachich, A Pau, K Kenedy, E Olejniczak, J Hackney, Q Sun, E Mireles, Traffic sensor using a color vision method. Transp Sensors Controls 29, 156–165 (1996)
 Z Sun, G Bebis, R Miller, On-Road Vehicle Detection Using Optical Sensors: A Review, The 7th International IEEE Conference on Intelligent Transportation Systems (Institute of Electrical and Electronics Engineers (IEEE), Washington, 2004)
 J Sochor, Fully Automated Real-Time Vehicles Detection and Tracking with Lanes Analysis, 18th Central European Seminar on Computer Graphics (CESCG) (Technical University Wien, Smolenice, Slovakia, 2014)
 MT Yang, RK Jhang, JS Hou, Traffic flow estimation and vehicle-type classification using vision-based spatial–temporal profile analysis. IET Comput. Vis. 7(5), 394–404 (2013)
 A Jazayeri, H Cai, JY Zheng, M Tuceryan, Vehicle detection and tracking in car video based on motion model. IEEE Trans Intell Transp Syst 12(2), 583–595 (2011)
 CC Chiu, MY Ku, CY Wang, Automatic traffic surveillance system for vision-based vehicle recognition and tracking. J. Inf. Sci. Eng. 26, 611–629 (2010)
 W Zhang, QMJ Wu, G Wang, X You, Tracking and pairing vehicle headlight in night scenes. IEEE Trans. Intell. Transp. Syst. 13(1), 140–153 (2012)
 R O’Malley, E Jones, M Glavin, Rear-lamp vehicle detection and tracking in low-exposure color video for night conditions. IEEE Trans. Intell. Transp. Syst. 11(2), 453–462 (2010)
 G Salvi, An Automated Nighttime Vehicle Counting and Detection System for Traffic Surveillance, International Conference on Computational Science and Computational Intelligence (Conference Publishing Services (CPS), Las Vegas, 2014)
 C Yan, Y Zhang, J Xu, et al., A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process Lett 21(5), 573–576 (2014)
 C Yan, Y Zhang, J Xu, et al., Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans Circuits Syst Video Technol 24(12), 2077–2089 (2014)
 L Dan, J Qian, Research on Moving Object Detecting and Shadow Removal, 1st International Conference on Information Science and Engineering (Institute of Electrical and Electronics Engineers (IEEE), Nanjing, 2009)
 T Kodama, T Yamaguchi, H Harada, A Method of Object Tracking Based on Particle Filter and Optical Flow, ICCAS-SICE, pp. 2685–2690 (Institute of Electrical and Electronics Engineers (IEEE), Fukuoka, 2009)
 D Gao, J Zhou, Adaptive Background Estimation for Real-Time Traffic Monitoring, IEEE Intelligent Transportation Systems, pp. 330–333 (Institute of Electrical and Electronics Engineers (IEEE), Oakland, CA, 2001)
 T Kawanishi, T Kurozumi, K Kashino, S Takagi, A Fast Template Matching Algorithm with Adaptive Skipping Using Inner-sub Template’s Distances, Proceedings of the IEEE International Conference on Pattern Recognition, pp. 654–657 (Institute of Electrical and Electronics Engineers (IEEE), England, 2004)
 H Carrillo, J Villarreal, M Sotaquira, MA Goelkel, R Gutierrez, A Computer Aided Tool for the Assessment of Human Sperm Morphology, Proceedings of the 7th IEEE International Conference on Bioinformatics and Bioengineering, pp. 1152–1157 (Institute of Electrical and Electronics Engineers (IEEE), Boston, 2007)
 W Long, YH Yang, Stationary background generation: an alternative to the difference of two images. Pattern Recogn. 23(12), 1351–1359 (1990)
 KP Karmann, A Brandt, R Gerl, Moving Object Segmentation Based on Adaptive Reference Images, in Proceedings of the 5th European Signal Processing Conference, pp. 951–954 (Elsevier Science Publishers, Barcelona, 1990)
 PG Michalopoulos, Vehicle detection video through image processing: the Autoscope system. IEEE Trans. Veh. Technol. 40(1), 21–29 (1991)
 J Kan, J Tang, K Li, X Du, Background Modeling Method Based on Improved Multi-Gaussian Distribution (International Conference on Computer Application and System Modeling (ICCASM), Taiyuan, 2010), pp. 22–24
 S Gupte, O Masoud, RFK Martin, NP Papanikolopoulos, Detection and classification of vehicles. IEEE Trans Intell Transp Syst 3(1), 37–47 (2002)
 SHA Musavi, BS Chowdhry, J Bhatti, Object Tracking Based on Active Contour Modeling, International Conference on Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE) (Institute of Electrical and Electronics Engineers (IEEE), Aalborg, Denmark, 2014)
 M Kass, A Witkin, D Terzopoulos, Snakes: Active Contour Models, Proceedings of the European Conference on Automation (Kluwer Academic Publishers, Birmingham, 1987)
 M Rousson, N Paragios, Shape Priors for Level Set Representation, Proceedings of the European Conference on Computer Vision (Springer-Verlag Berlin Heidelberg, Copenhagen, 2002)
 V Caselles, R Kimmel, G Sapiro, Geodesic active contours. Int. J. Comput. Vis. 22(1), 61–79 (1997)
 B Han, L Davis, Object Tracking by Adaptive Feature Extraction, International Conference on Image Processing (Institute of Electrical and Electronics Engineers (IEEE), Singapore, 2004)
 I Kim, MM Khan, TW Awan, et al., Multi-target tracking using color information. Int J Comput Commun Eng 3(1), 11–15 (2014)
 M Mason, Z Duric, Using Histograms to Detect and Track Objects in Color Video (Applied Imagery Pattern Recognition Workshop, Washington DC, 2001)
 P Withagen, K Schutte, F Groen, Likelihood-Based Object Detection and Object Tracking Using Color Histograms and EM (International Conference on Image Processing, Rochester, 2002), pp. 22–25
 A Yilmaz, O Javed, M Shah, Object tracking: a survey. ACM Comput Surv (CSUR) 38(4), 1–45 (2006)
 Q Wang, RK Ward, Fast image/video contrast enhancement based on weighted thresholded histogram equalization. IEEE Trans. Consum. Electron. 53(2), 757–764 (2007)
 D Comaniciu, V Ramesh, P Meer, Kernel-based object tracking. IEEE Trans. Pattern Anal. Mach. Intell. 25(5), 564–577 (2003)
 D Comaniciu, P Meer, Mean shift: a robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 603–619 (2002)
 MR Leadbetter, GS Watson, On the estimation of the probability density. Ann Math Stat 34(2), 480–491 (1963)
 VA Epanechnikov, Nonparametric estimation of a multivariate probability density. Theory Probab Appl 14(1), 153–158 (1969)
 M Rosenblatt, Remarks on some nonparametric estimates of a density function. Ann Math Stat 27(3), 832–837 (1956)
 E Parzen, On estimation of a probability density function and mode. Ann Math Stat 33(3), 1065–1076 (1962)
 W Zucchini, Applied Smoothing Techniques, Part 1: Kernel Density Estimation (Temple University Press, Philadelphia, 2003)
 TM Cover, JA Thomas, Elements of Information Theory (Wiley, New York, 1991)
 MP Wand, MC Jones, Kernel Smoothing (Chapman & Hall/CRC, London; New York, 1995)
 L Wei, L Yining, S Nan, Mean-shift tracking algorithm based on background optimization. J Comput Appl 29(4), 1015–1017 (2009)
 J Ning, L Zhang, D Zhang, C Wu, Robust object tracking using joint color-texture histogram. Int. J. Pattern Recognit. Artif. Intell. 23(7), 1245–1263 (2009)
 C Stauffer, WEL Grimson, Adaptive Background Mixture Models for Real-Time Tracking (IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado, 1999), pp. 246–252
 RC Gonzalez, RE Woods, Digital Image Processing, 3rd edn. (Prentice-Hall, New Jersey, 2008)
 http://datasets.visionbib.com. Accessed 21 Mar 2017.
 K Nummiaro, E Koller-Meier, LV Gool, An adaptive color-based particle filter. Image Vis. Comput. 21, 99–110 (2003)
 C Fa-Liang, Z Yao, C Zhen-Xue, et al., Non-rigid object tracking algorithm based on mean shift and adaptive prediction. Control Decis 24(12), 1821–1825 (2009)
 L Chen, W Li, W Yin, Joint Feature Points Correspondences and Color Similarity for Robust Object Tracking (International Conference on Multimedia Technology (ICMT), Hangzhou, 2011), pp. 403–407
 L Yuan-Zheng, L Zhao-Yang, G Quan-Xue, L Jing, Particle filter and mean shift tracking method based on multi-feature fusion. J. Electron. Inf. Technol. 32(2), 411–415 (2010)
 Z Xuguang, Z Enliang, W Yanjie, A new algorithm for tracking gray object based on mean-shift. Opt Tech 33(2), 226–229 (2007)
 MY Ju, CS Ouyang, HS Chang, Mean-Shift Tracking Using Fuzzy Color Histogram (International Conference on Machine Learning and Cybernetics, Qingdao, 2010), pp. 2904–2908
 Z Zivkovic, B Krose, An EM-like Algorithm for Color-Histogram-Based Object Tracking, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (Institute of Electrical and Electronics Engineers (IEEE), Washington, 2004), pp. 798–803