Detecting and tracking honeybees in 3D at the beehive entrance using stereo vision
© Chiron et al.; licensee Springer. 2013
Received: 15 January 2013
Accepted: 14 October 2013
Published: 5 November 2013
In response to recent needs of biologists, we lay the foundations for a real-time stereo vision-based system for monitoring flying honeybees in three dimensions at the beehive entrance. Tracking bees is a challenging task as they are numerous, small, and fast-moving targets with chaotic motion. Contrary to current state-of-the-art approaches, we propose to tackle the problem in 3D space. We present a stereo vision-based system that is able to detect bees at the beehive entrance and is sufficiently reliable for tracking. Furthermore, we propose a detect-before-track approach that employs two innovative methods: hybrid segmentation using both intensity and depth images, and tuned 3D multi-target tracking based on the Kalman filter and Global Nearest Neighbor. Tests on robust ground truths for segmentation and tracking have shown that our segmentation and tracking methods clearly outperform standard 2D approaches.
There is currently much debate about pesticides and risk assessment procedures. It has been demonstrated that the cumulative effect of pesticides, even at doses below the detectable threshold, weakens bees and causes significant mortality in the colony. Indeed, beekeepers regularly observe bees with abnormal behaviors at the beehive entrance. As far as we know, no quantification or qualification of these abnormal behaviors has been attempted, possibly due to technical difficulties. In this context, it has become urgent to develop new approaches for phytosanitary product evaluation based on measurable indicators.
Thus, in order to meet the needs of biologists, it is essential to collect data on bees at different levels: numbers of bees, trajectories, and behaviors. When done manually with videos, the process is time-consuming and suffers from a lack of precision due to human error. We believe that computer vision can effectively achieve these tasks with accuracy.
Monitoring bees automatically in an uncontrolled outdoor environment involves many constraints. Working in completely natural conditions raises problems such as sudden changes in light and background soiling. However, the main problem is the nature of the target. Bees are small and fast-moving targets, and their motion may be chaotic. There is often a lot of activity around the flight board, in front of the beehive, which results in many occlusions. Given our application's real-time acquisition constraint and the amount of data to be processed, a simple and efficient approach was required.
1.1 Related work
Automated honeybee counters were the first technically feasible application to be introduced. Over the last 40 years, different approaches have been explored: mechanical counters, less intrusive infrared sensors, individual identification by radio frequency identification, and more recently, video-based counters with and without identification. In one of these studies, tagged honeybees were detected using a circular Hough transform, and tag characters were recognized using a support vector machine.
The development of increasingly powerful computers has led to a growing interest among biologists in applications based on computer vision. The papers discussed below used trajectometry based on videos. The whole tracking process can be split into three parts which are presented separately below: segmentation, tracking, and behavior analysis.
For target detection, several methods have been proposed. Detect-before-track approaches are generally based on a segmentation process. Potential targets are associated with existing or new tracks using an assignment method. Many studies based on this approach used background subtraction of varying sophistication (e.g., [8–10]). In one of them, potential false alarms were filtered using a shape (ellipse-based) matching process. In contrast, other methods do not require any background subtraction. Some authors detected bees using the well-known Viola-Jones method, while others introduced an approach based on vector quantization, which is able to detect individual bees among hundreds of walking bees. In another study, flying bats were detected with multiple cameras by applying a direct linear transform. In the case of track-before-detect approaches, the position of each target is first estimated, and the probability that the estimation corresponds to a potential target drives the next estimation. This kind of approach uses a likelihood function based on appearance models (e.g., a precomputed ‘eigenbee’ or an adaptive appearance model [16, 17]).
Many methods have been proposed for tracking. When following only a clearly detected target moving along a simple trajectory, approaches such as nearest neighbor or mean shift may be sufficient. However, tracking multiple targets involves assignment problems because of missed detections and false alarms. The authors of [8, 9] used Global Nearest Neighbor (GNN) for track assignment, instantiation, and destruction. Another study tracked multiple flying targets using a Multiple Hypothesis Tracker (MHT). In contrast to GNN, MHT considers different hypotheses in the assignment decision process. The authors of [15–17, 19] considered a non-linear motion model for their targets and based their tracking on a particle filter, which corresponds to a track-before-detect approach. One study introduced an MRF-augmented particle filter that handles multiple targets at a reasonable computational cost, in contrast with the three other methods [15–17], which are less suitable for multiple interacting targets.
Few studies have looked at behavior analysis based on trajectories. In one of them, supervised k-means clustering was used to detect low-level motion patterns from tracks, and a hidden Markov model was then employed to identify high-level behaviors from the patterns. Other authors went further, improving tracking by adapting the motion model driven by the detected behavior.
Table 1 Papers related to insect tracking. For each study, the table lists the targets (< 15 ants, < 20 ants, < 100 bees, < 100 bats) together with the detection and tracking methods used: adaptive or weighted adaptive appearance models (e.g., color image) with a particle filter supported by a behavior model (idem plus geometric constraints), vector quantization (VQ), tag detection (method not mentioned), combined NN classification of BOF, ABS + ellipse matching, and direct linear transform. Our approach handles < 25 bees using hybrid intensity/depth segmentation with GNN and 3D (re)projection.
This paper is organized as follows. First, Section 2 details the constraints of the application and presents a suitable stereo vision acquisition system. Section 3 introduces our hybrid intensity depth segmentation (HIDS) method, which combines depth and intensity images. Then, Section 4 details an approach based on the Kalman filter and GNN to track multiple targets in 3D. Section 5 presents the segmentation and tracking results, which rely on appropriate ground truths. Finally, Section 6 concludes our work and opens promising perspectives for tracking and behavioral analysis.
2 Acquisition system
In this section we will present the constraints related to our application before summarizing the suitable 3D sensors that were available on the market in 2012. Finally, we will focus on our stereo vision system and detail its configuration.
2.1 Application constraints
Several constraints had to be taken into account in the choice of the 3D camera, such as the number, size, and dynamics of the targets, the lighting conditions, and the background. Each constraint is outlined below:
Size. To ensure accurate counting, the camera needs to capture the entire 50-cm-wide board from where bees enter and leave the hive. Adult bees measure on average 12 mm × 6 mm, so to detect them on the flight board, we require a minimum of 8 pixels per bee on the images. Thus, X_res = (8 pixels / 0.6 cm) × 50 cm ≈ 667 pixels is the minimum horizontal resolution which satisfies this small-target-size constraint.
Dynamics. Bee motion is highly unpredictable. Bees can fly at a speed of 8 m/s, so they can cross the entire flight board while being captured on only one or two images with a conventional 24-fps sensor. Even though mostly slower bees are observed around the beehive, a high-frequency capture system is recommended. Moreover, an average exposure time results in blurring due to wing movement, although this is not a problem for our application.
Light. The acquisitions were performed outdoors, so the lighting conditions are almost impossible to control. Images can contain more bee shadows than bees themselves. Moreover, it is worth noting that sunlight interferes with 3D sensor technologies such as infrared grid projection/sensors (e.g., Microsoft Kinect).
Background. Some authors segmented bees from a white flight board, which would appear to be optimal. In most cases, however, the flight board gradually becomes soiled. Our application was therefore designed to work on a textured flight board (e.g., due to dirt), which could even acquire a color similar to that of bees after a while.
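The figures in the Size and Dynamics constraints above can be verified with a short computation (all input values come from the text):

```python
# Size: a minimum of 8 pixels across a bee's 6-mm width, over a 50-cm board.
x_res = (8 / 0.6) * 50          # (pixels per cm) x board width in cm
print(round(x_res))             # 667 pixels minimum horizontal resolution

# Dynamics: a bee at 8 m/s crossing the 50-cm board under a 24-fps sensor.
crossing_time = 0.50 / 8.0      # 62.5 ms to cross the board
frames = crossing_time * 24     # ~1.5 frames, i.e., one or two images
print(round(frames, 1))
```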
Given the constraints mentioned above, we believe that in view of the high occlusion rate and the chaotic dynamics of the targets, additional data (third dimension) is required to ensure robust detection and tracking of the targets.
2.2 Candidate 3D sensors
We focused our attention on two kinds of 3D sensors (also called 2.5D sensors): time-of-flight (TOF) and stereo vision cameras. In contrast to homemade multiple-camera systems, built-in 3D cameras do not require any calibration and directly provide gray (or RGB) images and the corresponding depth images (also called disparity maps in stereo vision). As we focus on stereo vision systems below, additional information on TOF cameras can be found in the literature.
Table 2 Comparison of camera specifications. The candidates include cameras from Point Grey Research, TYZX (Menlo Park, CA, USA), and a third manufacturer (Hudson, NY, USA), with frame rates ranging from 30 to 90 fps.
Table 3 Comparison of capture with a TOF and a stereo vision camera, rated on four criteria: small target detection, fast-moving target detection, depth map accuracy, and depth map consistency.
2.3 Stereo camera configuration
3 Hybrid intensity depth segmentation
In this section, we highlight the shortcomings of using only intensity images or only disparity images to detect flying bees at the beehive entrance. Accordingly, we introduce our hybrid intensity depth segmentation (HIDS) method, which combines depth and intensity images.
In terms of intensity images, many motion detection methods are based on background modeling. Depending on the conditions, simple methods (e.g., approximated median filtering) can perform nearly as well as more complex techniques (e.g., Gaussian mixture models). However, most methods based on intensity images reveal their limits under difficult conditions. In our application, intensity values were strongly affected by recurrent and rapid changes in lighting, shadows, and reflections. When thresholds for motion detection are lowered, adaptive methods generally tend to incorporate near-static elements into the background too quickly. Even when focusing on small temporal windows, the results are not satisfactory.
The strength of our segmentation method is that it relies first on the depth map, on which potential targets (peaks and holes) are detected, and this is then confirmed using the motion calculated from the corresponding intensity images. Depending on the light, flying bees project shadows onto the flight board, which may be detected as motion on the intensity map. However, it is unlikely that a hole would be observed on the depth map in an area where there is motion, because a detected motion indicates a significant change in texture. Furthermore, significant changes in texture allow matching for disparity computation in most cases and do not result in holes. The combination of both disparity maps and intensity images therefore constitutes the strength of our method: it prevents the false detections that are generally triggered by the shadows of flying bees.
3.1 Flying target detection
Our segmentation method is an extension of standard motion detection methods with adaptive background modeling. The main improvement is the use of the depth information to drive the adaptation of the background intensity model.
The stereo camera provides a pair of grayscale images (left and right) and a corresponding disparity map. Below, I_{t,u,v} refers to the intensity of the pixel at time t and position (u,v), while D_{t,u,v} refers to the distance from the camera at time t and position (u,v). The objective here is to compute two binarized masks based on I and D: a determined depth target mask (DDTM) and an undetermined depth target mask (UDTM). The DDTM represents targets whose depth information may be recovered, and the UDTM represents targets with no directly recoverable depth information.
where Δ_d is a threshold. A morphological opening is then applied to remove the noise in the depth map D.
Unlike intensities, disparity values are generally stable over time regardless of changes in lighting. A small jitter effect (a few millimeters) is caused by imperfections in intensity image matching, but the values remain around an average that corresponds to the real depth. The quality of the depth background DBG depends on the crowding conditions. Increasing the number of frames k used in the median computation and the time Δ_t between two frames improves the robustness of the depth background computation with respect to passing (flying or walking) targets.
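As an illustration, the depth background DBG can be obtained as a pixel-wise median over k frames; the array shapes and depth values below are toy assumptions:

```python
import numpy as np

def depth_background(depth_frames):
    """Pixel-wise median over k depth frames sampled Delta-t apart.

    A passing bee occupies any given pixel in only a few of the k frames,
    so the median recovers the static background depth (DBG).
    """
    return np.median(np.stack(depth_frames), axis=0)

# Toy example: a flat board at depth 100 with a bee passing through one frame.
frames = [np.full((4, 4), 100.0) for _ in range(5)]
frames[2][1, 1] = 60.0            # bee closer to the camera in frame 2
dbg = depth_background(frames)
print(dbg[1, 1])                  # 100.0 -- the passing bee is rejected
```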
where e is the value assigned by the stereo camera to pixels that have an undetermined depth and ∩ is the logical conjunction operator.
where ⊕ is a dilation using the structuring element S_1, ⊖ is an erosion using the structuring element S_2, ∗ is a convolution with the mean filter M, and Δ_m is the threshold for binarization. We chose S_1 to be bigger than S_2.
where ∘ is a morphological opening using the structuring element S_3 to enlarge potential motion regions, and Δ_rm is a threshold for binarization.
where δ is the learning rate used for the adaptation.
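The morphological chain described above (dilation with S_1, erosion with S_2, convolution with a mean filter M, thresholding with Δ_m) can be sketched with scipy.ndimage; the structuring-element sizes and the threshold below are illustrative assumptions, not our tuned parameters:

```python
import numpy as np
from scipy import ndimage

def motion_mask(frame_diff, s1=5, s2=3, mean_size=3, delta_m=10.0):
    """Sketch of the chain: dilate with S1, erode with S2 (S1 bigger than S2,
    as in the text), smooth with a mean filter M, binarize with Delta-m.
    The sizes s1, s2, mean_size and delta_m are illustrative assumptions."""
    d = ndimage.grey_dilation(frame_diff, size=(s1, s1))
    e = ndimage.grey_erosion(d, size=(s2, s2))
    m = ndimage.uniform_filter(e, size=mean_size)
    return m > delta_m

diff = np.zeros((12, 12))
diff[5:7, 5:7] = 50.0              # small moving blob in the difference image
mask = motion_mask(diff)
print(mask.sum() > 0)              # True: the blob survives, slightly enlarged
```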
3.2 Target extraction
where d is the depth and s is the size of the target. Concerning targets without depth information, our initial idea was to approximate their depth from their size, but we rejected this idea because the standard deviation of the models was generally quite high.
4 Multi-target tracking in 3D
In this section, we propose an approach based on the Kalman filter and Global Nearest Neighbor (GNN) to achieve multi-target tracking in 3D (called 3D-GNN below). Each target was associated with a Kalman filter, which was used to estimate the trajectory based on incoming observations. GNN then associated uncertain measurements with known tracks.
The trajectory of a target can be estimated by different methods, including the extended Kalman filter and particle filters. Both are particularly suitable for non-linear systems, and particle filters belong to the track-before-detect approach. The particle filter (PF) approach was not used in our work for two reasons: first, it is difficult to obtain a reasonable computation time for multiple targets (up to 18 in our application), and our prototypes had difficulty maintaining tracks for multiple close targets; second, our segmentation is efficient enough to adopt a detect-before-track approach. Despite the apparently rough dynamics of bees, we acquired frames at a sufficiently high frequency (about 47 fps) to allow us to assume a constant speed model. Thus, an approach based on the standard Kalman filter was suitable for our application.
Table 4 compares the complexity of data association approaches: GNN or JPDA versus PF or MHT.
4.1 Kalman filter model
To ensure coherent tracking in 3D, the model (state and measurement vectors) was defined in camera coordinate space (3D Euclidean space), where the reference sensor (the left imager of our stereo camera) is located at position (0,0,0). The projection of observations from image coordinate space onto camera coordinate space is explained in Section 4.2.
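A minimal sketch of such a constant-velocity Kalman filter in camera coordinates follows; the noise covariances, the number of frames, and the helper name kf_step are illustrative assumptions, not the tuned values of our system:

```python
import numpy as np

dt = 1.0 / 47.0                                # frame interval at ~47 fps

# State [x, y, z, vx, vy, vz] under a constant-speed (constant-velocity) model.
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                     # position += dt * velocity
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only the 3D position is measured
Q = 1e-3 * np.eye(6)                           # process noise (illustrative)
R = 1e-4 * np.eye(3)                           # measurement noise (illustrative)

def kf_step(x, P, z):
    """One predict/update cycle of the standard Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # update with observation z
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Feed observations of a bee moving at constant velocity (1, 0, 0) m/s.
x, P = np.zeros(6), np.eye(6)
for t in range(1, 200):
    x, P = kf_step(x, P, np.array([t * dt, 0.0, 0.0]))
print(round(x[3], 2))                          # estimated vx, close to 1.0
```

Each axis is handled identically; the constant-speed assumption holds because the 47-fps sampling keeps inter-frame displacements small.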
4.2 Projection onto a 3D Euclidean space
with (c_u, c_v) as the stereo camera calibration parameters, which refer to the pixel coordinates of the principal point, and (f_u, f_v) as the focal lengths in pixels along the x- and y-axes, respectively.
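Under the usual pinhole model, this projection from image coordinates (u, v) with depth d into camera coordinates can be sketched as follows; the function name and sample calibration values are assumptions for illustration:

```python
def backproject(u, v, d, cu, cv, fu, fv):
    """Pinhole back-projection of pixel (u, v) at depth d into camera
    coordinates, using the calibration parameters described in the text:
    (cu, cv) principal point, (fu, fv) focal lengths in pixels."""
    x = (u - cu) * d / fu
    y = (v - cv) * d / fv
    z = d
    return x, y, z

# A pixel at the principal point maps onto the optical axis.
print(backproject(u=376, v=240, d=1.0, cu=376, cv=240, fu=800, fv=800))
# -> (0.0, 0.0, 1.0)
```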
4.3 Missing depth management
Given the segmentation method used and the reliance placed on the ability to recover the corresponding depth, some targets were clearly detectable in 2D (u,v), but the depth could not be recovered or considered reliable (see Section 3). An observation cannot be projected onto camera coordinate space without a depth d. Moreover, as mentioned in Section 3.2, the relation between the size of the segmented ellipse and the depth is not reliable enough to directly infer the depth from the size. To this end, we propose the following missing depth management method based on estimation and reprojection.
When the depth was missing, the z estimated by the KF was temporarily used as d for the projection in (14). Consequently, a complete observation could be provided for the KF in the update step. However, in order to avoid degeneration, the number of estimated projections was limited. When the depth information became available again for a later observation, a new estimate of the depth was interpolated (using cubic spline interpolation) over the window during which the depth was missing. Then, because of the change of z, all the observations associated with the KF over that window were reprojected using the new z value. Finally, the trajectory was re-estimated using the new reprojected observations in the KF, starting from the state where d was unavailable. The advantage of this method is that it keeps estimations as close as possible to the available data. Algorithm 1 illustrates the process.
Algorithm 1: Tracking with reprojection mechanism
As mentioned before, a track can be initialized with an unknown depth. In this case, an arbitrary depth is given to the new KF. Then, when an observation with a known depth is finally associated, all the previous observations associated with the KF are reprojected using that known depth.
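The depth re-interpolation step described above can be sketched with SciPy's cubic spline; the frame indices and depth values below are toy assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Depth known before and after a gap; interpolate over the missing window,
# as in the reprojection mechanism of Section 4.3.
t_known = np.array([0, 1, 2, 7, 8, 9])           # frames with measured depth
z_known = np.array([1.0, 1.1, 1.2, 1.7, 1.8, 1.9])
spline = CubicSpline(t_known, z_known)

t_gap = np.arange(3, 7)                          # frames where depth was missing
z_gap = spline(t_gap)                            # interpolated depths
print(np.round(z_gap, 2))                        # smooth fill between 1.2 and 1.7
```

The interpolated z values would then replace the temporary KF estimates, and the associated observations would be reprojected and re-filtered.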
4.4 Multi-target assignment
Our approach is based on GNN, which is detailed in this section. GNN is a widely used method for data association and has the advantage of a relatively low complexity (see Table 4). This section also presents MHT, which is used in Section 5 as a basis for comparison with GNN.
4.4.1 Global Nearest Neighbor
where dC and dN are, respectively, the false alarm and new-track appearance density functions relative to the surveillance space. As an example, dN can be modeled by a map where each position is weighted by its distance from the closest potential target appearance point.
Finally, the associated observations are processed using the associated Kalman filter, and non-associated observations become candidates to initiate a new track. A track is destroyed if it is not associated with an observation three times in succession or if the sum of its historical association costs reaches zero. More details on GNN are given in the literature.
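As a sketch, the global assignment at the heart of GNN can be computed with the Hungarian method (SciPy's linear_sum_assignment); the Euclidean cost and the gate value below are illustrative simplifications of the cost function described above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_assign(predicted, observed, gate=0.5):
    """Globally optimal track-to-observation assignment (GNN).

    Cost = Euclidean distance between predicted track positions and
    observations; pairs beyond the gate are rejected after solving.
    The gate value is an illustrative assumption."""
    cost = np.linalg.norm(predicted[:, None, :] - observed[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

tracks = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])   # predicted positions
obs = np.array([[1.05, 1.0, 1.0], [0.1, 0.0, 1.0]])     # new observations
print(gnn_assign(tracks, obs))   # [(0, 1), (1, 0)]: globally cheapest pairing
```

Unmatched observations would then initiate new tracks, and tracks unmatched three times in succession would be destroyed, as described above.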
4.4.2 Multiple Hypothesis Tracking
In contrast to GNN, which only maintains the single most likely hypothesis, MHT defers the tracking decision in order to consider alternative hypotheses within a limited period of time. The hypotheses are stored in a decision tree, which grows by one level at each step of the tracking. To avoid a combinatorial explosion, a pruning process keeps the tree at a reasonable size. In addition, only a limited number of best hypotheses are considered at each step (see the Murty algorithm). The score of a hypothesis is the sum of the association costs of all tracks belonging to that hypothesis since each track's creation. Concerning the association costs, MHT shares a common base with GNN (see Section 4.4.1). Finally, a fusion mechanism checks for and deletes similar tracks, which are likely to follow the same target. More details on MHT and its implementation are given in the literature.
This section deals with the evaluation of our HIDS segmentation using a dedicated segmentation ground truth and the evaluation of our 3D-GNN tracking using a second dedicated ground truth. Establishing these two ground truths for segmentation and tracking was essential since they correspond to different constraints and needs, which are detailed below.
5.1 Segmentation evaluation
5.1.1 Segmentation ground truth
5.1.2 Segmentation results
Detailed and overall segmentation results are reported in the corresponding tables, including the true negative rate (%) and the false positive rate (%).
5.2 Tracking evaluation
5.2.1 Tracking ground truth
5.2.2 Tracking results
This section compares the tracking results for GNN and MHT under different conditions (2D and 3D). GNN is a particular case of MHT that keeps only the best hypothesis for each step. In this section, P defines the depth of the MHT tree (pruning level), and H is the number of best hypotheses considered at each node of the tree. A track was considered to be well recovered when the associated observations matched at least 90% of the original track (the margin of 10% corresponds to the potential delay for track initialization and destruction). The scenarios were generated with an element of randomness. To ensure our results were relevant, we ran all of the following experiments on 100 distinct scenarios, which provided an acceptable stability for the averages.
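The 90% recovery criterion above can be stated as a one-line check (the function name and the counts are illustrative):

```python
def track_recovered(matched, total, threshold=0.90):
    """A track counts as well recovered when the associated observations
    cover at least `threshold` of the original track; the 10% margin
    absorbs the delay for track initialization and destruction."""
    return matched / total >= threshold

print(track_recovered(matched=45, total=50))   # True: exactly 90% coverage
```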
The details of the basic configuration (when not redefined) used for our evaluation are given below. Miss detection and false alarm rates were set at 11.0% and 28.8%, respectively, which correspond to the most difficult sequence found in the segmentation evaluation (capture 2). A similar distribution was seen with dC, as illustrated in Figure 13. Concerning dN, as the targets do not only appear along the edge of the surveillance volume but also at the beehive entrance and from below the beehive, a uniform distribution was considered. In order to highlight our contributions, we ran the following four experiments.
Experiment 2: A comparison of trackers taking advantage of the third dimension (3D-GNN and 3D-MHTs) with trackers relying only on 2D data (2D-GNN and 2D-MHTs). In this experiment, 2D data were obtained by projecting the 3D data onto a 2D plane. Figure 16 shows the dominance in 2D of MHTs (2D-MHTs) over GNN (2D-GNN), especially with a restricted number of targets. However, it also confirms the importance of the third dimension; 3D-GNN provided better results than complex 2D-MHT trackers. Therefore, in the following experiments, we focused only on 3D-GNN.
In this paper, we have presented a system designed to acquire and track small, fast-moving targets in 3D. Our application for tracking honeybees in natural conditions is subject to many constraints (real-time acquisition, number, size and dynamics of the targets, lighting, and gradual soiling). Accordingly, after a comparison of potential suitable 3D acquisition systems (time of flight and stereo vision), we chose the G3 EV stereo vision camera. The advantage of the G3 EV is that it combines a high resolution of 752×480 pixels with a frame rate of 47 fps, which is sufficiently high to track bees.
Moreover, we propose a complete detect-before-track chain to track the targets in 3D space. To this end, we developed a hybrid 3D segmentation method called hybrid intensity depth segmentation (HIDS). Our HIDS relies on both depth and intensity images and therefore works in completely natural conditions. It outperforms the state-of-the-art methods, which mostly use intensity images only. Furthermore, our HIDS segmentation has the advantage of recovering targets with no recoverable depth, which is essential for maintaining the corresponding tracks. Our segmentation was evaluated against a robust ground truth. The triple annotation process revealed that 4.8% of bees were incorrectly marked due to human error. The evaluation of our segmentation results with respect to the ground truth yielded a missed detection rate of 4.15% and a false alarm rate of 19.54%. The false alarms were mainly located in complex areas such as a crowded flight board.
Each target was associated with a Kalman filter, which was used to estimate the trajectory based on incoming observations. A data association method, either Global Nearest Neighbor or Multiple Hypothesis Tracking, was employed to associate uncertain measurements with known tracks. In addition, a mechanism of temporary reprojection was used for observations with missing depth information. We based our tracking evaluation on a semi-simulated ground truth that relied on annotated trajectories in 3D. As expected with any tracker, efficiency decreased as the number of targets increased. Among our captured sequences, we identified situations with up to 18 targets. Even in these conditions, thanks to our robust HIDS segmentation, GNN provided relatively good results with respect to MHT, especially considering its computational complexity. In addition, the use of the third dimension, which is the strong point of our application, largely compensated for the choice of GNN, which ultimately remains a simple but fast tracker. Moreover, we have shown that when reprojection is not taken into account in the tracking process, the results are much less satisfactory.
Regarding short-term perspectives, the assignment process is constrained by gating. Currently, gating is independent of target location. However, the distance between a target and its predicted position is not uniformly distributed over 3D space: bees arriving at the beehive entrance are less prone to sudden changes of direction or velocity. It would be interesting to add an adapted gating process depending on the location of the target. A stereo camera provides a partial topology of the scene, so the 3D positions of interesting elements (flight board, entrance) could be recovered.
Concerning the longer-term perspectives, biologists are interested in high-level applications such as abnormal behavior detection. Such applications include many parameters and require robust models against which observations can then be compared. In this context, the environmental platform presented in Section 1 offers encouraging perspectives. Thanks to the modules under development (e.g., air quality monitors) and existing modules (e.g., counter, weather monitor), information of a different nature could be compared and used to model behavior. On the one hand, low-level behavior models could focus on individual bee trajectories, for example, a tracker that takes environmental parameters into consideration to adapt motion models for estimation. On the other hand, more general models could focus on colony activity, such as abnormal colony behavior based on some simple rules (e.g., low activity during a sunny day). Requier et al. demonstrated that the composition of agricultural landscapes influences life history traits of honeybee workers. It would be interesting to find a correlation between their observations and the individual or general behaviors detected in bee trajectories.
This work was supported by the European Regional Development Fund (contract: 35053) and the Poitou-Charente region. The videos used in this work were taken during autumn 2012 on the INRA site (National Institute for Agronomy Research) at Magneraud. We would like to thank INRA’s biologists for their support and expertise in helping to collect exploitable data under different conditions (e.g., activity, weather).
- Vidau C, Diogon M, Aufauvre J, Fontbonne R, Viguès B, Brunet J, Texier C, Biron D, Blot N, El Alaoui H, Belzunces LP, Delbac F: Exposure to sublethal doses of fipronil and thiacloprid highly increases mortality of honeybees previously infected by Nosema ceranae. PLoS One 2011, 6(6):e21550. 10.1371/journal.pone.0021550
- Simon N: Un pas en avant pour la protection contre les pesticides [A step forward in protection against pesticides]. Abeille et Cie 2012, 1(149):25-27.
- Blois J: Vidéosurveillance d‘abeilles, comptage d‘entrées/sorties à l‘entrée de la ruche [Video surveillance of bees: counting entries/exits at the hive entrance]. Master's thesis, University of La Rochelle, France; 2011.
- Chauvin R: Sur la mesure de l'activité des Abeilles au trou de vol d'une ruche à dix cadres [On measuring the activity of bees at the flight entrance of a ten-frame hive]. Insectes Sociaux 1976, 23:75-81. 10.1007/BF02283906
- Struye M, Mortier H, Arnold G, Miniggio C, Borneck R: Microprocessor-controlled monitoring of honeybee flight activity at the hive entrance. Apidologie 1994, 25(4):384-395. 10.1051/apido:19940405
- Streit S, Bock F, Pirk CWW, Tautz J: Automatic life-long monitoring of individual insect behaviour now possible. Zoology (Jena) 2003, 106(3):169-171. 10.1078/0944-2006-00113
- Chen C, Yang E, Jiang J, Lin T: An imaging system for monitoring the in-and-out activity of honey bees. Comput Electron Agric 2012, 89:100-109.
- Balch T, Khan Z, Veloso M: Automatically tracking and analyzing the behavior of live insect colonies. In Proceedings of the Fifth International Conference on Autonomous Agents. New York: ACM; 2001:521-528.
- Campbell J, Mummert L, Sukthankar R: Video monitoring of honey bee colonies at the hive entrance. Workshop Vis Observation Anal Anim Insect Behav (ICPR) 2008, 8:1-4.
- Estivill-Castro V, Lattin D, Suraweera F, Vithanage V: Tracking bees - a 3D, outdoor small object environment. In Proceedings of the 2003 International Conference on Image Processing (ICIP 2003). Piscataway: IEEE; 2003:1021-1024.
- Miranda B, Salas J, Vera P: Bumblebees detection and tracking. In Workshop Vis Observation Anal Anim Insect Behav (ICPR 2012). Piscataway: IEEE; 2012.
- Viola P, Jones M: Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1. Piscataway: IEEE; 2001:I-511.
- Kimura T, Ohashi M, Okada R, Ikeno H: A new approach for the simultaneous tracking of multiple honeybees for analysis of hive behavior. Apidologie 2011, 42(5):607-617. 10.1007/s13592-011-0060-6
- Theriault D, Wu Z, Hristov N, Swartz S, Breuer K, Kunz T, Betke M: Reconstruction and analysis of 3D trajectories of Brazilian free-tailed bats in flight. Technical report, CS Department, Boston University; 2010.
- Khan Z, Balch T, Dellaert F: A Rao-Blackwellized particle filter for eigentracking. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), vol. 2. Piscataway: IEEE; 2004:II-980.
- Veeraraghavan A, Chellappa R, Srinivasan M: Shape-and-behavior encoded tracking of bee dances. IEEE Trans Pattern Anal Mach Intell 2008, 30(3):463-476.
- Maitra P, Schneider S, Shin M: Robust bee tracking with adaptive appearance template and geometry-constrained resampling. In 2009 Workshop on Applications of Computer Vision (WACV). Piscataway: IEEE; 2009:1-6.
- Hendriks C, Yu Z, Lecocq A, Bakker T, Locke B, Terenius O: Identifying all individuals in a honeybee hive - progress towards mapping all social interactions. In Workshop Vis Observation Anal Anim Insect Behav (ICPR 2012). Piscataway: IEEE; 2012.
- Khan Z, Balch T, Dellaert F: Efficient particle filter-based tracking of multiple interacting targets using an MRF-based motion model. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), vol. 1. Piscataway: IEEE; 2003:254-259.
- Ristic B, Arulampalam S, Gordon N: Beyond the Kalman Filter: Particle Filters for Tracking Applications. Boston, London: Artech House; 2004.
- Feldman A, Balch T: Representing honey bee behavior for recognition using human trainable models. Adaptive Behav 2004, 12(3-4):241-250.
- Nummiaro K, Koller-Meier E, Svoboda T, Roth D, Van Gool L: Color-based object tracking in multi-camera environments. Lecture Notes Comput Sci 2003, 2781:591-599. 10.1007/978-3-540-45243-0_75
- Piatti D: Time-of-flight cameras: test, calibration and multi-frame registration for automatic 3D object reconstruction. PhD thesis, Politecnico di Torino, Italy; 2011.
- Parks D, Fels S: Evaluation of background subtraction algorithms with post-processing. In IEEE Fifth International Conference on Advanced Video and Signal Based Surveillance (AVSS 2008). Piscataway: IEEE; 2008:192-199.
- Kalman R: A new approach to linear filtering and prediction problems. J Basic Eng 1960, 82:35-45. 10.1115/1.3662552
- Blackman S, Popoli R: Design and Analysis of Modern Tracking Systems. Norwood: Artech House; 1999.
- Julier S, Uhlmann J: Unscented filtering and nonlinear estimation. Proc IEEE 2004, 92(3):401-422. 10.1109/JPROC.2003.823141
- Rasmussen C, Hager G: Probabilistic data association methods for tracking complex visual objects. IEEE Trans Pattern Anal Mach Intell 2001, 23(6):560-576. 10.1109/34.927458
- De Boor C: A Practical Guide to Splines. Heidelberg: Springer; 1978.
- Reid D: An algorithm for tracking multiple targets. IEEE Trans Automatic Control 1979, 24(6):843-854. 10.1109/TAC.1979.1102177
- Requier F, Brun F, Aupinel P, Henry M, Odoux JF, Bretagnolle V, Decourtye A: The composition of agricultural landscapes influences life history traits of honeybee workers. In European Conference on Behavioural Biology (ECBB VI). Essen, Germany; 2012.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.