# Error Evaluation in a Stereovision-Based 3D Reconstruction System

Abdelkrim Belhaoua^{1}, Sophie Kohler^{2} (corresponding author) and Ernest Hirsch^{1}

**2010**:539836

https://doi.org/10.1155/2010/539836

© Abdelkrim Belhaoua et al. 2010

**Received: **1 December 2009

**Accepted: **29 June 2010

**Published: **20 July 2010

## Abstract

The work presented in this paper deals with the performance analysis of the whole 3D reconstruction process of imaged objects, specifically of the set of geometric primitives describing their outline, extracted from a pair of images knowing their associated camera models. The proposed analysis focuses on error estimation for the edge detection process, the starting step of the whole reconstruction procedure. The fitting parameters describing the geometric features composing the workpiece to be evaluated are used as quality measures to determine error bounds and, finally, to estimate the edge detection errors. These error estimates are then propagated up to the final 3D reconstruction step. The suggested error analysis procedure for stereovision-based reconstruction tasks further allows evaluating the quality of the 3D reconstruction. Lastly, the resulting final error estimates make it possible to state whether the reconstruction results fulfill *a priori* defined criteria, for example, dimensional constraints including tolerance information, as required in vision-based quality control applications.

## 1. Introduction

Quality control is the process applied to ensure a given level of quality for a product, especially in the automotive industry sector. Implementing such control at all stages of production, from design to manufacturing, is indispensable to guarantee a high level of quality, and often requires high measurement accuracy in order to avoid loss of both manufacturing time and material. Thus, when making use of vision-based approaches, it is necessary to develop fully automated tools for the accurate computation of 3D descriptions of the object of interest from the acquired image contents. The latter can then be compared, taking tolerance information into account, with either ground truth or the CAD model of the object under investigation, in order to evaluate its quality quantitatively. An autonomous cognitive vision system is currently being developed for the optimal quantitative 3D reconstruction of manufactured parts, based on *a priori* planning of the task. As a result, the system is built around a cognitive intelligent sensory system using so-called situation graph trees as a planning/control tool [1]. The planning system has been successfully applied to structured light and stereovision-based 3D reconstruction tasks [2–4], with the aim of fostering the development of an automated quality control system for manufactured parts that evaluates their geometry quantitatively. This requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation (i.e., extraction of geometric features and their 3D reconstruction), and the comparison with a reference model (e.g., the CAD model of the object) in order to evaluate the object quantitatively, including means for adjusting/correcting online either data acquisition or processing.

Stereovision is a direct approach for obtaining three-dimensional information from two images acquired from two different points of view. The approach, when efficiently implemented, makes it possible to achieve reconstruction, and consequently dimensional measurements, with rather high accuracy. Thus, the method is widely used in many applications, ranging from 3D object recognition to machine inspection. In this last case, the outline of the object can be determined, based on the set of geometric contour-based features defining this outline. As a result, such a contour-based approach provides good (i.e., reasonable) reconstruction quality at low cost. However, in order to assess the quality of the results, specifically when high levels of accuracy are required, it is mandatory to carry out a thorough analysis of the errors occurring in the system. More precisely, the presence of noise in the acquired data affects the accuracy of the subsequent image processing steps and, thus, of the whole reconstruction process. 3D reconstruction errors depend on the quality of the acquired images (resulting from the environment, i.e., the specific acquisition conditions used) as well as on their processing (e.g., segmentation, feature extraction, matching, and reconstruction).

The work presented in this paper deals with the performance analysis of the 3D reconstruction process of a set of geometric primitives (in our case, lines and arcs of, or full, ellipses) from a pair of images knowing their associated camera models. The suggested analysis focuses firstly on the estimation of the error affecting the edge detection process, the starting processing step of the whole reconstruction procedure. Using fitting techniques to describe the geometric elements, and assuming that the noise is independent and uniformly distributed in the images, error bounds are established for each geometric feature that composes the outline of the object to be evaluated. The error bounds are then propagated through the following processing steps, up to the final 3D reconstruction step. Although specifically devised here to enable error analysis for stereovision-based reconstruction tasks, the procedure can be straightforwardly extended to other types of image-based features. This will then help to evaluate the quality of the 3D reconstructions obtained with various imaging techniques, as illustrated in this paper by stereovision-based results. Lastly, the resulting final reconstruction error estimates make it possible to state whether the reconstruction results fulfill *a priori* defined criteria including tolerance information (e.g., dimensions with their corresponding tolerances).

## 2. Related Work

A literature survey shows that, in recent years, some effort has been devoted to error analysis in stereovision-based computer vision systems; see, for example, [5–12]. As an illustrative example, using a stereoscopic camera setup, Blostein and Huang [5] have investigated the accuracy of 3D positional information obtained with triangulation techniques using point correspondences. In particular, they have been able to derive closed-form expressions for the probability distributions of the position error along each direction (horizontal, vertical, and range) of the coordinate system associated with the stereo rig. With the same aim, a study of various types of error and of their effect on 3D reconstruction results obtained using a structured light technique has been presented by Yang and Wang in [6]. In their work, expressions have been derived for the errors observed in the 3D surface position, orientation, and curvature measurements. Similarly, Ramakrishna and Vaidvanathan [7] have proposed a new approach for estimating tight bounds on measurement errors, considering, as a starting point, the inaccuracies observed during calibration and the triangulation reconstruction step. Further, Balasuramanian et al. describe in [9] an analysis of the effect of noise (assumed to be independent and uniformly distributed) and of the geometry of the imaging setup on the reconstruction error for a straight line. Their analysis, however, relies mainly on experimental results based on simulation studies. Also, Rivera-Rios et al. [10] have analyzed the error when evaluating line entities dimensionally, the errors being assumed to be mostly due to localization errors in the image planes of the stereo setup. Consequently, in order to determine optimal camera poses, a nonlinear optimization problem has been formulated that minimizes the total MSE (Mean Square Error) for the line to be measured, while satisfying sensor-related constraints.
Lastly, the accuracy of 3D reconstruction results has been evaluated, based on a comparison with ground truth, in contributions presented by Park and Subbarao [11] and Albouy et al. [12]. More recently, Jianxi et al. [13] have presented an error analysis approach for 3D reconstruction problems taking only the accuracy of the camera calibration parameters into account. They note that the largest error appears in the Z-direction when computing 3D coordinates. Furthermore, their contribution shows that the 3D error is smaller if one uses more calibration points (i.e., if the calibration is more accurate).

In our case, the error analysis focuses on the estimation of error bounds for the results output by the edge detection process, which is considered to be the starting step of the whole reconstruction procedure (and which indirectly includes the effect of the error sources related to the acquisition system). This analysis is thus important for evaluating the quality of the final 3D reconstruction result, as it helps to estimate the associated error bounds.

## 3. Stereovision-Based Reconstruction Setup

- (i) calibration of both cameras and of their relation,
- (ii) segmentation of the stereo pair of images (extraction of edge points, determination of contour point lists),
- (iii) segmentation and classification of the edge point lists, in order to define geometric features,
- (iv) contour matching based on so-called epipolar constraint matrices [2],
- (v) Euclidean reconstruction using the calibration parameters.

To address this problem, we have chosen to analyze the placement of lighting sources moving on a virtual geodesic sphere containing the scene to be imaged, in order to determine positions that allow acquisition of adequate images (i.e., well contrasted, with minimized illumination artifacts) and, after detection of the required edges, reaching the desired accuracy for the subsequent 3D reconstruction step [14]. This enables acquiring best-quality images, with high contrast and minimum variance, the latter being deduced from the fitting parameters.

## 4. Common Error Sources Affecting 3D Reconstruction

Stereovision-based 3D reconstruction for inspection tasks is sensitive to several types of error. Since uncertainty is a critical issue when evaluating the quality of a 3D reconstruction, the estimation/assessment of errors in the final reconstruction result has to be carried out carefully. In our case, the errors in obtaining the desired 3D information are estimated using expressions defined in terms of either system parameters or the characteristics of the error sources likely to be observed. In addition to this quantitative determination of errors, other likely error sources affecting both system modeling and calibration are also considered. An overview of potential error sources is given in [15]. Similarly, in [16], taking the specificities of our application into account, we were able to define three major types of error sources: camera model inaccuracies, the resulting camera calibration errors, and image processing errors arising when extracting the contour data. The 3D reconstruction results can also be affected by correspondence errors occurring during the matching process. Indeed, imprecise disparities due to noise in the images and to calibration errors (through the fundamental matrix) can be observed. As a consequence, during the standard triangulation step (i.e., the reconstruction process), these errors in disparity can sometimes be magnified when applying the projection matrices (also defined using calibration parameters).

## 5. Error Evaluation of the Edge Detection Step

The extraction of edge information is the first fundamental processing step of the whole 3D reconstruction procedure. There are many ways to perform edge detection. In our case, the detection of edge or contour points relies on well-known gradient-based methods, which convolve the image with first derivatives of a Gaussian smoothing kernel in order to find and locate the edge points. After evaluation of the processed images, a selection criterion is then applied to decide whether or not a pixel belongs to an edge. Edge-thinning algorithms are applied in some cases, in order to improve the results, specifically when the thickness of the primarily extracted edges is considered too large.

One of the aims of our contributions is to develop a fully automated system (i.e., with very limited human control). Accordingly, the parameters that control the edge detection process (e.g., the width of the Gaussian smoothing kernel or threshold parameter values) are determined automatically. For example, the parameter value is determined using a measure of the amount of camera noise observed in the image and of the fine-scale texture of specific object surfaces seen in the image. In the current implementation of our approach, a fixed value ( ) is used as a starting value. Following contour point extraction, chains of contour points are determined to form possibly closed contours. Each contour point list is then further subdivided into subchains defining either straight-line segments or elliptical arcs, using the method described in [17]. These geometric features build the basis of the outline describing the imaged objects, as will be exemplified later on our test workpieces.

However, these simple geometric features do not necessarily contain only true edge points. This leads to uncertainties in the set of edge point positions belonging to a given geometric feature, which are further reflected in the parameter values of the fitting equation used to describe the data. Despite this observation, the line segment and curve descriptions can be determined by applying more or less standard fitting techniques to the edge points supposed to lie on these lines or curves. For that purpose, linear or nonlinear least-squares fitting techniques are widely used. These minimize predefined figures of merit relying on the sum of squared errors. When using a geometric fitting approach, also known as the "best fitting" technique, the error distance is defined by the orthogonal, or shortest, distance of a given point to the geometric feature to be fitted. Applying geometric fitting to a line segment is a linear problem, whereas its application to ellipses is a nonlinear one, which is usually solved iteratively. Gander et al. [18] have proposed a geometric ellipse fitting algorithm in parametric form, which involves a large number of fitting parameters (n + 5 unknowns in the set of equations to be solved, where n is the number of measurement points), each used measurement point carrying an individual angular parameter to be estimated simultaneously, together with the five ellipse parameters.

In a related contribution, Yi et al. [19] discussed, for line segments, the relationship between the random perturbations of edge point positions and the variance of the least-squares estimates of the corresponding line parameters. Assuming that the noise is independent and uniformly distributed, they propose an optimized fitting technique for lines. As the outline of our test workpiece is composed of simple geometric features such as lines and elliptical arcs, one can determine descriptors of these lines or arcs using fitting techniques similar to the ones described above, in order to obtain the parameters of these features. In our work, we have extended Yi's results to analyze the edge detection error for both straight line segments and elliptical arcs. The parameters of the fitted geometric features are finally used as a quality measure to estimate the 3D reconstruction error of the reconstruction procedure, carried out after having matched corresponding geometric descriptors.

### 5.1. Error Analysis for Line Segments

The standard equations for the parameters (slope and intercept) of a line that best describes a set of (x, y) data pairs, when all of the measurement error is assumed to belong to the y-axis (i.e., the x values are assumed to be error-free), are well known and easily derived. In this case, one can apply a standard least-squares method, which involves minimizing the sum of squared differences between the fitted line and the data points in the y-direction. However, for contour points extracted from an image, it is more reasonable to assume that uncertainties are observed in both the x- and y-axis directions. In this case, application of a fitting procedure is somewhat more complex. Several methods have been published and discussed, as in [20–27]. For example, in [27], an algorithm is developed which copes with the problem of fitting a straight line (with the restriction that vertical lines are avoided) to data with uncertainties in both coordinates. The problem has been reduced by the authors to a one-dimensional search for the minimum. Expressions for the fitting parameters (variances and covariance matrix) are derived, which are further used for the estimation of uncertainties.

Equation (1), the implicit line form ax + by + c = 0, represents all geometric lines in the plane, including verticals (b = 0) and horizontals (a = 0). Also, this relation ensures finiteness of the moments of the estimates of the parameters and helps to secure numerical stability when applying the fitting procedure. The fitting line is then determined by minimizing orthogonal distances. An additional constraint (e.g., as in our case, a^2 + b^2 = 1) is imposed in order to ensure uniqueness of the solution, as suggested in [28] for so-called linear equality-constrained least squares (LSE) fitting methods.

To estimate the parameters a, b, and c, we have used the singular value decomposition (SVD) approach. Errors are quantified by the perpendicular distances of the edge pixels to the fitted line. These errors are interpreted as representing the error of the edge detection process in the case of straight lines.
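The SVD-based orthogonal fit and the perpendicular-distance error measure can be sketched as follows. This is a minimal illustration of the general technique under the constraint a^2 + b^2 = 1; the function names and test data are ours:

```python
import numpy as np

def fit_line_tls(pts):
    """Orthogonal (total least squares) fit of a*x + b*y + c = 0,
    with the constraint a^2 + b^2 = 1, via SVD of the centered data."""
    centroid = pts.mean(axis=0)
    # the right singular vector with smallest singular value is the
    # unit normal (a, b) of the best-fitting line
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c

def perpendicular_distances(pts, a, b, c):
    # since a^2 + b^2 = 1, |a*x + b*y + c| is the orthogonal distance
    return np.abs(pts @ np.array([a, b]) + c)

pts = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])  # y = x + 1
a, b, c = fit_line_tls(pts)
d = perpendicular_distances(pts, a, b, c)
```

For noisy edge points, the distances `d` play the role of the edge detection error estimates described above.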

### 5.2. Error Analysis for Arcs of Ellipses

For computer-vision-based applications such as inspection tasks, ellipses or arcs of ellipses are a major type of geometric primitive. They are the images (perspective projections) of circles, which on workpieces can be associated with holes. Due to the importance of this kind of feature, a large set of fitting methods has been suggested, as, for example, in [18, 29–31], to compute ellipse equations from data point sets. In particular, Gander et al. [18] propose a geometric ellipse fitting algorithm in parametric form. Their method achieves a good compromise between accuracy of the results and required computing time.

where x and y are the vectors describing the set of points to be fitted, (x_c, y_c) is the center of the ellipse, a and b are, respectively, the semimajor and the semiminor ellipse axes (assuming a ≥ b), and θ is the angle between the x-axis and the major axis of the ellipse.

Even though the ellipse could also have been described by the canonical (implicit conic) equation, the parametric description given by (2) is preferred, as it leads to more efficient implementations of the fitting procedure.

where φ_1, …, φ_n are the uniformly distributed unknown angles associated with the set of n points to be fitted.

Here also, the sum of orthogonal distances is interpreted as representing the error for the edge detection process in the case of ellipses.
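The orthogonal distance of a point to a parametric ellipse has no simple closed form; a minimal way to evaluate it is a search over the angular parameter. The sketch below uses a dense grid for simplicity (a Newton or Gauss-Newton refinement would normally replace it in a real implementation); the ellipse parameters are illustrative:

```python
import numpy as np

def ellipse_point(phi, xc, yc, a, b, theta):
    # point(s) on the ellipse in the parametric form discussed above
    ct, st = np.cos(theta), np.sin(theta)
    x = xc + a * np.cos(phi) * ct - b * np.sin(phi) * st
    y = yc + a * np.cos(phi) * st + b * np.sin(phi) * ct
    return np.stack([x, y], axis=-1)

def orthogonal_distance(p, xc, yc, a, b, theta, n_grid=3600):
    # brute-force minimization over the angular parameter phi
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    pts = ellipse_point(phi, xc, yc, a, b, theta)
    return float(np.min(np.linalg.norm(pts - p, axis=-1)))

# point on the major axis of an axis-aligned ellipse with a = 2, b = 1;
# the nearest ellipse point is the vertex (2, 0), so the distance is 1
d = orthogonal_distance(np.array([3.0, 0.0]), 0.0, 0.0, 2.0, 1.0, 0.0)
```

Summing such distances over all edge points of a contour yields the ellipse counterpart of the line error measure.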

### 5.3. 2D Error Propagation

The error estimates, based on the distances computed as described in the previous section, are then propagated up to the 3D reconstruction step.

Stereovision refers to the ability to infer information on the 3D structure and distance of a scene from two images taken from different viewpoints, assuming that the relationship between the 3D objective world and the acquired 2D images (see Figure 5), namely the projection matrix, is known. Two main problems have to be solved: finding the correspondences and performing the reconstruction itself. Determining the correspondences amounts to finding which parts of the left and right images are projections of the same scene element, whereas the reconstruction problem determines the 3D structure from the correspondences, using additional information such as the calibration parameters if a metric reconstruction is desired.

where s is a scale factor, (u_0, v_0) is the principal point, α_u and α_v are the focal lengths (expressed in pixels), θ is the skew angle, and R and t are the extrinsic parameters describing the attitude of the camera in the scene with respect to a reference coordinate system. R is a rotation matrix, which relates the camera coordinate axes with those of the reference coordinate system. t is the translation in the X, Y, and Z directions representing the camera center in the reference coordinate system [32]. For stereovision, the same reference coordinate system is used for both views of the stereo couple.

where m and M are the homogeneous coordinates of the corresponding image and spatial vectors, and P is a 3 × 4 matrix, called the perspective projection matrix, representing the collineation: s·m = P·M.
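The construction of the projection matrix P = K[R | t] and the projection of a homogeneous world point can be sketched as follows. All numeric camera parameters are illustrative assumptions (zero skew, unit rotation), not the calibration values of the actual setup:

```python
import numpy as np

# intrinsic matrix K: alpha_u, alpha_v are focal lengths in pixels,
# (u0, v0) is the principal point; skew is assumed zero for simplicity
alpha_u, alpha_v, u0, v0 = 800.0, 800.0, 320.0, 240.0
K = np.array([[alpha_u, 0.0, u0],
              [0.0, alpha_v, v0],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                       # camera aligned with the world axes
t = np.array([0.0, 0.0, 5.0])       # world origin 5 units in front

P = K @ np.hstack([R, t[:, None]])  # 3x4 perspective projection matrix

M = np.array([1.0, 0.5, 0.0, 1.0])  # homogeneous world point
m = P @ M                           # s*m = P*M
u, v = m[:2] / m[2]                 # pixel coordinates after removing scale s
```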

The quantities in (8) are further propagated through the 3D reconstruction process using (7), leading to the 3D measurements. Calibration and correspondence errors occurring during the calibration and the matching processes have not yet been considered.

## 6. Results

Firstly, in order to simplify the error analysis, the experimental images are acquired under good illumination conditions, so as to obtain images of good quality. Figure 3(a) shows an example of a pair of images of the object acquired, respectively, with the left and the right camera of the stereo rig.

The first image processing step in our system provides the desired edge detection results. For each line or curve of the object, the corresponding contour points have first been extracted using Canny's edge operator and then classified either as lines or ellipses. Figure 3(b) shows the results of the edge detection and classification steps, after segmentation of the lists of contour points. The same color is used in the two pictures to show the geometric features that have been associated, that is, matched. These pairs of features are the input data for the matching and the reconstruction step.

In a second step, we have then applied the fitting techniques described in Sections 5.1 and 5.2 to the edge point lists supposed to belong to either a line or a curve. However, as noted above, edge point positions are always affected by uncertainty, due to, for example, the image digitization process, the various noise sources in the system, and the nonideal behavior of the image processing steps. In [16], the encountered error sources are assumed to be independent and identically distributed. The resulting errors are statistically expressed as "confidence intervals." Such an interval is computed for each pixel belonging to the projection of either a line or an ellipse on the image plane. In our case, 95% confidence intervals are calculated so as to include at least 95% of the points predicted to belong to the associated geometric feature. Here, we compute the perpendicular distances of the contour points defining a given geometric feature to the corresponding geometric feature described by the parameters resulting from the fitting procedure. These distances are believed to be more significant and reliable, specifically when taking the target applications (e.g., quality control) into account.
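A bound covering at least 95% of the perpendicular distances can be read directly off their empirical distribution. The sketch below uses simulated distances; the noise level (0.3 px) is an illustrative assumption, not a measured value:

```python
import numpy as np

rng = np.random.default_rng(0)
# simulated perpendicular distances of edge pixels to their fitted
# feature (half-normal, standard deviation 0.3 px -- an assumption)
distances = np.abs(rng.normal(0.0, 0.3, size=500))

# empirical 95% bound on the edge localization error
bound = float(np.quantile(distances, 0.95))
covered = float(np.mean(distances <= bound))
```

By construction, at least 95% of the points of the feature lie within `bound` of the fitted geometry, which is the coverage property required of the confidence intervals above.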

Solving the reconstruction problem involves two main steps. Firstly, the correspondence problem has to be addressed and, secondly, the depth has to be estimated. In this work, establishing correspondences (i.e., defining pairs of corresponding points belonging to matched geometric primitives) relies on matching functions based on the so-called epipolar constraint [2]. For that purpose, a dedicated similarity criterion is computed, which is based on the number of intersections of the set of epipolar lines with a given contour of each image. As a result, for each image, so-called matching matrices are obtained. The matrix *Match* (*right, left*) (resp., *Match* (*left, right*)) contains all possible matches between right and left primitives (resp., between left and right primitives). Finally, matching can be finalized by searching for corresponding maxima in the two matrices, as indicated in the algorithm given below, which summarizes the whole process (Algorithm 1).

**Algorithm 1:** Matching procedure.

- (1) Estimation of the epipolar geometry
- (2) Determination of the candidate matches
  - (a) *For each left primitive, computation of the number of intersections of epipolar lines with right primitives. The result is the matrix Match (left, right).*
  - (b) *For each right primitive, computation of the number of intersections of epipolar lines with left primitives in the other image. The result is the matrix Match (right, left).*
- (3) Finalization/validation of the matches
  - (c) *Computation of the similarity criterion between the two matrices: searching for corresponding maxima in the two matrices.*
  - (d) *Elimination of the matched contours according to step (c) in the two matrices.*
  - (e) *Repeat steps (a) to (d) until all contours have been processed.*

Estimating the 3D coordinates from each matched point pair, leading to the desired 3D reconstruction, is routinely and easily achieved by applying a simple procedure known as triangulation.

If both the intrinsic and extrinsic parameters of the camera setup are known, the resulting reconstruction is called Euclidean, that is, the reconstruction is metric.
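A standard linear (DLT) triangulation is one common way to implement this step; the sketch below is such an implementation, with camera parameters and a test point that are illustrative assumptions rather than the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Linear (DLT) triangulation of one matched point pair.
    P1, P2: 3x4 projection matrices; m1, m2: pixel coordinates (u, v)."""
    A = np.vstack([m1[0] * P1[2] - P1[0],
                   m1[1] * P1[2] - P1[1],
                   m2[0] * P2[2] - P2[0],
                   m2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                     # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]            # Euclidean 3D coordinates

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 0.2 baseline

X_true = np.array([0.1, -0.05, 2.0])
m1 = P1 @ np.append(X_true, 1.0); m1 = m1[:2] / m1[2]
m2 = P2 @ np.append(X_true, 1.0); m2 = m2[:2] / m2[2]
X = triangulate(P1, P2, m1, m2)    # recovers X_true for noise-free input
```

Since both intrinsic and extrinsic parameters enter the projection matrices, the recovered point is Euclidean (metric), as stated above.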

As a prerequisite for applications such as inspection or quality control tasks, estimation of the error affecting the 3D reconstruction is essential in order to be able to evaluate quantitatively the dimensions of the object or to state if the reconstruction results fulfill tolerance rules.

Such an aligned configuration could be obtained by physically rotating the cameras from their original positions; as this is rarely practical, an alternative is needed, namely rectifying the stereo pair of images. As a result, matched image points corresponding to point features in a rectified image pair lie on the same horizontal scanline and differ only in horizontal displacement. This horizontal displacement, or disparity, between rectified feature points is directly related to the depth of the 3D feature point. Further, the rectification process does not incur a high computational cost.
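For a rectified pair, the disparity-depth relation is Z = f·B/d, and its first-order sensitivity shows why range errors grow quadratically with depth. All numeric values below are illustrative assumptions:

```python
# depth from disparity in a rectified stereo pair: Z = f * B / d
f = 800.0                      # focal length in pixels (assumption)
B = 0.2                        # baseline in metres (assumption)
d = 80.0                       # disparity u_left - u_right in pixels
Z = f * B / d                  # depth of the 3D feature point

# first-order sensitivity: dZ/dd = -f*B/d^2 = -Z^2/(f*B), so a
# half-pixel disparity error yields a depth error of about
delta_Z = (Z**2 / (f * B)) * 0.5
```

The Z^2/(f·B) factor makes explicit that, for a fixed disparity error, depth uncertainty increases with the square of the distance and decreases with longer baselines.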

Work is currently being devoted to enhancing the overall performance of the matching procedure. As could be expected (the behavior being routinely observed for stereovision-based reconstruction procedures), the error in the Z direction (i.e., range) is much higher than that in the X and Y directions (corresponding approximately to the directions of the coordinate axes of the image plane). This behavior is induced by the vergence of the cameras. Indeed, controlling the vergence of the optical axes of the cameras is a basic tool for obtaining good results when applying a matching algorithm, because the disparity values related to the object of interest can be reduced. Knowledge of the vergence also allows measuring the distance of the so-called fixation point (defined as the intersection point between the focal axes and the surface of the object of interest in the scene). This measure is obtained by carrying out a triangulation involving the focal axes (i.e., their intersection). Knowing this distance enables controlling the vergence by calculating the distance information of the object of interest. One can also use disparity information related to the object, according to its position in the left and right images [33].

Lastly, as a final test, we have also propagated the 2D errors observed for edge points through to the 3D computation step, in order to estimate the final 3D error for single points. Figure 8 shows the obtained 3D error distributions in the X, Y, and Z directions after propagation of the 2D errors. Again, we can observe that the error in the Z direction is much higher than in the other two directions.
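The dominance of the range (Z) error can be reproduced with a small Monte Carlo sketch for a rectified stereo geometry, propagating an assumed 2D localization error through the closed-form depth equations. All parameter values are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
f, B = 800.0, 0.2                  # focal length (px), baseline (m)
Z_true = 2.0                       # true depth of the point (m)
u, v = 100.0, 50.0                 # left-image coordinates of the point
d_true = f * B / Z_true            # true disparity (80 px)

sigma = 0.5                        # assumed 2D edge localization error (px)
n = 20000
du1 = rng.normal(0.0, sigma, n)    # noise on the left u coordinate
du2 = rng.normal(0.0, sigma, n)    # noise on the right u coordinate
dv = rng.normal(0.0, sigma, n)     # noise on the v coordinate

d = d_true + du1 - du2             # noisy disparity
Z = f * B / d                      # propagated depth
X = Z * (u + du1) / f              # propagated lateral coordinates
Y = Z * (v + dv) / f

err = [float(np.std(X)), float(np.std(Y)), float(np.std(Z))]
```

With these values, the standard deviation of the Z estimate exceeds those of X and Y by roughly an order of magnitude, qualitatively matching the distributions of Figure 8.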

## 7. Conclusions and Outlook

In this paper, we have introduced a method for analyzing and evaluating the uncertainties encountered during vision-based reconstruction tasks. Specifically, the uncertainties have been estimated from the feature extraction step up to the triangulation-based 3D reconstruction embedded within a cognitive stereovision system. The reconstruction process is sensitive to various types of error sources, as indicated in [16]. In this paper, we have mainly considered the errors introduced by the image segmentation procedure, their estimates being propagated through the whole reconstruction procedure.

Approximating the contour point lists describing the piece under evaluation using fitting results, and assuming further that the various errors, specifically those related to image processing, are independent and identically distributed, we have estimated error bounds for the edge detection process. These errors are defined as being equal to the orthogonal distances of the contour pixels to the geometric feature described by the corresponding fitting parameters. They are computed for each geometric feature being part of the object of interest. Using a dedicated error propagation scheme, the estimates have also been propagated through the whole processing chain set up for analyzing the images acquired with our stereovision system.

Since the resulting 3D reconstruction depends on image quality, the experimental images are usually acquired under good illumination conditions, in order to obtain images of sufficient quality using, for example, the approach described in [14]. Controlling the acquisition conditions allows minimizing the 3D measurement error. The experimental results presented here validate the error estimation technique and show that reconstructions of good quality, with reasonable accuracy, can be computed automatically. In comparison with previously obtained results, the measurements presented above, obtained with our suggested error propagation scheme, are of much higher precision and lead to reduced 3D reconstruction errors.

Generalization of the error propagation scheme is currently in progress in order to take into account all the other sources of errors, more specifically the matching errors.


## References

1. Khemmar R, Lallement A, Hirsch E: Steps towards an intelligent self-reasoning system for the automated vision-based evaluation of manufactured parts. *Proceedings of the Workshop on Applications of Computer Vision, in conjunction with the European Conference on Computer Vision (ECCV '06)*, May 2006, Graz, Austria, 136–143.
2. Far BA, Kohler S, Hirsch E: 3D reconstruction of manufactured parts using bi-directional stereovision-based contour matching and comparison of real and synthetic images. *Proceedings of the 9th IAPR Conference on Machine Vision Applications*, May 2005, Tsukuba Science City, Japan, 456–459.
3. Kohler S, Far BA, Hirsch E: Dynamic (re)planning of 3D automated reconstruction using situation graph trees and illumination adjustment. *Quality Control by Artificial Vision*, May 2007, Le Creusot, France, *Proceedings of SPIE* **6356**.
4. Khemmar R, Lallement A, Hirsch E: Design of an intelligent self-reasoning system for the automated vision-based evaluation of manufactured parts. *Proceedings of the 7th International Conference on Quality Control by Artificial Vision (QCAV '05)*, May 2005, Nagoya, Japan, 241–246.
5. Blostein SD, Huang TS: Error analysis in stereo determination of 3D point positions. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 1987, **9**(6):752–765.
6. Yang Z, Wang Y-F: Error analysis of 3D shape construction from structured lighting. *Pattern Recognition* 1996, **29**(2):189–206. doi:10.1016/0031-3203(95)00076-3
7. Ramakrishna RS, Vaidvanathan B: Error analysis in stereo vision. *Proceedings of the Asian Conference on Computer Vision (ACCV '98)*, 1998, **1351**:296–304.
8. Kamberova G, Bajcsy R: *Precision of 3D Points Reconstructed from Stereo*. Technical report, Department of Computer & Information Science; 1997.
9. Balasuramanian R, Sukhendu D, Swaminathan K: Error analysis in reconstruction of a line in 3D from two arbitrary perspective views. *International Journal of Computer Vision and Mathematics* 2000, **78**:191–212.
10. Rivera-Rios AH, Shih F-L, Marefat M: Stereo camera pose determination with error reduction and tolerance satisfaction for dimensional measurements. *Proceedings of the IEEE International Conference on Robotics and Automation*, 2005, Barcelona, Spain, 423–428.
11. Park S-Y, Subbarao M: A multiview 3D modeling system based on stereo vision techniques. *Machine Vision and Applications* 2005, **16**(3):148–156. doi:10.1007/s00138-004-0165-2
12. Albouy B, Koenig E, Treuillet S, Lucas Y: Accurate 3D structure measurements from two uncalibrated views. *Advanced Concepts for Intelligent Vision Systems* 2006, **4179**:1111–1121. doi:10.1007/11864349_101
13. Jianxi Y, Jianting L, Zhendong S: Calibrating method and systematic error analysis on binocular 3D position system. *Proceedings of the 6th International Conference on Automation and Logistics*, September 2008, Qingdao, China, 2310–2314.
14. Belhaoua A, Kohler S, Hirsch E: Determination of optimal lighting position in view of 3D reconstruction error minimization. *Proceedings of the 10th European Congress of the International Society for Stereology (ISS '09)*, June 2009, Bologna, Italy, 408–414.
15. Egnal G, Mintz M, Wildes RP: A stereo confidence metric using single view imagery with comparison to five alternative approaches. *Image and Vision Computing* 2004, **22**(12):943–957. doi:10.1016/j.imavis.2004.03.018
16. Belhaoua A, Kohler S, Hirsch E: Estimation of 3D reconstruction errors in a stereo-vision system. In *Modeling Aspects in Optical Metrology II*, June 2009, Munich, Germany, *Proceedings of SPIE* **7390**:1–10.
17. Daul C: *Construction et utilisation de liste de primitives en vue d'une analyse dimensionnelle de pièce à géométrie simple* [Construction and use of lists of primitives for the dimensional analysis of parts with simple geometry], Ph.D. thesis. Université Louis Pasteur, Strasbourg, France; 1989.
18. Gander W, Golub GH, Strebel R: Least-squares fitting of circles and ellipses. *BIT Numerical Mathematics* 1994, **34**(4):558–578. doi:10.1007/BF01934268
19. Yi S, Haralick RM, Shapiro LG: Error propagation in machine vision. *Machine Vision and Applications* 1994, **7**(2):93–114. doi:10.1007/BF01215805
20. York D: Least-squares fitting of a straight line. *Canadian Journal of Physics* 1966, **44**:1079–1086. doi:10.1139/p66-090
21. Lybanon M: A better least-squares method when both variables have uncertainties. *American Journal of Physics* 1984, **52**:22–26. doi:10.1119/1.13822
22. Reed BC: Linear least-squares fits with errors in both coordinates. *American Journal of Physics* 1989, **58**:642–646.
23. Gonzalez AG, Marquez A, Sanz JF: An iterative algorithm for consistent and unbiased estimation of linear regression parameters when there are errors in both the x and y variables. *Computers and Chemistry* 1992, **16**(1):25–27. doi:10.1016/0097-8485(92)85004-I
24. Macdonald JR, Thompson WJ: Least-squares fitting when both variables contain errors: pitfalls and possibilities. *American Journal of Physics* 1992, **60**:66–73. doi:10.1119/1.17046
25. Reed BC: Linear least-squares fits with errors in both coordinates. II: Comments on parameter variances. *American Journal of Physics* 1992, **60**:59–62. doi:10.1119/1.17044
26. York D, Evensen NM, Martínez ML, De Basabe Delgado J: Unified equations for the slope, intercept, and standard errors of the best straight line. *American Journal of Physics* 2004, **72**(3):367–375. doi:10.1119/1.1632486
27. Krystek M, Anton M: A weighted total least-squares algorithm for fitting a straight line. *Measurement Science and Technology* 2007, **18**(11):3438–3442. doi:10.1088/0957-0233/18/11/025
28. Van Huffel S, Vandewalle J: *The Total Least Squares Problem: Computational Aspects and Analysis*. Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA; 1990.
29. Halir R, Flusser J: Numerically stable direct least squares fitting of ellipses. *Proceedings of the 6th International Conference in Central Europe on Computer Graphics and Visualization (WSCG '98)*, February 1998, Plzeň, Czech Republic, 125–132.
30. Ahn SJ, Rauh W, Recknagel M: Ellipse fitting and parameter assessment of circular object targets for robot vision. *Proceedings of the International Conference on Intelligent Robots and Systems*, October 1999, Kyongju, Korea, **1**:525–530.
31. O'Leary P, Zsombor-Murray P: Direct and specific least-square fitting of hyperbolæ and ellipses. *Journal of Electronic Imaging* 2004, **13**(3):492–503. doi:10.1117/1.1758951
32. Bouguet J-Y: *Visual Methods for Three-Dimensional Modeling*, Ph.D. thesis. California Institute of Technology, Pasadena, Calif, USA; 1999.
33. Kwon K-C, Lim Y-T, Kim N, Song Y-J, Choi Y-S: Vergence control of binocular stereoscopic camera using disparity information. *Journal of the Optical Society of Korea* 2009, **13**(3):379–385. doi:10.3807/JOSK.2009.13.3.379

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.