
Research on 3D measurement model by line structure light vision

Abstract

To address serious radial distortion and the demands of high-precision measurement, a 3D measurement model based on line-structured light vision is studied in this paper. Based on the two-step calibration algorithm, a circular initialization window is proposed to calculate the initial values of the camera parameters for the nonlinear optimization. To reduce the heavy computation of the Steger (STEG) light stripe center extraction algorithm, a method based on the normalized correlation coefficient (NCC) and principal component analysis (PCA) is presented. Finally, the coordinates of the stripe centers are obtained by bilinear interpolation and parabolic fitting. With the structured light plane parameters calibrated from these center coordinates, the height of a block gauge is measured by the 3D measurement model. The experimental results show that the average measurement error is 0.0191 mm and that the stripe extraction speed is improved.

1 Introduction

In recent years, 3D measurement of mechanical parts based on machine vision technology has been studied and developed [1,2,3]. Because of its wide range, high flexibility, and good precision, structured light vision has been widely used in three-dimensional measurement [4,5,6,7]. In 2010, Liu et al. [8] reported a method for measuring shaft diameters by a line-structured laser. The coordinates of the light stripe centers on the shaft were obtained by a novel grayscale barycenter extraction algorithm along the radial direction, and the shaft diameter was then obtained by circle fitting. In 2015, Liu et al. [9] presented a model for measuring shaft diameters using structured light vision. A virtual plane was established perpendicular to the measured shaft axis, the coordinates of the light stripe centers on the shaft were projected onto the virtual plane, and the shaft diameter was measured from the projected coordinates. In 2016, Gong et al. [10] reported a method for measuring the diameter of a train wheel with a line-structured light measurement system. The axle of the wheelset and the wheel tread are measured by structured-light sensors, and the center of the rolling circle is calculated from the axle. Finally, the diameter of the rolling circle is determined by this center and the contact points on the wheel tread. The method was proven to be stable and accurate in both static and dynamic field experiments. According to the shape of the projected feature, structured light can be divided into different types; line-structured light is the type mainly studied in this paper.

Based on line-structured light vision, the measurement procedure mainly includes three parts: camera calibration, calibration of the light plane parameters, and the three-dimensional measurement model. The purpose of camera calibration is to obtain the internal parameters of the camera and the distortion coefficients of the camera lens [11, 12]. For occasions with a large field of view and serious distortion, Zhou [13] proposed a partition calibration model for line-structured light. The method addressed the calibration error caused by nonlinear optimization in the regions far from the optical axis. However, the method takes too long to run, and the calibration plate is complicated to manufacture. Light stripe center extraction is one of the key steps in the whole measurement process. Hessian matrix methods are common detection algorithms owing to their high robustness and accuracy [14], but they need large-scale Gaussian convolution operations and long computation times. Cai [15] proposed a light stripe center extraction algorithm based on principal component analysis. Although the number of Gaussian convolution operations was reduced in the normal-direction calculation, the sub-pixel coordinates of the stripe centers were still acquired by a Taylor expansion in the normal direction, which requires two-dimensional Gaussian convolutions.

In view of the above problems, this paper presents a circular initialization window and discusses the window size used to calculate the initial camera parameters for the nonlinear calibration. To ensure measurement accuracy at a low computation cost, a method for extracting the stripe center is proposed.

This paper is organized as follows: Section 2 proposes an improved two-step camera calibration model. Section 3 describes the light stripe center detection algorithm and the light plane calibration process. Section 4 presents the measurement model based on line-structured light and reports the experimental results used to test the measuring accuracy. Section 5 provides the conclusions.

2 Improved two-step camera calibration model

2.1 Linear calibration mathematical model

In the first step of the two-step calibration algorithm [4], lens distortion is neglected, and the linear calibration model is as shown in Eq. (1).

$$ {Z}_c\left[\begin{array}{c}u\\ {}v\\ {}1\end{array}\right]=\boldsymbol{A}\left[\boldsymbol{R}\kern0.5em \boldsymbol{T}\right]\left[\begin{array}{c}{X}_w\\ {}{Y}_w\\ {}{Z}_w\\ {}1\end{array}\right] $$
(1)

where A is the camera intrinsic matrix, \( \boldsymbol{R}=\left[\begin{array}{ccc}{r}_{11}& {r}_{12}& {r}_{13}\\ {}{r}_{21}& {r}_{22}& {r}_{23}\\ {}{r}_{31}& {r}_{32}& {r}_{33}\end{array}\right] \) is the external rotation matrix, \( \boldsymbol{T}=\left[\begin{array}{c}{T}_x\\ {}{T}_y\\ {}{T}_z\end{array}\right] \) is the external translation vector, Zc is the depth factor, (u, v) are the pixel coordinates of a feature point on the calibration plate, and (Xw, Yw, Zw) are the world coordinates of the feature point.
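
For illustration, here is a minimal numpy sketch of the pinhole projection in Eq. (1); the intrinsic and extrinsic values are made-up placeholders, not the calibrated parameters of this system:

```python
import numpy as np

# Hypothetical intrinsic matrix A (focal lengths and principal point are illustrative only).
A = np.array([[1500.0,    0.0, 646.0],
              [   0.0, 1500.0, 482.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical extrinsics: a small rotation about the Y axis and a translation T.
theta = np.deg2rad(5.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([10.0, 5.0, 300.0])

def project(Xw):
    """Project a world point (Xw, Yw, Zw) to pixel coordinates (u, v) by Eq. (1)."""
    p = A @ (R @ np.asarray(Xw, dtype=float) + T)  # Zc * [u, v, 1]^T
    return p[:2] / p[2]                            # divide out the depth factor Zc

print(project([0.0, 0.0, 0.0]))  # pixel coordinates of the world origin
```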

The two-step calibration algorithm requires a reasonable initial value. Because lens distortion is most severe far from the center of the lens, a window centered on the image center is designed for calculating the initial value, and the homography matrix computed from the corner points inside this window is used as the initial value of the nonlinear optimization. Because of image noise, the inconsistency between the scales of different variables, and the degree of distortion, the size and shape of the initialization window must be chosen reasonably.

In this paper, a rectangular window and a circular window are designed, and the coordinates of the corner points extracted inside each window are used to calculate the initial value of the homography matrix. The rectangular window is shown in Fig. 1, and the corner coordinates satisfy the following relationship:

$$ \frac{\mathrm{Width}}{2}-\mathrm{Threshold}\le X\left(i,j\right)\le \frac{\mathrm{Width}}{2}+\mathrm{Threshold} $$
(2)
$$ \frac{\mathrm{Height}}{2}-\mathrm{Threshold}\le Y\left(i,j\right)\le \frac{\mathrm{Height}}{2}+\mathrm{Threshold} $$
(3)
Fig. 1 Rectangular window corner point extraction

where Threshold is the window size threshold, given in pixels; its value is related to the image resolution.

The circular window is shown in Fig. 2, and the corner coordinates satisfy the following relationship:

$$ {\left(X\left(i,j\right)-\frac{W}{2}\right)}^2+{\left(Y\left(i,j\right)-\frac{H}{2}\right)}^2\le {\left(\mathrm{Threshold}\right)}^2 $$
(4)
Fig. 2 Round window corner point extraction

where W is the image width and H is the image height. Threshold is the window size threshold, determined by the image resolution.
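
As a concrete illustration, the following sketch selects the corner points that satisfy Eq. (4) and uses only those points to compute the initial homography; `corners_px`, `corners_world`, and the window radius `threshold` are assumed inputs (detected image corners, their planar world coordinates, and the Threshold above):

```python
import numpy as np
import cv2

def initial_homography(corners_px, corners_world, img_w, img_h, threshold):
    """Estimate the initial homography from corners inside the circular window of Eq. (4).

    corners_px:    (N, 2) pixel coordinates of detected corners
    corners_world: (N, 2) planar world coordinates (Zw = 0) of the same corners
    threshold:     circular window radius in pixels
    """
    cx, cy = img_w / 2.0, img_h / 2.0
    d2 = (corners_px[:, 0] - cx) ** 2 + (corners_px[:, 1] - cy) ** 2
    inside = d2 <= threshold ** 2            # Eq. (4)
    if inside.sum() < 4:
        raise ValueError("not enough corners inside the circular window")
    # Homography from the low-distortion central region, used as the optimization seed.
    H, _ = cv2.findHomography(corners_world[inside], corners_px[inside])
    return H
```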

Given the nature of lens distortion, the rectangular window achieves a certain accuracy when the field of view is small. However, compared with the rectangular optimization window, the circular optimization window better matches the lens distortion law. To illustrate this point, the paper uses the calibrated distortion coefficients to simulate the lens distortion model; the simulation results are shown in Fig. 3.

Fig. 3 Distortion simulations. a Total distortion simulation. b Radial distortion simulation. c Tangential distortion simulation

The distortion coefficients used in the simulation are calculated from the calibration results of 17 calibration images. In Fig. 3, “×” represents the center of the image and “o” represents the calculated center position; the starting point of each arrow represents the ideal point, the end point represents the distorted point, and the direction of the arrow represents the distortion direction. As the simulation results show, the contours of the lens distortion are approximately circular for both radial and tangential distortion. Moreover, radial distortion is the most serious component, and the degree of distortion is relatively small in the approximately circular area around the center of the image.

In order to study the effect of the initialization window size on the calibration accuracy, circular initialization windows of different radii, all centered on the image center, are designed, as shown in Fig. 4. The radius of the circular search window is 50 pixels in the initial state and is incremented in steps of 12.5 pixels. Then, the average residual between all real points and back-projected points is calculated after the first optimization iteration, as shown in Eq. (5).

$$ {\overline{\delta}}_m=\frac{1}{N}\sum \limits_{i,j}{\left\Vert {M}_{ij}-{\tilde{M}}_{ij}\right\Vert}_2 $$
(5)
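
A small sketch of Eq. (5), assuming `world_pts` and `backproj_pts` are matched (N, 2) arrays of the true corner positions and their back-projections on the world plane after the first optimization iteration:

```python
import numpy as np

def mean_residual(world_pts, backproj_pts):
    """Average residual of Eq. (5): mean Euclidean distance between the real
    corner positions M_ij and their back-projections, in millimeters."""
    world_pts = np.asarray(world_pts, dtype=float)
    backproj_pts = np.asarray(backproj_pts, dtype=float)
    return np.linalg.norm(world_pts - backproj_pts, axis=1).mean()
```
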
Fig. 4 Circular search region of the initial values

The average residuals corresponding to different windows are shown in Fig. 5. The abscissa indicates the radius of the search window in pixels, and the ordinate indicates the average residual in millimeters.

Fig. 5 Optimization residual for different search domains

As shown in Fig. 5, when the radius is less than 100 pixels, the optimization residual is large. The reason is that although the distortion of the points involved is small, their number is small, so they provide too few constraints. When the radius is larger than 100 pixels, the calibration residual is stable, and further increasing the window size does not bring a significant improvement in accuracy. Choosing an appropriate initialization window not only improves the calibration accuracy but also saves computation time.

According to the results of this section, the circular extraction window is chosen to calculate the initial camera parameters in the linear step, and the optimal window radius in this paper is 100 pixels.

2.2 Nonlinear calibration model

In precision measurement, the effects of lens distortion, which mainly include radial distortion and tangential distortion [16, 17], should be considered. The polynomial model is a commonly used distortion model, but no analytical solution can be obtained from it. In order to calculate the distortion coefficients and more accurate camera parameters, the above parameters need to be optimized nonlinearly. The distortion model is shown in Eq. (6), and the nonlinear optimization objectives are given in Eqs. (7) and (8).

$$ \left[\begin{array}{cccc}{x}_d{r}^2& {x}_d{r}^4& 2{x}_d{y}_d& \left({y}_d^2+3{x}_d^2\right)\\ {}{y}_d{r}^2& {y}_d{r}^4& \left({x}_d^2+3{y}_d^2\right)& 2{x}_d{y}_d\end{array}\right]\left[\begin{array}{c}{k}_1\\ {}{k}_2\\ {}{p}_1\\ {}{p}_2\end{array}\right]=\left[\begin{array}{c}{x}_u-{x}_d\\ {}{y}_u-{y}_d\end{array}\right] $$
(6)

where (xu, yu) are the ideal image coordinates, (xd, yd) are the actual (distorted) image coordinates, and r2 = xd2 + yd2. k1 and k2 are the radial distortion coefficients, and p1 and p2 are the tangential distortion coefficients. A nonlinear objective function can be established by minimizing the distance between the calculated coordinates of the corner points on the calibration board and their actual coordinates. For the nonlinear solution, Zhang [11] used maximum likelihood estimation with the objective defined on the image coordinate plane, that is, the optimization objective equation:

$$ \sum \limits_{i=1}^n\sum \limits_{j=1}^m{\left\Vert {M}_{ij}-\tilde{M}\left(\boldsymbol{A},\boldsymbol{k},\boldsymbol{R},\boldsymbol{T},\boldsymbol{M}\right)\right\Vert}^2 $$
(7)

where i indexes the images, and j indexes the corner points.
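
Returning to the distortion model, Eq. (6) contributes two linear rows per corner point in the unknowns k1, k2, p1, and p2, so the coefficients can be estimated by ordinary least squares. A minimal sketch, assuming `xu, yu, xd, yd` are matched arrays of ideal and distorted image coordinates:

```python
import numpy as np

def solve_distortion(xu, yu, xd, yd):
    """Least-squares solution of the linear system in Eq. (6) for k1, k2, p1, p2."""
    xu, yu, xd, yd = (np.asarray(a, dtype=float) for a in (xu, yu, xd, yd))
    r2 = xd ** 2 + yd ** 2
    rows, rhs = [], []
    for i in range(len(xd)):
        # First row of Eq. (6): residual in x.
        rows.append([xd[i] * r2[i], xd[i] * r2[i] ** 2,
                     2 * xd[i] * yd[i], yd[i] ** 2 + 3 * xd[i] ** 2])
        # Second row of Eq. (6): residual in y.
        rows.append([yd[i] * r2[i], yd[i] * r2[i] ** 2,
                     xd[i] ** 2 + 3 * yd[i] ** 2, 2 * xd[i] * yd[i]])
        rhs.extend([xu[i] - xd[i], yu[i] - yd[i]])
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    k1, k2, p1, p2 = coeffs
    return k1, k2, p1, p2
```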

Due to corner extraction errors and errors caused by distortion, the camera calibration accuracy obtained by solving Eqs. (6) and (7) is, to a certain extent, random. In order to improve the calibration accuracy, the camera calibration parameters are optimized on the world coordinate plane in this paper; the physical coordinate accuracy of the calibration plate used in this paper reaches the 1 μm level on the world coordinate plane. The extracted image coordinates are back-projected onto the world coordinate plane, and maximum likelihood estimation is performed on the parameters. The objective function can be expressed as:

$$ \sum \limits_{i=1}^n\sum \limits_{j=1}^m{\left\Vert {M}_{ij}-\tilde{M}\left(\boldsymbol{A},\boldsymbol{k},\boldsymbol{R},\boldsymbol{T},\boldsymbol{m}\right)\right\Vert}^2 $$
(8)

where n is the number of images and m is the number of corner points per image; the bold m denotes the sub-pixel image coordinates of the corners, Mij are the world coordinates, and \( {\tilde{M}}_{ij} \) is calculated by back-projecting the image coordinates through the camera model. The nonlinear function can be solved using the Levenberg-Marquardt algorithm [18]. Figure 6 shows the improved optimization based on the back-projection error.
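
A sketch of the nonlinear refinement using the Levenberg-Marquardt solver in SciPy; `residuals` is a placeholder callable (assumed to be supplied by the caller) that back-projects the measured corner coordinates with the current parameter vector and returns the stacked differences from the known world coordinates, as in Eq. (8):

```python
import numpy as np
from scipy.optimize import least_squares

def refine_parameters(params0, residuals):
    """Minimize the back-projection error of Eq. (8) with Levenberg-Marquardt.

    params0:   initial parameter vector (intrinsics, distortion coefficients,
               and per-image extrinsics), e.g. from the linear two-step solution
    residuals: callable(params) -> 1D array of (M_ij - M~_ij) components on
               the world coordinate plane
    """
    result = least_squares(residuals, params0, method="lm")  # Levenberg-Marquardt
    return result.x
```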

Fig. 6 Projection measurement error

3 Method: calibration of structured light plane parameters

3.1 Coordinates extraction of structured light strip

The extraction accuracy of the light stripe center coordinates determines the calibration quality of the light plane parameters [19]. In the traditional Steger (STEG) algorithm, the normal direction is first obtained from the eigenvector corresponding to the largest eigenvalue of the Hessian matrix at each pixel, and then the sub-pixel coordinates of the light stripe centers are obtained from the derivatives of the Taylor expansion of the gray values in the normal direction [17]. However, a single pixel requires five Gaussian convolutions in each calculation. For example, the camera resolution in this paper is 1292 × 964, so the computation for a single image can be expressed as:

$$ N=5\times {n}^2\times 1292\times 964 $$
(9)

where N is the number of operations and n is the size of the Gaussian template used for the convolution; n depends on the width of the light stripe and is 11 in this paper.

In order to reduce the complexity of the light stripe extraction algorithm, a new method for extracting the light stripe center is presented in this paper. Firstly, the light stripe is located in the image by the normalized correlation coefficient (NCC), defined as:

$$ \mathrm{NCC}=\frac{1}{W\times H}\sum \limits_{\left(u,v\right)\in T}\frac{\left[I\left(r+u,c+v\right)-\overline{I}\right]\left[T\left(u,v\right)-\overline{T}\right]}{\sigma_I{\sigma}_T} $$
(10)

where \( \overline{I} \) is the average gray value of the image pixels in the window, \( \overline{T} \) is the average gray value of the light stripe template, σI is the standard deviation of the gray values in the image window, and σT is the standard deviation of the template window. If NCC is 1, the image window is perfectly correlated with the template. Figure 7 shows the matching process.
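
As a sketch of this matching step, OpenCV's normalized correlation matcher can stand in for Eq. (10): `TM_CCOEFF_NORMED` also subtracts the window and template means and normalizes by their standard deviations; the stripe template image is an assumed input:

```python
import cv2

def locate_stripe(image, template, min_score=0.8):
    """Find the light stripe region by normalized correlation, cf. Eq. (10)."""
    # TM_CCOEFF_NORMED computes a zero-mean normalized cross-correlation score map.
    score_map = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(score_map)
    if max_score < min_score:
        return None                       # no sufficiently correlated region
    x, y = max_loc                        # top-left corner of the matched window
    h, w = template.shape[:2]
    return (x, y, w, h), max_score
```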

Fig. 7 The light stripe matching process. a Coplanar target. b Light stripe template. c Matched light stripe image

In order to improve the calculation and template matching speed, the improved algorithm introduces the image pyramid. The process mainly consists of two parts. The first step is downsampling: by subsampling the original image, images of different resolutions are obtained from bottom to top, and the number of pyramid layers is determined by the sizes of the original image and the top-level image, as shown in Eq. (11). The second step is Gaussian filtering applied at each sampling level; the size of the filter kernel is determined by Eq. (12).

$$ \mathrm{num}={\log}_2\left\{\min \left(M,N\right)\right\}-{\log}_2\left\{\min \left({m}^{\prime },{n}^{\prime}\right)\right\} $$
(11)
$$ \mathrm{Kernel}\approx \left(6\sigma +1\right)\times \left(6\sigma +1\right) $$
(12)

When template matching is performed, the template instance matched at the top level is mapped to the next layer down, and the corresponding coordinates are updated accordingly. Once the target area has been determined, stripe center extraction is performed within it.
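
A minimal coarse-to-fine sketch under these assumptions: the number of layers follows Eq. (11) with `top_size` playing the role of min(m′, n′), `cv2.pyrDown` performs the Gaussian smoothing of Eq. (12) together with the 2× downsampling, matching is done at the top level, and the matched location is mapped back down by doubling the coordinates at each layer:

```python
import cv2
import numpy as np

def pyramid_match(image, template, top_size=64):
    """Coarse-to-fine template matching on a Gaussian image pyramid."""
    # Number of layers per Eq. (11): stop when the smaller image side reaches top_size.
    num = max(int(np.log2(min(image.shape[:2])) - np.log2(top_size)), 0)

    img_pyr, tpl_pyr = [image], [template]
    for _ in range(num):
        img_pyr.append(cv2.pyrDown(img_pyr[-1]))   # Gaussian blur + 2x downsampling
        tpl_pyr.append(cv2.pyrDown(tpl_pyr[-1]))

    # Match at the coarsest level, then map the location back to full resolution.
    score = cv2.matchTemplate(img_pyr[-1], tpl_pyr[-1], cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(score)
    for _ in range(num):
        x, y = 2 * x, 2 * y                        # coordinates double at each finer layer
    return x, y
```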

The normal direction of the light stripe is obtained by principal component analysis of the image sample data. First, the image of the light stripe is preprocessed by threshold segmentation: traversing all pixels of the light stripe, if the gray values of a pixel's eight neighbors exceed the specified threshold, the gray value of that pixel is set to 0. The initial centers of the light stripe are then obtained by the gray-gravity (gray centroid) method. Second, the gradient vectors of the image are computed in a local window around each initial center point. The covariance matrix is built from these gradient vectors, and the eigenvector corresponding to the largest eigenvalue of the covariance matrix gives the normal direction. The covariance matrix is given by Eq. (13).

$$ \boldsymbol{H}={\left[\begin{array}{cc}\operatorname{cov}\left({G}_x,{G}_x\right)& \operatorname{cov}\left({G}_x,{G}_y\right)\\ {}\operatorname{cov}\left({G}_x,{G}_y\right)& \operatorname{cov}\left({G}_y,{G}_y\right)\end{array}\right]}_{\Omega} $$
(13)

Because the local window is approximately symmetric about the initial point, the following relationship holds:

$$ \left\{\begin{array}{c}E\left({G}_x\right)\approx 0\\ {}E\left({G}_y\right)\approx 0\end{array}\right. $$
(14)

Then, Eq. (13) can be simplified to:

$$ \boldsymbol{H}=\left[\begin{array}{cc}{G}_x^2& {G}_x{G}_y\\ {}{G}_x{G}_y& {G}_y^2\end{array}\right] $$
(15)

The eigenvector corresponding to the largest eigenvalue of matrix H is the normal vector N (nx, ny) of its corresponding point. The gradient window is shown in Fig. 8.

Fig. 8 Gradient window. a The local gradient image window. b A single gradient window
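
A sketch of the normal-direction estimate of Eqs. (13)-(15): Sobel gradients are collected in a local window around an initial center point (the window half-size is an assumed parameter), the 2 × 2 matrix H is accumulated, and the eigenvector of its largest eigenvalue gives the normal:

```python
import cv2
import numpy as np

def stripe_normal(image, row0, col0, half=5):
    """Normal direction of the light stripe at an initial center point (row0, col0),
    from the principal component of the local gradient distribution, Eqs. (13)-(15)."""
    img = image.astype(np.float32)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient G_x
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient G_y

    win_x = gx[row0 - half:row0 + half + 1, col0 - half:col0 + half + 1].ravel()
    win_y = gy[row0 - half:row0 + half + 1, col0 - half:col0 + half + 1].ravel()

    # Eq. (15): with (approximately) zero-mean gradients the covariance reduces to
    # the second-moment matrix of (G_x, G_y) over the window.
    H = np.array([[np.sum(win_x * win_x), np.sum(win_x * win_y)],
                  [np.sum(win_x * win_y), np.sum(win_y * win_y)]])
    eigvals, eigvecs = np.linalg.eigh(H)
    normal = eigvecs[:, np.argmax(eigvals)]          # eigenvector of the largest eigenvalue
    return normal / np.linalg.norm(normal)           # unit normal (n_x, n_y)
```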

Assuming that the coordinates of the initial center are (X0, Y0) = (Column_0, Row_0), the pixel coordinates of its two neighbors along the normal direction are obtained from Eq. (16).

$$ \left\{\begin{array}{c}\left({X}_1,{Y}_1\right)=\left(\mathrm{Column}\_0+\frac{n_y}{\sqrt{n_x^2+{n}_y^2}},\kern0.5em \mathrm{Row}\_0+\frac{n_x}{\sqrt{n_x^2+{n}_y^2}}\right)\\ {}\left({X}_2,{Y}_2\right)=\left(\mathrm{Column}\_0-\frac{n_y}{\sqrt{n_x^2+{n}_y^2}},\kern0.5em \mathrm{Row}\_0-\frac{n_x}{\sqrt{n_x^2+{n}_y^2}}\right)\end{array}\right. $$
(16)

F(X0), F(X1), and F(X2) represent the gray values at these sub-pixel positions and are acquired by bilinear interpolation. A parabola is fitted through (X0, F(X0)), (X1, F(X1)), and (X2, F(X2)); the position where the first derivative of the fitted quadratic equals 0 gives the sub-pixel coordinates of the stripe center.
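
A sketch of this refinement, assuming the normal is given as a unit vector of (column, row) offsets and the two neighbors lie one pixel away along it, as in Eq. (16): the three gray values are sampled by bilinear interpolation, and the vertex of the fitted parabola gives the sub-pixel offset:

```python
import numpy as np

def bilinear(image, x, y):
    """Bilinearly interpolate the gray value at sub-pixel column x, row y."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    p = image[y0:y0 + 2, x0:x0 + 2].astype(float)
    return ((1 - dx) * (1 - dy) * p[0, 0] + dx * (1 - dy) * p[0, 1]
            + (1 - dx) * dy * p[1, 0] + dx * dy * p[1, 1])

def subpixel_center(image, col0, row0, normal):
    """Refine the stripe center along the unit normal by a 3-point parabolic fit."""
    nx, ny = normal                                        # (column, row) offsets
    f0 = bilinear(image, col0, row0)                       # F(X0)
    f1 = bilinear(image, col0 + nx, row0 + ny)             # F(X1), one pixel along +normal
    f2 = bilinear(image, col0 - nx, row0 - ny)             # F(X2), one pixel along -normal
    denom = f1 - 2.0 * f0 + f2
    t = 0.0 if abs(denom) < 1e-12 else (f2 - f1) / (2.0 * denom)  # parabola vertex offset
    return col0 + t * nx, row0 + t * ny                    # sub-pixel (column, row)
```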

In order to show that the normal direction obtained by the presented algorithm is consistent with the normal direction obtained by the STEG algorithm, the paper compares the normal vector trends obtained by the two algorithms for both vertical and non-vertical stripes. The results, shown in Figs. 9 and 10, demonstrate that the normal directions obtained by the two methods are consistent.

Fig. 9 Image of the plane target and the normal vector of center points (non-vertical state). a The tilted light stripe. b The normal vector

Fig. 10 Image of the plane target and the normal vector of center points (approximately vertical state). a The approximately vertical stripe. b The normal vector

3.2 Calculation of structured light plane

The light plane equation is calculated using a free-moving coplanar target, as shown in Fig. 10a. In order to obtain the coordinates of the light stripe centers on the target plane, the calibration plate is coplanar with the target plane. In structured light plane calibration, the positions of the laser sensor and the camera's field of view are fixed, and the position of the coplanar target is changed six times. Although the position of the target changes, the position of the light stripe is constant relative to the camera view; only the pose of the coplanar target relative to the camera coordinate system changes. During the calibration process, the light stripe region only needs to be extracted once offline, and then the coordinates of the stripe centers can be calculated from the camera parameters, the lens distortion coefficients, and the camera's external parameters with respect to the coplanar target. Figure 11 shows the positions of the light stripe centers.

Fig. 11 Structured-light stripe center 3D coordinates at different positions of the plane target

Let P denote a light stripe center, and let Pij be the jth center point on the intersection line when the coplanar target is at the ith position. The equation of the light plane in the camera coordinate system is set as:

$$ AX+ BY+ CZ-1=0 $$
(17)

The parameters A, B, and C can be obtained by the objective function:

$$ \min \sum \limits_{i=1}^n\sum \limits_{j=1}^k\left\Vert {AX}_{Cj}^i+{BY}_{Cj}^i+{CZ}_{Cj}^i-1\right\Vert $$
(18)

where n is the number of coplanar target positions (n is 6 in this paper), k is the number of light stripe centers on the intersection line when the target is at the ith position, and (XiCj, YiCj, ZiCj) are the camera coordinates of Pij. According to the principle of least squares, the coefficients in Eq. (18) can be solved as:

$$ \left[\begin{array}{c}A\\ {}B\\ {}C\end{array}\right]={\left[\begin{array}{ccc}\sum \limits_{i=1}^n\sum \limits_{j=1}^k{{X^i}_{cj}}^2& \sum \limits_{i=1}^n\sum \limits_{j=1}^k{X^i}_{cj}{Y^i}_{cj}& \sum \limits_{i=1}^n\sum \limits_{j=1}^k{X^i}_{cj}{Z^i}_{cj}\\ {}\sum \limits_{i=1}^n\sum \limits_{j=1}^k{X^i}_{cj}{Y^i}_{cj}& \sum \limits_{i=1}^n\sum \limits_{j=1}^k{{Y^i}_{cj}}^2& \sum \limits_{i=1}^n\sum \limits_{j=1}^k{Y^i}_{cj}{Z^i}_{cj}\\ {}\sum \limits_{i=1}^n\sum \limits_{j=1}^k{X^i}_{cj}{Z^i}_{cj}& \sum \limits_{i=1}^n\sum \limits_{j=1}^k{Y^i}_{cj}{Z^i}_{cj}& \sum \limits_{i=1}^n\sum \limits_{j=1}^k{{Z^i}_{cj}}^2\end{array}\right]}^{-1}\left[\begin{array}{c}\sum \limits_{i=1}^n\sum \limits_{j=1}^k{X^i}_{cj}\\ {}\sum \limits_{i=1}^n\sum \limits_{j=1}^k{Y^i}_{cj}\\ {}\sum \limits_{i=1}^n\sum \limits_{j=1}^k{Z^i}_{cj}\end{array}\right] $$
(19)

Because the camera parameters and the coordinates of the centers contain errors, the coefficients of the light plane obtained from Eq. (19) are used as initial values for optimizing the light plane. The objective function for the optimized plane is established from the distances between the points and the plane:

$$ \min \sum \limits_{i=1}^n\sum \limits_{j=1}^k\frac{\left|{AX}_{Cj}^i+{BY}_{Cj}^i+{CZ}_{Cj}^i-1\right|}{\sqrt{A^2+{B}^2+{C}^2}} $$
(20)
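
A sketch of the plane calibration: solving AX + BY + CZ = 1 over all stripe centers in the least-squares sense is equivalent to the normal equations of Eq. (19), and the result then seeds the point-to-plane distance refinement of Eq. (20); `points` is the assumed (N, 3) array of all Pij in camera coordinates:

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_light_plane(points):
    """Fit the light plane AX + BY + CZ - 1 = 0 to stripe centers in camera coordinates."""
    points = np.asarray(points, dtype=float)        # (N, 3) stacked P_ij over all positions

    # Linear least squares, equivalent to the normal equations of Eq. (19).
    abc0, *_ = np.linalg.lstsq(points, np.ones(len(points)), rcond=None)

    # Nonlinear refinement per Eq. (20): minimize point-to-plane distances.
    def distances(abc):
        a, b, c = abc
        return (points @ abc - 1.0) / np.sqrt(a * a + b * b + c * c)

    return least_squares(distances, abc0).x         # refined (A, B, C)
```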

4 The measurement results and experimental discussions

4.1 Measurement model

The three-dimensional measurement model based on structured light vision is shown in Fig. 12. The measurement system consists of an industrial camera, a single-line structured light source, a work table, etc. In Fig. 12, Ow-XwYwZw is the world coordinate system, OC-XCYCZC is the camera coordinate system, and Ou-xuyu is the image coordinate system. OL represents the location of the line-structured light projector, and Ol-xmymzm is a local coordinate system on the light plane. The equation of the light plane in OC-XCYCZC is obtained from Eq. (20).

Fig. 12 3D measurement model of the single line-structured light mode

In the measurement, the image coordinates of the light stripe centers are first transformed to the local coordinate system, as shown in Eq. (21).

$$ \left\{\begin{array}{c}{X}_w=\frac{r_{11}^{\prime }{x}_u+{r}_{12}^{\prime }{y}_u+{T}_x^{\prime }}{r_{31}^{\prime }{x}_u+{r}_{32}^{\prime }{y}_u+{T}_z^{\prime }}\\ {}{Y}_w=\frac{r_{21}^{\prime }{x}_u+{r}_{22}^{\prime }{y}_u+{T}_y^{\prime }}{r_{31}^{\prime }{x}_u+{r}_{32}^{\prime }{y}_u+{T}_z^{\prime }}\end{array}\right. $$
(21)

where \( \left[\begin{array}{ccc}{r}_{11}^{\prime }& {r}_{12}^{\prime }& {T}_x^{\prime }\\ {}{r}_{21}^{\prime }& {r}_{22}^{\prime }& {T}_y^{\prime }\\ {}{r}_{31}^{\prime }& {r}_{32}^{\prime }& {T}_z^{\prime }\end{array}\right]={\left[\begin{array}{ccc}{r}_{11}& {r}_{12}& {T}_x\\ {}{r}_{21}& {r}_{22}& {T}_y\\ {}{r}_{31}& {r}_{32}& {T}_z\end{array}\right]}^{-1} \).

Then, the local world coordinates are transformed into the camera coordinate system, as shown in Eq. (22).

$$ \left[\begin{array}{c}{X}_c\\ {}{Y}_c\\ {}{Z}_c\\ {}1\end{array}\right]=\left[R,T\right]\left[\begin{array}{c}{X}_w\\ {}{Y}_w\\ {}0\\ {}1\end{array}\right] $$
(22)

The rotation matrix is shown in Eq. (23), and the translation matrix is shown in Eq. (24).

$$ \boldsymbol{R}=\left[\begin{array}{ccc}\cos \beta \cos \gamma & \sin \alpha \sin \beta \cos \gamma -\cos \alpha \sin \gamma & \cos \alpha \sin \beta \cos \gamma +\sin \alpha \sin \gamma \\ {}\cos \beta \sin \gamma & \sin \alpha \sin \beta \sin \gamma +\cos \alpha \cos \gamma & \cos \alpha \sin \beta \sin \gamma -\sin \alpha \cos \gamma \\ {}-\sin \beta & \sin \alpha \cos \beta & \cos \alpha \cos \beta \end{array}\right] $$
(23)
$$ T=\left[\begin{array}{c}0\\ {}0\\ {}\frac{1}{C}\end{array}\right] $$
(24)

where \( \left\{\begin{array}{l}\alpha =\arctan \left(-\frac{B}{A\sin \beta +C\cos \beta}\right)\\ {}\beta =\arctan \left(\frac{A}{C}\right)\\ {}\\ {}\gamma =0\end{array}\right. \).
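
A sketch of assembling the light-plane pose from the calibrated plane coefficients, following Eqs. (23) and (24) and the angle definitions above (arctan2 is used instead of arctan only to keep the quadrants unambiguous):

```python
import numpy as np

def light_plane_pose(A, B, C):
    """Rotation R (Eq. (23)) and translation T (Eq. (24)) of the light-plane frame
    in camera coordinates, from the plane coefficients A, B, C."""
    beta = np.arctan2(A, C)                                # beta = arctan(A / C)
    alpha = np.arctan2(-B, A * np.sin(beta) + C * np.cos(beta))
    gamma = 0.0

    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    R = np.array([[cb * cg, sa * sb * cg - ca * sg, ca * sb * cg + sa * sg],
                  [cb * sg, sa * sb * sg + ca * cg, ca * sb * sg - sa * cg],
                  [-sb,     sa * cb,                ca * cb]])
    T = np.array([0.0, 0.0, 1.0 / C])
    return R, T
```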

4.2 Measurement experiments

Experiments are conducted to assess the utility of the measurement model. The experimental equipment is shown in Fig. 13, and its main parameters are listed in Table 1. The experiment used a set of block gauges. When the nominal height of a block gauge is in the range of 1 to 10 mm, the limit deviation is ± 0.20 μm; when the nominal height is in the range of 10 to 25 mm, the limit deviation is ± 0.30 μm.

Fig. 13 Experimental equipment for measuring block gauge. a Camera calibration. b Measuring block experiment

Table 1 Experimental equipment parameters

The images of the gauges are captured by the camera, as shown in Fig. 14. Firstly, the sub-pixel coordinates of light stripe 1 and light stripe 2 are obtained. The depth information of the two light stripes changes at the turning point shown in Fig. 14. Secondly, the corresponding external parameters of the light plane and the target plane are obtained from the equations of the light plane and the target plane, respectively.

Fig. 14 Gauge measurement

The sub-pixel coordinates of the light stripe centers are obtained by the method presented in this paper, and the depth information of the two light stripes changes at the turning points. From the coordinates of the two sets of stripe centers, two spatial line equations are calculated by straight-line fitting. In order to improve the fitting accuracy, the constraint in Eq. (25) is imposed.

$$ {K}_1={K}_2 $$
(25)

where K1 and K2 are respectively the slopes of the two fitted lines.
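
A sketch of fitting the two sets of stripe centers with the shared slope required by Eq. (25): one slope and two intercepts are solved in a single least-squares system, and the distance between the resulting parallel lines follows directly; `pts1` and `pts2` are assumed (N, 2) arrays of the two stripes' center coordinates on the plane:

```python
import numpy as np

def fit_parallel_lines(pts1, pts2):
    """Fit y = k*x + b1 and y = k*x + b2 with a common slope k (Eq. (25))."""
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    x = np.concatenate([pts1[:, 0], pts2[:, 0]])
    y = np.concatenate([pts1[:, 1], pts2[:, 1]])
    # One column for the shared slope, one indicator column per intercept.
    ind1 = np.concatenate([np.ones(len(pts1)), np.zeros(len(pts2))])
    ind2 = 1.0 - ind1
    M = np.column_stack([x, ind1, ind2])
    (k, b1, b2), *_ = np.linalg.lstsq(M, y, rcond=None)
    # Distance between the two parallel lines y = k*x + b.
    distance = abs(b1 - b2) / np.sqrt(1.0 + k * k)
    return k, b1, b2, distance
```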

According to the light plane equation and the target plane equation, the distance between the two parallel lines on the light plane and the distance between the two parallel lines on the target plane can be obtained, respectively. Based on the triangle relationship shown in Fig. 8, the actual height of the measured object can be obtained; in the experiment, this result is the height of the block gauge. The experimental results are shown in Table 2.

Table 2 Measurement results of gauge blocks (unit: mm)

From the experimental data, the mean value of the error is 0.0191 mm, and the standard deviation of the error is 0.0029 mm.

In order to compare the complexity of the stripe extraction method proposed in this paper with that of the STEG algorithm, the running times of the gauge measurement experiments are recorded separately. The running times are shown in Table 3; the detection speed of the light stripe center is significantly improved.

Table 3 Experimental time of gauge blocks measurement (unit: s)

5 Conclusions

In this paper, a three-dimensional measurement model based on single-line structured light is studied. To improve the calibration accuracy, a circular initialization window is proposed for camera calibration. A method for calculating the normal direction of the light stripe based on principal component analysis is proposed, and its rationality is verified by comparison with the normal direction obtained from the Hessian matrix. To further avoid a large number of two-dimensional Gaussian convolution operations, bilinear interpolation and parabolic fitting along the normal direction are used. Finally, by measuring the height of a standard block gauge, it is shown that the improved measurement algorithm achieves higher accuracy and speed for the measurement of 3D objects.

Abbreviations

3D:

Three-dimensional

NCC:

Normalized correlation coefficient method

PCA:

Principal component analysis

STEG:

Steger light stripe center extraction algorithm

References

  1. Q. Sun, Y. Hou, Q. Tan, C. Li, Shaft diameter measurement using a digital image. Opt. Lasers Eng. 55, 183–188 (2014)

  2. P. D’Angelo, C. Wöhler, Image-based 3D surface reconstruction by combination of photometric, geometric, and real-aperture methods. ISPRS J. Photogramm. Remote Sens. 63(3), 297–321 (2008)

  3. H. Schreier, J.J. Orteu, M.A. Sutton, Image Correlation for Shape, Motion and Deformation Measurements (Springer US, 2009)

  4. S.V.D. Jeught, J.J.J. Dirckx, Real-time structured light profilometry: a review. Opt. Lasers Eng. 87, 18–31 (2016)

  5. C. Guan, L.G. Hassebrook, D.L. Lau, V.G. Yalla, Improved composite-pattern structured-light profilometry by means of postprocessing. Opt. Eng. 47(47), 7203 (2008)

  6. Y. Zhu, A. Li, W. Pan, Discussions on phase-reconstructing algorithms for 3D digitizing structure-light profilometry. Optik-Int. J. Light Electron Opt. 122(2), 162–167 (2011)

  7. X.-y. Su, Q.-c. Zhang, Dynamic 3-D shape measurement method: a review. Opt. Lasers Eng. 48(2), 191–204 (2010)

  8. B. Liu, P. Wang, Y. Zeng, C. Sun, Measuring method for micro-diameter based on structured-light vision technology. Chin. Opt. Lett. 8, 666–669 (2009)

  9. S. Liu, Q. Tan, Y. Zhang, Shaft diameter measurement using structured light vision. Sensors 15(8), 19750–19767 (2015)

  10. Z. Gong, J. Sun, G. Zhang, Dynamic measurement for the diameter of a train wheel based on structured-light vision. Sensors 16(4), 564 (2016)

  11. Z. Zheng-you, A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)

  12. D. Claus, A.W. Fitzgibbon, A rational function lens distortion model for general cameras, in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1 (IEEE Computer Society, 2005), pp. 213–219

  13. Z. Xiao-dong, Z. Ya-chao, T. Qing-chang, et al., New method of cylindricity measurement based on structured light vision technology. J. Jilin Univ. 47(2), 524–529 (2017)

  14. L. Qi, Y. Zhang, X. Zhang, S. Wang, F. Xie, Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger’s algorithm. Opt. Express 21(11), 13442–13449 (2013)

  15. H. Cai, Z. Feng, Z. Huang, Centerline extraction of structured light stripe based on principal component analysis. Chin. J. Lasers 42(3), 270–275 (2015)

  16. J. Heikkila, Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 22, 1066–1076 (2000)

  17. R.Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 3(4), 323–344 (1987)

  18. J.J. Moré, The Levenberg-Marquardt algorithm: implementation and theory, in Numerical Analysis (Springer, Berlin, Heidelberg, 1978), pp. 105–116

  19. D. Fa-jie, L. Feng-mei, Y. Sheng-hua, A new accurate method for the calibration of line structured light sensor. Chin. J. Sci. Instrum. 21(1), 108–110 (2000)


About the authors

Siyuan Liu was born in Shanxi, China, in 1985. He earned his Ph.D. degree in Mechanical Engineering from Jilin University in 2016. He is currently working in College of Mechanical Science and Engineering, Jilin University. His research interests include machine vision, structured light measurement, and intelligent manufacturing.

Yachao Zhang was born in Henan, China, in 1990. He earned his Master’s degree in Mechanical Engineering from Jilin University in 2017. He is currently pursuing a Ph.D. degree in Biomedical Imaging and Guidance Technology at the City University of Hong Kong. His research interests include photoacoustic imaging reconstruction algorithm, high-intensity focused ultrasound (HIFU), and intelligent medical instrumentation development.

Yunhui Zhang is currently a professor and a master’s tutor. She earned her Ph.D. degree in Mechanical Engineering from Jilin University in 2011. She graduated in 1995 and stayed in school to teach at the College of Mechanical Science and Engineering, Jilin University. Her research interests include machine vision and digital image measurement technology.

Tianchi Shao received the B.S. degree in Engineering from Changchun University of Science and Technology, Changchun, China, in 2017. He is currently pursuing a master’s degree in Jilin University. His research interests include image processing and machine vision.

Minqiao Yuan received the B.S. degree in Software Engineering from Jilin University, Changchun, China, in 2015. He is currently pursuing a master’s degree in Jilin University. His research interests include image processing and machine vision.

Funding

This work was supported in part by a grant from the Natural Science Foundation of Jilin Province (No. 2017010121JC), the Advanced Manufacturing Project of Provincial School Construction of Jilin Province (No. SXGJSF2017-2).

Availability of data and materials

The data can be provided by the authors upon request.

Author information


Contributions

All authors take part in the discussion of the work described in this paper. SL wrote the first version of the paper. YcZ did the experiments of the paper. YhZ participated in the design of partially structured light measurement algorithms. TS and MY assisted to participate in the validation experiments that verify the model accuracy and organize the experimental data. SL, YhZ, YcZ, and TS revised the paper in different version of the paper, respectively. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yunhui Zhang.

Ethics declarations

Ethics approval and consent to participate

Approved.

Consent for publication

Approved.

Competing interests

The authors declare that they have no competing interests. All authors have seen the manuscript and approved its submission to the journal. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Liu, S., Zhang, Y., Zhang, Y. et al. Research on 3D measurement model by line structure light vision. J Image Video Proc. 2018, 88 (2018). https://doi.org/10.1186/s13640-018-0330-6

