Research on 3D measurement model by line structure light vision
EURASIP Journal on Image and Video Processing volume 2018, Article number: 88 (2018)
Abstract
For severe radial distortion and high-precision measurement, a 3D measurement model is studied in this paper. Based on the two-step calibration algorithm, a circular initialization window is proposed to calculate the initial values of the camera parameters for nonlinear optimization. To reduce the heavy computation of the Steger (STEG) light stripe center extraction algorithm, a method based on the normalized correlation coefficient (NCC) and principal component analysis (PCA) is presented. Finally, the coordinates of the stripe centers are obtained by bilinear interpolation and parabolic fitting. With the parameters of the structured light plane obtained from these coordinates, the height of a block gauge is measured by the 3D measurement model. The experimental results show that the average measurement error is 0.0191 mm and the stripe extraction speed is improved.
Introduction
In recent years, 3D measurement of mechanical parts has been studied and developed with machine vision technology [1,2,3]. Because of its wide range, high flexibility, and precision, structured light vision has been widely used in three-dimensional measurement [4,5,6,7]. In 2010, Liu et al. [8] reported a method for measuring shaft diameters with a line-structured laser. The coordinates of the light stripe centers on the shaft were obtained by a novel grayscale barycenter extraction algorithm along the radial direction, and the shaft diameter was then obtained by circle fitting. In 2015, Liu et al. [9] presented a model for measuring shaft diameters using structured light vision. A virtual plane was established perpendicular to the measured shaft axis, and the coordinates of the light stripe centers on the shaft were projected onto the virtual plane, where the shaft diameter was measured from the projected coordinates. In 2016, Zheng [10] reported a method for measuring the diameter of a train wheel with a line structured-light measurement system. The axle of the wheelset and the wheel tread are measured by structured-light sensors, and the center of the rolling round is calculated from the axle. Finally, the diameter of the rolling round is determined by the center and the contact points on the wheel tread. The method was proven to be quite stable and accurate by static and dynamic field experiments. According to the shape of the projected feature, structured light can be divided into different types; line-structured light is mainly studied in this paper.
Based on line-structured light vision, the measurement mainly includes three parts: camera calibration, calibration of the light plane parameters, and the three-dimensional measurement model. The purpose of camera calibration is to obtain the internal parameters of the camera and the distortion coefficients of the camera lens [11, 12]. For occasions with a large field of view and serious distortion, Zhou [13] proposed a partition calibration model for line-structured light. The method solved the calibration error caused by nonlinear optimization in regions far from the optical axis. However, the method takes too long to operate, and the calibration plate is complicated to make. Light stripe center extraction is one of the key steps in the whole measurement process. Hessian-matrix methods are common detection algorithms thanks to their high robustness and accuracy [14], but they need large-scale Gaussian convolution operations and a long computation time. Cai [15] proposed a light stripe center extraction algorithm based on principal component analysis. Although the number of Gaussian convolution operations was reduced in the normal direction calculation, the subpixel coordinates of the stripe centers were acquired by a Taylor expansion in the normal direction, which still requires two-dimensional Gaussian convolutions.
In view of the above problems, this paper presents a circular initialization window and discusses the choice of window size for calculating the initial camera parameters in the nonlinear calibration. To ensure the measurement accuracy at a low computation cost, a method of stripe center extraction is proposed.
This report is organized as follows: Section 2 proposes an improved two-step camera calibration model. Section 3 outlines the light stripe center detection algorithm and the light plane calibration process. Section 4 puts forward the measurement model based on line-structured light and reports the experimental results used to test the measuring accuracy. Section 5 provides the study’s conclusions.
Improved two-step camera calibration model
Linear calibration mathematical model
Firstly, lens distortion is neglected in the two-step calibration algorithm [4], and the linear calibration model is as shown in Eq. (1).
where A is the camera internal matrix, \( \boldsymbol{R}=\left[\begin{array}{ccc}{r}_{11}& {r}_{12}& {r}_{13}\\ {}{r}_{21}& {r}_{22}& {r}_{23}\\ {}{r}_{31}& {r}_{32}& {r}_{33}\end{array}\right] \) is the external rotation matrix, \( \boldsymbol{T}=\left[\begin{array}{c}{T}_x\\ {}{T}_y\\ {}{T}_z\end{array}\right] \) is the external translation matrix, Z_{c} is the depth factor, (u, v) are the pixel coordinates of a feature point on the calibration plate, and (X_{w}, Y_{w}, Z_{w}) are the world coordinates of the feature point.
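As a numerical illustration of this linear model, the sketch below projects a world point through assumed intrinsic and extrinsic parameters; the focal lengths, principal point, and pose are invented for the example, not taken from the paper’s calibration results.

```python
import numpy as np

def project(A, R, T, Pw):
    """Linear model of Eq. (1): Z_c [u, v, 1]^T = A (R Pw + T)."""
    Pc = R @ Pw + T              # world -> camera coordinates
    uvw = A @ Pc                 # homogeneous pixel coordinates
    return uvw[:2] / uvw[2]      # divide out the depth factor Z_c

# Illustrative parameters (not the paper's calibration results):
A = np.array([[1000.0, 0.0, 646.0],
              [0.0, 1000.0, 482.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                    # identity rotation for the sketch
T = np.array([0.0, 0.0, 500.0])  # target 500 mm in front of the camera

uv = project(A, R, T, np.array([10.0, -5.0, 0.0]))
```

With the identity rotation, a 10 mm offset in X_w maps to 20 pixels at this depth and focal length, which makes the role of the depth factor Z_c easy to check by hand.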
In the two-step calibration algorithm, it is necessary to select a reasonable initial value. Because the lens distortion is severe far from the center of the lens, the image center is used as the calculation center of a window for computing the initial value, and the homography matrix calculated in the window serves as the initial value of the nonlinear optimization. Because of image noise, the inconsistency between different variable dimensions, and the degree of distortion, the size and shape of the initialization window must be chosen reasonably.
In this paper, a rectangular window and a circular window are designed respectively, and the coordinates of the corner points extracted in the window are calculated for the initial value of the homography matrix. The rectangular window is shown in Fig. 1, and the corner coordinate field satisfies the following relationship:
where Threshold is the size threshold of the window, measured in pixels and related to the image resolution.
The circular window is shown in Fig. 2, and the corner coordinate field satisfies the following relationship:
where W is the image width, and H is the image height. Threshold is the window size threshold and ensured by the image resolution.
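The selection rule above can be sketched as a filter over detected corner coordinates. The image size matches the paper’s 1292 × 964 camera, while the corner positions and threshold below are illustrative.

```python
import numpy as np

def corners_in_circular_window(corners, W, H, threshold):
    """Keep the corners whose distance from the image center (W/2, H/2)
    is at most `threshold` pixels, per the circular-window relation."""
    d2 = (corners[:, 0] - W / 2.0) ** 2 + (corners[:, 1] - H / 2.0) ** 2
    return corners[d2 <= threshold ** 2]

corners = np.array([[646.0, 482.0],    # at the center: kept
                    [700.0, 482.0],    # 54 px from the center: kept
                    [100.0, 100.0]])   # far corner: rejected
kept = corners_in_circular_window(corners, W=1292, H=964, threshold=100)
```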
According to the type of lens distortion, the rectangular window has a certain precision in the case of a small field of view. However, compared with the rectangular optimization window, the circular optimization window better matches the lens distortion law. To illustrate this, the paper uses the distortion coefficients to simulate the lens distortion model, and the simulation results are shown in Fig. 3.
The distortion coefficients used in the simulation are calculated from the calibration results of 17 calibration images. In Fig. 3, “×” represents the center of the image and “o” represents the calculated center position; the starting point of each arrow represents the ideal point, the end point represents the distorted point, and the direction represents the distortion direction. As the simulation shows, the lens distortion pattern is approximately circular under combined radial and tangential distortion. Moreover, the radial component dominates, and the degree of distortion is relatively small in the roughly circular area around the center of the image.
In order to study the effect of the initialization window size on the calibration accuracy, circular initialization windows of different radii are designed, and the centers of these circular windows are the center of the image, as shown in Fig. 4. The radius of the circular search window is 50 pixels in the initial state, and search step size is incremented by 12.5 pixels. Then, the average residual of all real points and back projection points is calculated after the first optimization iteration, as shown in Eq. (5).
The average residuals corresponding to different windows are shown in Fig. 5. The abscissa indicates the range of the search box, and the unit is the pixel. The ordinate indicates the average residual, and the unit is millimeter.
As shown in Fig. 5, when the radius is less than 100 pixels, the optimization residual is large. The reason is that although the distortion at the calculation points is small, the number of points involved in the calculation is small and the resulting constraints are few. When the radius is larger than 100 pixels, the calibration residual is stable, and further increasing the window size does not significantly improve the accuracy. Choosing an appropriate initialization window not only improves the calibration accuracy but also saves computation time.
According to the results of this section, the circular extraction window should be chosen to calculate the initial linear camera parameters, and the optimal window radius in this paper is 100 pixels.
Nonlinear calibration model
In precision measurement, the effects of lens distortion, which mainly include radial distortion and tangential distortion [16, 17], should be considered. The polynomial model is a commonly used distortion model, but an analytical solution cannot be obtained from it. In order to calculate the distortion coefficients and more accurate camera parameters, the above parameters need to be optimized nonlinearly. The distortion model is shown in Eq. (6), and the nonlinear optimization objective is given below.
where (x_{u}, y_{u}) are the ideal image coordinates, and (x_{d}, y_{d}) are the actual image coordinates. k_{1} and k_{2} are the radial distortion coefficients, and p_{1} and p_{2} are the tangential distortion coefficients. A nonlinear function can be established by minimizing the distance between the calculated world coordinates of the corner points on the calibration board and the actual world coordinates. For the nonlinear solution, Zhang [11] used maximum likelihood estimation with the objective evaluated in the image coordinate plane, that is, the optimization objective equation:
where i is the index of the number of images, and j is the index of the position of the corners.
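Equation (6) is not reproduced in this excerpt; the sketch below assumes the standard Brown–Conrady polynomial form with radial coefficients k1, k2 and tangential coefficients p1, p2, which matches the symbols defined above but is only an assumed reconstruction.

```python
def distort(xu, yu, k1, k2, p1, p2):
    """Map ideal image coordinates (xu, yu) to distorted ones (xd, yd),
    assuming the Brown-Conrady polynomial distortion model."""
    r2 = xu ** 2 + yu ** 2                      # squared radius
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2       # radial term
    xd = xu * radial + 2.0 * p1 * xu * yu + p2 * (r2 + 2.0 * xu ** 2)
    yd = yu * radial + p1 * (r2 + 2.0 * yu ** 2) + 2.0 * p2 * xu * yu
    return xd, yd
```

With all four coefficients set to zero the mapping is the identity, which is a quick sanity check on any implementation of the model.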
Due to the error of corner extraction and the error caused by distortion, the camera calibration accuracy obtained by solving Eq. (6) is random to a certain extent. In order to improve the calibration accuracy, the camera calibration parameters are optimized on the world coordinate plane in this paper; the physical coordinate accuracy of the calibration plate used here reaches the 1 μm level in the world coordinate plane. The extracted image coordinates are back-projected to the world coordinate plane, and maximum likelihood estimation is performed on the parameters. The objective function can be expressed as:
where n is the number of images and m is the number of corner points. M_{ij} is the world coordinates of a corner, and \( {\tilde{M}}_{ij} \) is calculated by back projection from the extracted subpixel corner coordinates. The nonlinear function can be solved using the Levenberg-Marquardt algorithm [18]. Figure 6 shows the improved optimization algorithm based on the back projection error.
Method: calibration of structured light plane parameters
Coordinates extraction of structured light strip
The extraction accuracy of the light stripe center coordinates determines the calibration quality of the light plane parameters [19]. In the traditional Steger (STEG) algorithm, the normal direction is first solved using the eigenvector corresponding to the largest eigenvalue of the Hessian matrix at each pixel; then the subpixel coordinates of the light stripe centers are obtained from the derivatives of the Taylor expansion of the grayscale in the normal direction [17]. However, a single pixel requires five Gaussian convolutions in each calculation. For example, the camera resolution is 1292 × 964 in this paper, so the calculation for a single picture can be expressed as:
where N is the number of operations and n is the size of the Gaussian template used for the convolution operation; n depends on the width of the light stripe and is 11 in this paper.
In order to reduce the complexity of the light stripe extraction algorithm, a new method of extracting the light stripe center is presented in this paper. Firstly, the light stripe in the image is located by the normalized correlation coefficient (NCC), which is defined as:
where \( \overline{I} \) is the average gray value of the pixels of the image in the window, \( \overline{T} \) is the average gray value of the light stripe template, σ is the standard deviation of grayscale in the image window, and \( \overline{\sigma} \) is the standard deviation of the template window. If NCC is 1, the image window is highly correlated with the template, and Fig. 7 shows the matching process.
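A minimal sketch of the NCC score for one window position, following the definition above; `window` and `template` are equal-sized gray-level arrays.

```python
import numpy as np

def ncc(window, template):
    """Normalized correlation coefficient between an image window and the
    light stripe template; 1.0 means a perfect (affine) gray-level match."""
    w = window - window.mean()
    t = template - template.mean()
    denom = w.std() * t.std() * w.size
    if denom == 0.0:
        return 0.0                  # flat window or flat template
    return float((w * t).sum() / denom)
```

Because the means and standard deviations are divided out, the score is invariant to linear changes of illumination, which is why a fixed stripe template can be matched across exposure conditions.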
In order to improve the calculation and template matching speed, the improved algorithm introduces the image pyramid. This process mainly consists of two parts. The first step is downsampling: by subsampling the original image, images of different resolutions are obtained from bottom to top, and the number of pyramid layers is determined by the sizes of the original image and the top-layer image, as shown in Eq. (11). Second, Gaussian filtering is applied on the basis of the sampling; the size of the filter kernel is determined by Eq. (12).
When template matching is performed, the template instance matched by the top image is mapped to the next layer, and the corresponding coordinates are simultaneously changed. When the target area is determined, the strip center extraction can be performed in the target area.
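The pyramid construction can be sketched with plain numpy: Gaussian filtering followed by 2× subsampling, repeated until the smaller image side reaches a chosen top-layer size. The layer-count and kernel-size formulas of Eqs. (11) and (12) are not reproduced here, so the stopping rule and σ below are illustrative.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filter using only numpy's 1-D convolution."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img

def build_pyramid(img, top_size=64):
    """Blur then subsample by 2 until the next layer would drop below
    top_size on its shorter side."""
    layers = [img]
    while min(layers[-1].shape) // 2 >= top_size:
        layers.append(gaussian_blur(layers[-1])[::2, ::2])
    return layers

pyr = build_pyramid(np.zeros((512, 640)), top_size=64)
```

Matching starts at the coarse top layer and each candidate is mapped down one layer at a time, so the expensive full-resolution NCC is evaluated only near surviving candidates.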
The normal direction of the light stripe can be obtained by principal component analysis of the image sample data. First, the image of the light stripe is preprocessed by threshold segmentation: traversing all pixels of the light stripe, if the gray values in the eight-neighborhood of a pixel exceed the specified threshold, the grayscale of the pixel is set to 0. The initial centers of the light stripe are then obtained by the gray gravity method. Second, the gradient vectors of the image are built in a local window around each initial center point. The covariance matrix is obtained from these gradient vectors, and the eigenvector corresponding to the largest eigenvalue of the covariance matrix gives the normal direction. The covariance matrix is given by Eq. (13).
Because the local window is approximately symmetric about the initial point, the following relationship holds:
Then, Eq. (13) is further simplified.
The eigenvector corresponding to the largest eigenvalue of matrix H is the normal vector N (n_{x}, n_{y}) of its corresponding point. The gradient window is shown in Fig. 8.
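The PCA normal estimate can be sketched as follows: the gradient covariance matrix H is accumulated over a local window and its dominant eigenvector taken as the normal. The synthetic horizontal stripe used for the check is illustrative.

```python
import numpy as np

def stripe_normal(window):
    """Eigenvector of the largest eigenvalue of the gradient covariance
    matrix: the direction of strongest gray-level variation, i.e. the
    normal across the light stripe."""
    gy, gx = np.gradient(window.astype(float))   # axis 0 = rows (y)
    H = np.array([[(gx * gx).sum(), (gx * gy).sum()],
                  [(gx * gy).sum(), (gy * gy).sum()]])
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, np.argmax(vals)]              # unit vector (n_x, n_y)

# A horizontal stripe: intensity varies only along y, so the estimated
# normal should be (0, +/-1).
y = np.arange(21, dtype=float)
stripe = np.exp(-((y - 10.0) ** 2) / 8.0)[:, None] * np.ones((1, 21))
n = stripe_normal(stripe)
```

Unlike the Hessian approach, only first-order gradients are needed here, which is where the reduction in Gaussian convolutions comes from.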
Assuming that the coordinates of the initial center are (X_{0}, Y_{0}) = (Column_0, Row_0), the pixel coordinates of the two neighboring points along the normal direction can be obtained by Eq. (16).
F(X_{0}), F(X_{1}), and F(X_{2}) represent the gray values at the subpixel positions and are acquired by bilinear interpolation. A parabolic curve is fitted through (X_{0}, F(X_{0})), (X_{1}, F(X_{1})), and (X_{2}, F(X_{2})), and the position where the first derivative of the fitted quadratic is 0 gives the subpixel coordinates of the stripe center.
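These two steps can be sketched together: bilinear interpolation supplies the three gray values along the normal, and the vertex of the fitted parabola gives the subpixel offset. The quadratic intensity profile used in the check is illustrative.

```python
import numpy as np

def bilinear(img, x, y):
    """Gray value at subpixel position (x = column, y = row)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def refine_center(img, x0, y0, nx, ny, step=1.0):
    """Refine the initial center (x0, y0) along the unit normal (nx, ny):
    sample three gray values and move to the parabola vertex."""
    f0 = bilinear(img, x0, y0)
    f1 = bilinear(img, x0 - step * nx, y0 - step * ny)
    f2 = bilinear(img, x0 + step * nx, y0 + step * ny)
    denom = f1 - 2.0 * f0 + f2
    delta = 0.0 if denom == 0.0 else 0.5 * (f1 - f2) / denom
    return x0 + delta * step * nx, y0 + delta * step * ny

# Quadratic profile peaking at row 10.25; the refinement should land there.
rows = np.arange(21, dtype=float)
img = (100.0 - (rows - 10.25) ** 2)[:, None] * np.ones((1, 5))
rx, ry = refine_center(img, 2.0, 10.0, 0.0, 1.0)
```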
To show that the normal direction obtained by the presented algorithm is consistent with that obtained by the STEG algorithm, the paper compares the normal vector trends in the vertical and non-vertical states, obtained by the two algorithms respectively. The results, shown in Figs. 9 and 10, prove that the normal directions obtained by the two methods are consistent.
Calculation of structured light plane
The calculation of the light plane equation uses a free-moving coplanar target, as shown in Fig. 10a. In order to obtain the coordinates of the light stripe centers on the target plane, the calibration plate is coplanar with the target plane. In structured light plane calibration, the positions of the laser sensor and the camera’s field of view are fixed, and the position of the coplanar target is changed six times. Although the position of the target changes, the position of the light stripe is constant relative to the camera view; only the coplanar target moves relative to the camera coordinate system. During the calibration process, the light stripe region needs to be extracted only once in the offline state, and then the coordinates of the light stripe centers can be calculated from the camera parameters, the lens distortion coefficients, and the camera’s external parameters on the coplanar target. Figure 11 shows the positions of the light stripe centers.
Let P denote a light stripe center, so that P_{ij} is the jth point on the intersection line when the coplanar target is in the ith position. The equation of the light plane in the camera coordinate system is set as:
The parameters A, B, and C can be obtained by the objective function:
where n is the number of coplanar target positions (n is 6 in this paper), k is the number of light stripe centers on the intersection line when the target is in the ith position, and (X^{i}_{Cj}, Y^{i}_{Cj}, Z^{i}_{Cj}) are the camera coordinates of P_{ij}. According to the principle of least squares, the coefficients of Eq. (18) can be solved as:
Because the camera parameters and coordinates of the centers have errors, the coefficients of the light plane obtained by Eq. (19) will be used as initial values for optimizing the light plane. The objective function of the optimized plane is established by the distances between the points and the plane:
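Equations (17)–(20) are not reproduced in this excerpt, so the sketch below assumes the non-degenerate plane form z = A·x + B·y + C and solves the least-squares problem for the stripe-center points directly; the synthetic points are illustrative.

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares fit of z = A*x + B*y + C through stripe centers
    expressed in camera coordinates (an assumed parameterization)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.column_stack([x, y, np.ones_like(x)])
    (A, B, C), *_ = np.linalg.lstsq(M, z, rcond=None)
    return A, B, C

# Noise-free synthetic stripe centers on the plane z = 0.3x - 0.8y + 12:
rng = np.random.default_rng(0)
xy = rng.uniform(-50.0, 50.0, size=(200, 2))
z = 0.3 * xy[:, 0] - 0.8 * xy[:, 1] + 12.0
A, B, C = fit_light_plane(np.column_stack([xy, z]))
```

As described above, such a linear solution only initializes the plane; the paper then refines it by minimizing the point-to-plane distances.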
The measurement results and experimental discussions
Measurement model
The three-dimensional measurement model based on structured light vision is shown in Fig. 12; the measurement system consists of an industrial camera, a single-line structured light source, a work table, etc. In Fig. 12, O_{w}X_{w}Y_{w}Z_{w} is the world coordinate system, O_{C}X_{C}Y_{C}Z_{C} is the camera coordinate system, and O_{u}x_{u}y_{u} is the image coordinate system. O_{L} represents the location of the line-structured light projector, and O_{l}x_{m}y_{m}z_{m} is a local coordinate system on the light plane. The equation of the light plane under O_{C}X_{C}Y_{C}Z_{C} can be obtained from Eq. (20).
In the measurement, the image coordinates of the light stripe center are first transformed to the local coordinate system, as shown in Eq. (21).
where \( \left[\begin{array}{ccc}{r}_{11}^{\hbox{'}}& {r}_{12}^{\hbox{'}}& {T}_x^{\hbox{'}}\\ {}{r}_{21}^{\hbox{'}}& {r}_{22}^{\hbox{'}}& {T}_y^{\hbox{'}}\\ {}{r}_{31}^{\hbox{'}}& {r}_{32}^{\hbox{'}}& {T}_z^{\hbox{'}}\end{array}\right]={\left[\begin{array}{ccc}{r}_{11}& {r}_{12}& {T}_x\\ {}{r}_{21}& {r}_{22}& {T}_y\\ {}{r}_{31}& {r}_{32}& {T}_z\end{array}\right]}^{-1} \).
Then, the local world coordinates are changed to the camera coordinate system, as shown in Eq. (22).
The rotation matrix is shown in Eq. (23), and the translation matrix is shown in Eq. (24).
where \( \left\{\begin{array}{l}\alpha =\arctan \left(\frac{B}{A\sin \beta +C\cos \beta}\right)\\ {}\beta =\arctan \left(\frac{A}{C}\right)\\ {}\\ {}\gamma =0\end{array}\right. \).
Measurement experiments
Experiments are conducted to assess the utility of the measurement model. The experimental equipment is shown in Fig. 13, and its main parameters are listed in Table 1. The experiment used a set of block gauges. When the nominal height of a block gauge is in the range of 1 to 10 mm, the limit deviation is ± 0.20 μm; when the nominal height is in the range of 10 to 25 mm, the limit deviation is ± 0.30 μm.
The images of the gauges are captured by the camera, as shown in Fig. 14. Firstly, the subpixel coordinates of light stripe 1 and light stripe 2 are respectively obtained; the depth information of the two light stripes changes at the turning point shown in Fig. 14. Secondly, the corresponding external parameters of the light plane and the target plane are respectively obtained from the equations of the light plane and the target plane.
The subpixel coordinates of the light stripe centers are obtained by the method presented in this paper, and the depth information of the two light stripes changes at the turning points. From the coordinates of the two light stripe centers, the two spatial line equations are calculated by straight line fitting. In order to improve the fitting accuracy, the constraint shown in Eq. (25) is applied.
where K_{1} and K_{2} are respectively the slopes of the two fitted lines.
According to the light plane equation and the target plane equation, the distance between two parallel lines on the light plane and the distance between two parallel lines on the target plane can be obtained respectively. Based on the triangle relationship as shown in Fig. 8, the actual height of the measured object can be obtained, and the result is the height of the block gauge in the experiment. The experimental results are shown in Table 2.
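The distance between two parallel spatial lines, which the height computation relies on, can be sketched as follows; since Eq. (25) constrains the two fitted stripe lines to a common slope, one shared direction vector suffices (the points and direction below are illustrative).

```python
import numpy as np

def parallel_line_distance(p1, p2, d):
    """Distance between two parallel 3-D lines through p1 and p2 with
    common direction d: the component of (p2 - p1) perpendicular to d."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    offset = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    return float(np.linalg.norm(np.cross(offset, d)))

# Two lines along the x axis, offset by 3 in y and 4 in z: distance 5.
dist = parallel_line_distance([0.0, 0.0, 0.0], [0.0, 3.0, 4.0], [1.0, 0.0, 0.0])
```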
Through the calculation of the experimental data, the mean value of the error is 0.0191 mm, and the error standard deviation is 0.0029.
In order to compare the complexity of the stripe extraction method proposed in this paper with the STEG algorithm, the running times of the gauge measurement experiments are recorded separately. The running times are shown in Table 3; the detection speed of the light stripe center is significantly improved.
Conclusions
In this paper, a three-dimensional measurement model based on single-line structured light is studied. To improve the calibration accuracy, a circular initialization window is proposed for camera calibration. A method for calculating the normal direction of the light stripe based on principal component analysis is proposed, and the rationality of the algorithm is verified by comparison with the normal direction obtained from the Hessian matrix. To avoid a large number of two-dimensional Gaussian convolution operations, bilinear interpolation and parabolic fitting along the normal direction are used. Finally, by measuring the height of a standard block gauge, it is shown that the improved measurement algorithm achieves higher accuracy and speed for the measurement of 3D objects.
Abbreviations
3D:
Three-dimensional
NCC:
Normalized correlation coefficient
PCA:
Principal component analysis
STEG:
Steger algorithm
References
1. Q. Sun, Y. Hou, Q. Tan, C. Li, Shaft diameter measurement using a digital image. Opt. Lasers Eng. 55, 183–188 (2014)
2. P. D’Angelo, C. Wöhler, Image-based 3D surface reconstruction by combination of photometric, geometric, and real-aperture methods. ISPRS J. Photogramm. Remote Sens. 63(3), 297–321 (2008)
3. H. Schreier, J.J. Orteu, M.A. Sutton, Image Correlation for Shape, Motion and Deformation Measurements (Springer US, 2009)
4. S.V.D. Jeught, J.J.J. Dirckx, Real-time structured light profilometry: a review. Opt. Lasers Eng. 87, 18–31 (2016)
5. C. Guan, L.G. Hassebrook, D.L. Lau, V.G. Yalla, Improved composite-pattern structured-light profilometry by means of postprocessing. Opt. Eng. 47(47), 7203 (2008)
6. Y. Zhu, A. Li, W. Pan, Discussions on phase-reconstructing algorithms for 3D digitizing structure-light profilometry. Optik - Int. J. Light Electron Opt. 122(2), 162–167 (2011)
7. X.y. Su, Q.c. Zhang, Dynamic 3D shape measurement method: a review. Opt. Lasers Eng. 48(2), 191–204 (2010)
8. B. Liu, P. Wang, Y. Zeng, C. Sun, Measuring method for micro-diameter based on structured-light vision technology. Chin. Opt. Lett. 8, 666–669 (2009)
9. S. Liu, Q. Tan, Y. Zhang, Shaft diameter measurement using structured light vision. Sensors 15(8), 19750–19767 (2015)
10. Z. Gong, J. Sun, G. Zhang, Dynamic measurement for the diameter of a train wheel based on structured-light vision. Sensors 16(4), 564 (2016)
11. Z. Zhang, A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)
12. D. Claus, A.W. Fitzgibbon, A rational function lens distortion model for general cameras, in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1 (2005), pp. 213–219
13. Z. Xiaodong, Z. Yachao, T. Qingchang, et al., New method of cylindricity measurement based on structured light vision technology. J. Jilin Univ. 47(2), 524–529 (2017)
14. L. Qi, Y. Zhang, X. Zhang, S. Wang, F. Xie, Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger’s algorithm. Opt. Express 21(11), 13442–13449 (2013)
15. H. Cai, Z. Feng, Z. Huang, Centerline extraction of structured light stripe based on principal component analysis. Chin. J. Lasers 42(3), 270–275 (2015)
16. J. Heikkila, Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 22, 1066–1076 (2000)
17. R.Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 3(4), 323–344 (1987)
18. J.J. Moré, The Levenberg-Marquardt algorithm: implementation and theory, in Numerical Analysis (Springer, Berlin, Heidelberg, 1978), pp. 105–116
19. D. Fajjie, L. Fengmei, Y. Shenghua, A new accurate method for the calibration of line structured light sensor. Chin. J. Sci. Instrum. 21(1), 108–110 (2000)
About the authors
Siyuan Liu was born in Shanxi, China, in 1985. He earned his Ph.D. degree in Mechanical Engineering from Jilin University in 2016. He is currently working in College of Mechanical Science and Engineering, Jilin University. His research interests include machine vision, structured light measurement, and intelligent manufacturing.
Yachao Zhang was born in Henan, China, in 1990. He earned his Master’s degree in Mechanical Engineering from Jilin University in 2017. He is currently pursuing a Ph.D. degree in Biomedical Imaging and Guidance Technology at the City University of Hong Kong. His research interests include photoacoustic imaging reconstruction algorithm, highintensity focused ultrasound (HIFU), and intelligent medical instrumentation development.
Yunhui Zhang is currently a professor and a master’s tutor. She earned her Ph.D. degree in Mechanical Engineering from Jilin University in 2011. She graduated in 1995 and stayed in school to teach at the College of Mechanical Science and Engineering, Jilin University. Her research interests include machine vision and digital image measurement technology.
Tianchi Shao received the B.S. degree in Engineering from Changchun University of Science and Technology, Changchun, China, in 2017. He is currently pursuing a master’s degree in Jilin University. His research interests include image processing and machine vision.
Minqiao Yuan received the B.S. degree in Software Engineering from Jilin University, Changchun, China, in 2015. He is currently pursuing a master’s degree in Jilin University. His research interests include image processing and machine vision.
Funding
This work was supported in part by grants from the Natural Science Foundation of Jilin Province (No. 2017010121JC) and the Advanced Manufacturing Project of Provincial School Construction of Jilin Province (No. SXGJSF20172).
Availability of data and materials
The data are available from the authors upon request.
Author information
Affiliations
Contributions
All authors took part in the discussion of the work described in this paper. SL wrote the first version of the paper. YcZ performed the experiments. YhZ participated in the design of part of the structured light measurement algorithms. TS and MY participated in the validation experiments verifying the model accuracy and organized the experimental data. SL, YhZ, YcZ, and TS revised different versions of the paper. All authors read and approved the final manuscript.
Corresponding author
Correspondence to Yunhui Zhang.
Ethics declarations
Ethics approval and consent to participate
Approved.
Consent for publication
Approved.
Competing interests
The authors declare that they have no competing interests. All authors have seen the manuscript and approved its submission to the journal. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Liu, S., Zhang, Y., Zhang, Y. et al. Research on 3D measurement model by line structure light vision. J Image Video Proc. 2018, 88 (2018). doi:10.1186/s13640-018-0330-6
Keywords
 Line structure light
 3D measurement model
 Template matching
 Principal component analysis