
Research on image enhancement of light stripe based on template matching

Abstract

The detection accuracy of light stripe centers is an important factor in structured light vision measurement, and the quality of the light stripe images is a prerequisite for accurately detecting the centers. This paper proposes image enhancement methods for linear and arc-shaped light stripe images. For linear light stripes, an image of good quality is captured, and the gray-scale distribution of the normal cross-section at each light stripe center is used as a template for stripe images of poor quality; the poor-quality images are then refined by linear interpolation. For arc-shaped light stripes, a positioning method for the stripe centers on the arc is proposed, and the gray-scale distribution of the normal cross-section of the corresponding center on a good-quality stripe image is used as a template to improve the poor-quality images. To verify the effectiveness of the enhancement algorithms, verification methods are presented for both linear and arc light stripes. The experiments show that the quality of the light stripe images is improved by the proposed enhancement algorithms.

1 Introduction

Measurement methods based on machine vision have undergone in-depth research and rapid development in the three-dimensional measurement of mechanical parts [1,2,3]. Owing to its large range, good robustness, and high precision, measurement based on structured light vision has been widely applied to part size measurement [4,5,6,7]. Structured light sources can be divided into different types according to their characteristics; this paper mainly studies the widely used line structured light vision.

Structured light measurement technology can be divided into three parts: vision system calibration [8], light stripe center detection [9], and measurement model establishment. The detection precision of the structured light stripe centers is an important factor affecting the measurement accuracy of the vision system. The detection accuracy of the light stripe centers is affected not only by the detection algorithm; the quality of the light stripe images is also a prerequisite for improving the detection accuracy. In a good light stripe image, the gray-scale distribution of the stripe should be uniform and have high contrast. In measurement, the material and surface shape of the measured object have a great influence on the light stripe images. For surfaces with high reflectance, specular reflection occurs when the structured light is projected onto the surface, which lowers the quality of the acquired images. To meet performance requirements, most mechanical parts undergo surface treatment, which increases the surface reflectivity of the parts. Therefore, the quality of light stripe images captured on the surfaces of mechanical parts is generally poor. Two approaches are usually used to address this problem: one is to improve the robustness of the light stripe center detection algorithms; the other is to enhance the quality of the light stripe images with image enhancement algorithms.

Many algorithms have been proposed for the detection of light stripe centers. Steger [10] proposed a light stripe center detection algorithm based on the Hessian matrix; the algorithm has high detection accuracy and good robustness. However, it requires large-scale convolution operations, so its detection speed is relatively low. To overcome this defect of Steger's algorithm, Cai [11] proposed a structured light center extraction algorithm based on principal component analysis. The algorithm first obtains the ROI on the light stripe image and then obtains the sub-pixel coordinates of the centers through a Taylor expansion in the normal direction. Sun [12] proposed an extraction method based on the gray level moment; the method can eliminate noise in the image while preserving the original information of the light stripe, and it achieves good detection accuracy for light stripe images with a certain amount of noise.

Image enhancement is an image processing technology widely used to improve image quality [13, 14]. Wang [15] proposed an image enhancement method based on equal-area dualistic sub-image histogram equalization. The algorithm divides the original image into two sub-images, equalizes each sub-image separately, and then composes the processed sub-images into the final image. This method enhances the image while preserving the original image information. Li [16] proposed a new robust Retinex model based on the classic Retinex model; it enhances low-light images corrupted by noise and can be used for underwater or remote sensing images with heavy noise. Pan [17] proposed a high-dynamic light stripe image enhancement method based on Retinex theory for complex environments, but this method is mainly suitable for light stripe images whose gray distribution is fully Gaussian.

In recent years, image enhancement technology has been widely studied, but these techniques are mainly applied to remote sensing images, medical images, or videos; enhancement techniques for light stripe images in industrial measurement have rarely been studied. Given the special grayscale distribution of light stripes and the current state of structured light measurement techniques, it is of great significance to study enhancement technology for light stripe images.

This paper proposes image enhancement algorithms for linear and arc light stripes based on template matching. Good-quality images of the light stripes are used as templates to enhance images of poor quality. To ensure the consistency of the light stripe features, the laser used to capture the template images is the same as the laser used for the images to be enhanced. To verify the effect of the enhancement algorithms, evaluation experiments are designed for linear and arc-shaped light stripe centers, and they show that the image enhancement algorithms proposed in this paper have a certain effect on improving image quality.

This paper is organized as follows: Section 2 outlines the light stripe image enhancement algorithms, Section 3 proposes the light stripe center detection evaluation methods, Section 4 reports the experimental results used to test the effects of the enhancement algorithms, and Section 5 provides the study's conclusions.

2 Light stripe image enhancement algorithms

When light is projected onto the surface of the measured object, various reflections and refractions occur, among which specular reflection and diffuse reflection account for most of the light energy. For diffuse reflection, the scattered rays are essentially uniform in all directions; for specular reflection, in contrast, the scattered light is concentrated in one direction. The paths of the reflections and refractions on the measured object are shown in Fig. 1.

Fig. 1 Reflections and refractions on the object

In the measurement, when diffuse reflection dominates on the surface of the measured object, the quality of the light stripe image captured by the camera is good and the gray scale along the light stripe is evenly distributed. When specular reflection dominates, two situations may arise: first, when the camera is far from the reflected light path of the laser, the gray scale distribution of the light stripe in the image is not uniform and the stripe is very thin; second, when the camera is close to the reflected light path of the laser, the light stripe is too wide and the gray scale distribution is not uniform. In either situation, the poor quality of the light stripe images reduces the accuracy of light stripe center detection.

To solve the problem of inaccurate detection of light stripe centers caused by poor-quality light stripe images, this paper analyzes the gray distribution of light stripes on different surfaces and proposes two light stripe image correction algorithms for different shapes of measured objects.

2.1 Line structure light stripe gray scale distribution model

When the line structured light is projected onto the surface of the measured object, a light stripe with a certain width is captured by the camera during the measurement. Therefore, determining the gray scale distribution model of the stripe image plays an important role in the light stripe image enhancement algorithm. Because the gray scale distribution of the light stripe is affected by the material and shape of the measured object, this paper compares the gray scale distributions of light stripe images captured on measured objects of different materials and shapes. The light stripe images formed on the different measured objects are shown in Fig. 2.

Fig. 2 Light stripe images on different measured objects. a Paper plane. b Metal plane. c Metal curved surface. d Specularly reflective metal surface

To comprehensively analyze the gray distribution on the cross sections of the stripe, ten cross sections are taken on each stripe image in Fig. 2. The fitting residual of the gray scale on each cross section is obtained by second-order polynomial fitting and Gaussian fitting. The second-order polynomial model and the Gaussian model are, respectively:

$$ I(x) = a_1 x^2 + a_2 x + a_3 $$
(1)
$$ I(x) = A \cdot e^{-\frac{\left(x - x_0\right)^2}{2\sigma^2}} $$
(2)

In Eqs. (1) and (2), x is the pixel abscissa in the stripe image and I(x) is the gray value of the pixel. Table 1 shows the Gaussian fitting residuals and the second-order polynomial fitting residuals on each section of the light stripe images; in the table, method A denotes the second-order polynomial fitting and method B denotes the Gaussian fitting. The experimental images are taken with a lens focal length of 25 mm, an aperture of F4, a laser power of 18 mW, and a camera exposure time of 30 ms; the equipment parameters are listed in Table 2.

Table 1 Comparison of light stripes gray distribution model fitting residual
Table 2 Experimental equipment parameters

According to the table, the fitting residuals of the Gaussian model are smaller than those of the second-order polynomial model, so it is reasonable to use the Gaussian model to represent the gray scale distribution of the light stripe on the measured object. When the light stripe is a straight line, the gray scale distribution on each cross section is relatively uniform and the fitting residual varies little from section to section. When the light stripe is curved, the fitting residual varies greatly across sections. Thus, the shape and material of the measured object have a great influence on the gray scale distribution of the light stripe.
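As a concrete illustration of this comparison, the following minimal sketch fits one cross-section profile with both models and reports the root-mean-square residuals. It is not the authors' code: the NumPy/SciPy calls, the synthetic profile, and the initial guesses are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, A, x0, sigma):
    """Gaussian gray-scale model of Eq. (2)."""
    return A * np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2))

def fit_residuals(x, gray):
    """Return the RMS residuals of the second-order polynomial fit (Eq. 1)
    and of the Gaussian fit (Eq. 2) for one stripe cross-section."""
    # Method A: second-order polynomial I(x) = a1*x^2 + a2*x + a3
    coeffs = np.polyfit(x, gray, 2)
    res_poly = np.sqrt(np.mean((np.polyval(coeffs, x) - gray) ** 2))

    # Method B: Gaussian fit, initialized from simple sample statistics
    p0 = [gray.max(), x[np.argmax(gray)], 0.25 * (x[-1] - x[0])]
    popt, _ = curve_fit(gaussian, x, gray, p0=p0, maxfev=10000)
    res_gauss = np.sqrt(np.mean((gaussian(x, *popt) - gray) ** 2))
    return res_poly, res_gauss

# Hypothetical cross-section: pixel offsets and gray values along the normal
x = np.arange(-10.0, 11.0)
gray = gaussian(x, 200.0, 0.3, 2.5) + np.random.normal(0.0, 3.0, x.size)
print(fit_residuals(x, gray))
```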

2.2 Image enhancement algorithm for linear light stripe

As shown in Fig. 3a, a gauge block is placed on a black paper plane, and the line structured light is projected onto the target, which consists of the gauge block and the paper plane. Since the specular reflection on the surface of the gauge block is stronger than on the paper surface, the quality of the light stripe image in Fig. 3b is significantly better than that in Fig. 3c.

Fig. 3 Grayscale images of the linear light stripe. a The target. b The stripe on the paper plane. c The stripe on the gauge block

To compare the gray distributions of the light stripes, three cross-sections are taken from the stripe images corresponding to the paper plane and the gauge block surface, respectively. The gray level distribution of each section corresponding to the paper plane is uniform, and the distributions are essentially the same, as shown in Fig. 4a. Due to the surface material, the gray scale distributions of the three cross-sections corresponding to the gauge block surface are not uniform and vary greatly between sections, as shown in Fig. 4b.

Fig. 4 Cross-sectional grayscale distributions of the light stripe images. a The grayscale distribution on the paper plane. b The grayscale distribution on the gauge block

To solve this problem, the light stripe centers and the normal direction of each center are obtained by the Steger algorithm. The gray distribution of the light stripe in the normal direction is then given by Eq. (3).

$$ I\left(x_0^i + t n_x^i,\, y_0^i + t n_y^i\right) = I\left(x_0^i, y_0^i\right) + \left(t n_x^i,\, t n_y^i\right)\begin{bmatrix} I_x^i \\ I_y^i \end{bmatrix} + \frac{1}{2}\left(t n_x^i,\, t n_y^i\right)\begin{bmatrix} I_{xx}^i & I_{xy}^i \\ I_{xy}^i & I_{yy}^i \end{bmatrix}\begin{bmatrix} t n_x^i \\ t n_y^i \end{bmatrix} $$
(3)
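As an illustration of how Eq. (3) evaluates the gray values along the normal, the sketch below expands the image around one center point using derivative images. The use of Gaussian-derivative filters and the rounding of the center to the nearest pixel are simplifying assumptions made here; in the paper these derivatives come from the Steger detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normal_profile_taylor(image, x0, y0, nx, ny, d, sigma=1.5):
    """Second-order Taylor expansion of the gray value along the normal of
    one stripe center, cf. Eq. (3). Partial derivatives are taken from
    Gaussian-derivative images and the center is rounded to the nearest
    pixel, both simplifications for illustration."""
    img = image.astype(float)
    Ix  = gaussian_filter(img, sigma, order=(0, 1))   # dI/dx (x = column index)
    Iy  = gaussian_filter(img, sigma, order=(1, 0))   # dI/dy (y = row index)
    Ixx = gaussian_filter(img, sigma, order=(0, 2))
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    r, c = int(round(y0)), int(round(x0))
    t = np.arange(-d, d + 1, dtype=float)             # offsets along the normal
    return (img[r, c]
            + t * (nx * Ix[r, c] + ny * Iy[r, c])
            + 0.5 * t ** 2 * (nx ** 2 * Ixx[r, c]
                              + 2.0 * nx * ny * Ixy[r, c]
                              + ny ** 2 * Iyy[r, c]))
```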

In theory, the light stripe on a plane is a uniform straight line, and the grayscale distribution on each normal section is essentially the same, as shown in Fig. 4a. Therefore, the mean gray values of the corresponding pixel points across the normal sections are used as the template of the light stripe in the enhancement algorithm, as shown in Eq. (4).

$$ \mathbf{II} = \left[\, \frac{1}{N}\sum_{i=1}^{N} I\left(x_0^i - d n_x^i,\, y_0^i - d n_y^i\right) \quad \frac{1}{N}\sum_{i=1}^{N} I\left(x_0^i - (d-1) n_x^i,\, y_0^i - (d-1) n_y^i\right) \quad \cdots \quad \frac{1}{N}\sum_{i=1}^{N} I\left(x_0^i,\, y_0^i\right) \quad \cdots \quad \frac{1}{N}\sum_{i=1}^{N} I\left(x_0^i + (d-1) n_x^i,\, y_0^i + (d-1) n_y^i\right) \quad \frac{1}{N}\sum_{i=1}^{N} I\left(x_0^i + d n_x^i,\, y_0^i + d n_y^i\right) \,\right] $$
(4)

where N is the number of normal sections and the width of the light stripe is D = 2d.
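A minimal sketch of the template construction in Eq. (4) is given below. It assumes the stripe centers and unit normals are already available (for example, from a Steger-type detector, which is not reproduced here), and it samples the gray values along each normal with bilinear interpolation; the function and variable names are illustrative only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def build_stripe_template(image, centers, normals, d):
    """Average the gray profiles sampled along the normal of every stripe
    center (offsets -d .. d), yielding the template II of Eq. (4).

    image   : 2-D gray image of the good-quality stripe
    centers : (N, 2) array of sub-pixel centers (x, y)
    normals : (N, 2) array of unit normal vectors (nx, ny)
    d       : half-width of the stripe in pixels, D = 2*d
    """
    offsets = np.arange(-d, d + 1)
    profiles = []
    for (x0, y0), (nx, ny) in zip(centers, normals):
        xs = x0 + offsets * nx                 # sub-pixel sample positions
        ys = y0 + offsets * ny
        # map_coordinates expects (row, col) = (y, x); order=1 is bilinear
        profiles.append(map_coordinates(image.astype(float), [ys, xs], order=1))
    return np.mean(profiles, axis=0)           # template of length 2*d + 1
```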

After the template is obtained, the poor-quality light stripe image is processed. First, the center of each section \( {\left({x}_1^i,{y}_1^i\right)}_{i=1,2\cdots {N}_1} \) on the poor-quality light stripe and the corresponding normal direction \( {\left({n}_{x1}^i,{n}_{y1}^i\right)}_{i=1,2\cdots {N}_1} \) are obtained by the Steger algorithm. Then the gray distribution of each section along the normal direction is replaced with the template according to Eq. (5).

$$ \left.\begin{array}{l} I\left(x_1^i - d n_{x1}^i,\, y_1^i - d n_{y1}^i\right) = \frac{1}{N}\sum_{i=1}^{N} I\left(x_0^i - d n_x^i,\, y_0^i - d n_y^i\right) \\ \qquad\vdots \\ I\left(x_1^i,\, y_1^i\right) = \frac{1}{N}\sum_{i=1}^{N} I\left(x_0^i,\, y_0^i\right) \\ \qquad\vdots \\ I\left(x_1^i + d n_{x1}^i,\, y_1^i + d n_{y1}^i\right) = \frac{1}{N}\sum_{i=1}^{N} I\left(x_0^i + d n_x^i,\, y_0^i + d n_y^i\right) \end{array}\right\} $$
(5)

Since the pixel coordinates of the center points and the normal directions are not integers, the gray values at integer pixels must be obtained for subsequent processing of the enhanced light stripe image. Let P be the j-th point along the normal direction of the i-th center on the stripe, with pixel coordinates (x_j^i, y_j^i), j = 1, 2, …, D, and gray value I_j^i. Let Q be the point adjacent to P in the normal direction, with pixel coordinates (x_{j+1}^i, y_{j+1}^i) and gray value I_{j+1}^i. Rounding P gives the integer pixel O, with coordinates (x0, y0). The gray value of O is obtained by linear interpolation from the gray values of P and Q.

$$ I_0 = I_j^i + \frac{I_{j+1}^i - I_j^i}{x_{j+1}^i - x_j^i}\left(x_0 - x_j^i\right) $$
(6)

where I0 is the gray value of O. The result of applying the enhancement algorithm to a linear light stripe is shown in Fig. 5.
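The replacement of Eq. (5) together with the resampling onto integer pixels around Eq. (6) might be sketched as follows. The centers and normals of the poor-quality stripe are again assumed to come from a Steger-type detector, an 8-bit gray image is assumed, and writing the interpolated template value to the rounded pixel is a simplification of the procedure described above.

```python
import numpy as np

def enhance_linear_stripe(image, centers, normals, template, d):
    """Overwrite the normal profile of each poor-quality stripe center with
    the template (Eq. 5), writing to integer pixels via the linear
    interpolation of Eq. (6)."""
    out = image.astype(float).copy()
    offsets = np.arange(-d, d + 1, dtype=float)
    for (x1, y1), (nx, ny) in zip(centers, normals):
        xs = x1 + offsets * nx                  # sub-pixel positions along the normal
        ys = y1 + offsets * ny
        for j in range(len(offsets) - 1):
            xo, yo = int(round(xs[j])), int(round(ys[j]))   # integer pixel O
            if not (0 <= yo < out.shape[0] and 0 <= xo < out.shape[1]):
                continue
            dx = xs[j + 1] - xs[j]
            t = (xo - xs[j]) / dx if abs(dx) > 1e-9 else 0.0
            # Linear interpolation between adjacent template samples, cf. Eq. (6)
            out[yo, xo] = template[j] + t * (template[j + 1] - template[j])
    return out.clip(0, 255).astype(np.uint8)    # 8-bit gray image assumed
```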

Fig. 5 Comparison of light stripe images before and after optimization. a The light stripe image before enhancement. b The light stripe image after enhancement

2.3 Image enhancement algorithm for arc light stripe

The gray scale distribution of the light stripe on a cylindrical surface differs from that of the light stripe on a plane, so this section proposes an image enhancement algorithm for the arc light stripe.

Figure 6 shows images of the light stripes projected by the line laser onto the surfaces of shafts with different reflectivity. Three cross-sections are taken from each light stripe image. As shown in Fig. 7, the gray scale distribution of each cross-section is relatively uniform on the shaft with weak specular reflection; on the shaft with strong specular reflection, however, the gray scale distribution of the cross sections is not uniform and the symmetry of the gray values is poor. Compared with the linear light stripe, the width of the sections of the arc light stripe changes significantly, and the stripe is narrower near its edges, as shown in Figs. 6 and 7. To solve this problem, an enhancement algorithm for the arc light stripe is proposed.

Fig. 6 Comparison of curved light stripe images. a Better-quality light stripe image. b Poor-quality light stripe image

Fig. 7 Cross-sectional grayscale distributions of the curved light stripes. a Gray distribution on the weakly reflective shaft. b Gray distribution on the strongly reflective shaft

Figures 8 and 9 respectively show the positional relationships of the feature points on the light stripes with good and poor quality; the good-quality light stripe is denoted stripe 1 and the poor-quality light stripe is denoted stripe 2. First, the pixel coordinates of the light stripe centers P_i(u_i, v_i), i = 1, 2, …, N, and the normal vector (n_u^i, n_v^i), i = 1, 2, …, N, corresponding to each center are obtained by the Steger algorithm on the better-quality image. The gray distribution in the normal direction of each center is calculated by Eq. (3). Since the grayscale distribution differs from center to center along the stripe, the relative position of each center point on the arc needs to be determined. As shown in Fig. 8, let P_i be a center point of the light stripe and M the starting point of the arc, i.e., the first point of the set of centers. The point O is the center of the arc, and its coordinates are obtained by fitting the pixel coordinates of the stripe centers. α_i is the angle between the straight lines OM and OP_i, and α = [α_1, α_2, …, α_N].

Fig. 8 Position relationship of the centers on stripe 1

Fig. 9 Position relationship of the centers on stripe 2

Second, the pixel coordinates of the centers Q_i(x_i, y_i), i = 1, 2, …, K, and the corresponding normal vectors (n_x^i, n_y^i), i = 1, 2, …, K, are obtained by the same process on the poor-quality image. The point O_1 is the center of the arc, and its coordinates are obtained by fitting the coordinates of Q_i. Let M_1 be the starting point of the stripe and N_1 the end point. β_i is the angle between the straight lines O_1M_1 and O_1Q_i, and β = [β_1, β_2, …, β_K].

Finally, the value closest to β_i is found in the array α. Assuming α_j is closest to β_i, the gray distribution corresponding to the j-th center point of stripe 1 is used as the template. The gray values in the normal direction of Q_i on stripe 2 are replaced with this template, and the replacement process is the same as for the linear light stripe. Following these steps, the poor-quality light stripe image can be enhanced; the light stripe images before and after enhancement are shown in Fig. 10.
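A rough sketch of this matching step is given below, under the assumption that the arc center is obtained with a simple algebraic circle fit and that the detected centers are ordered starting from M (or M_1); the helper names are illustrative.

```python
import numpy as np

def fit_circle_center(points):
    """Algebraic (Kasa) least-squares circle fit; returns the center (cx, cy)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2                        # x^2 + y^2 = c1*x + c2*y + c3
    c1, c2, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([c1 / 2.0, c2 / 2.0])

def arc_angles(centers, circle_center):
    """Angle of each center about the arc center, measured from the first
    point of the set (the starting point), assuming the arc does not cross
    the +/- pi branch of atan2."""
    vecs = centers - circle_center
    theta = np.arctan2(vecs[:, 1], vecs[:, 0])
    return np.abs(theta - theta[0])

def match_templates(alpha, beta):
    """For each angle beta_i on the poor-quality stripe, return the index of
    the closest angle alpha_j on the good-quality stripe."""
    return np.array([np.argmin(np.abs(alpha - b)) for b in beta])
```

Each matched index selects one normal-section profile of stripe 1, which then replaces the corresponding profile on stripe 2 exactly as in the linear case.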

Fig. 10 Comparison of light stripe images before and after optimization (cylindrical surface). a The light stripe image before enhancement. b The light stripe image after enhancement

3 Method—light stripe center detection evaluation methods

Light stripe center detection is an important step in structured light measurement. Because the gray scale distribution of the light stripe differs markedly with the shape of the measured object, two evaluation methods are proposed: one for the case where the light stripe is a straight line on a plane and one for the case where it is an arc on a cylindrical surface.

3.1 An evaluation method for light stripe on the plane

As shown in Fig. 11, a gauge block is placed on the black plane target, with the lower surface of the gauge block coplanar with the target plane. L1 is the light stripe on the target plane and L2 is the light stripe on the plane of the gauge block. The straight line l1 is obtained by fitting the centers of light stripe L1, and l2 is obtained by fitting the centers of light stripe L2. di is the pixel distance from the i-th center point to l1, where i = 1, 2, …, N, as shown in Fig. 11b. Because l1 is parallel to l2, the consistency of the light stripe detection algorithm can be evaluated by the variance of the array D = [d1, d2, …, dN]: the smaller the variance, the better the consistency of the center points.
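A possible implementation of this evaluation is sketched below. It assumes the detected centers of L1 and L2 are available as (x, y) arrays, that the distances di are measured from the centers of L2 to the line fitted to L1, and that the stripe is not vertical in the image.

```python
import numpy as np

def line_consistency_variance(centers_l1, centers_l2):
    """Fit the reference line l1 to the centers of stripe L1 and return the
    variance of the pixel distances from the detected centers of L2 to l1."""
    # l1: y = k*x + b  ->  k*x - y + b = 0
    k, b = np.polyfit(centers_l1[:, 0], centers_l1[:, 1], 1)
    x2, y2 = centers_l2[:, 0], centers_l2[:, 1]
    dists = np.abs(k * x2 - y2 + b) / np.sqrt(k ** 2 + 1.0)   # D = [d1 ... dN]
    return np.var(dists)
```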

Fig. 11 The light stripe on the target and on the gauge block. a Light stripes on the target and on the block. b Schematic diagram of the method

3.2 An evaluation algorithm for light stripe on the cylindrical surface

When the line structured light is projected onto a cylindrical surface, the light stripe is an arc. Therefore, this paper proposes a method for evaluating the light stripe center detection algorithm on the cylindrical surface. As shown in Fig. 12, the centers of the light stripe are divided into five parts. To improve the fitting accuracy of the ellipse, five center points are selected from each part, and these points together make up one set of fitting points (the red points in Fig. 12). Fitting an ellipse to the data points of the i-th group gives an ellipse center with pixel coordinates (x_0^i, y_0^i). In theory, all the centers of a light stripe should lie on the same ellipse, so the ellipse centers fitted from the different groups should coincide. However, the light stripe centers obtained by the detection algorithm contain errors, and the larger these errors, the more dispersed the fitted ellipse centers.

Fig. 12 Schematic diagram of light stripe (curve) center detection

The variances of the two arrays X = [x_0^1, x_0^2, …, x_0^N] and Y = [y_0^1, y_0^2, …, y_0^N] are calculated separately, where N is the number of fitted ellipse centers. The smaller the variances, the better the consistency of the light stripe centers.
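The evaluation might be implemented as sketched below, using OpenCV's cv2.fitEllipse on each group of sampled centers; the way the five points are drawn from each of the five parts is an assumption made here for illustration, since the exact sampling scheme follows Fig. 12.

```python
import numpy as np
import cv2

def ellipse_center_variances(centers, n_parts=5, pts_per_part=5):
    """Split the detected arc centers into n_parts segments, draw
    pts_per_part points from every segment to form one fitting set, fit an
    ellipse to each set, and return the variances of the fitted ellipse
    centers along x and y (smaller is more consistent)."""
    parts = np.array_split(np.asarray(centers, dtype=np.float32), n_parts)
    n_sets = min(len(p) // pts_per_part for p in parts)
    cx, cy = [], []
    for k in range(n_sets):
        # k-th fitting set: pts_per_part consecutive points from every segment
        pts = np.vstack([p[k * pts_per_part:(k + 1) * pts_per_part] for p in parts])
        (x0, y0), _, _ = cv2.fitEllipse(pts)    # center of the fitted ellipse
        cx.append(x0)
        cy.append(y0)
    return np.var(cx), np.var(cy)
```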

4 Experimental results and discussions

Experiments are conducted to assess the utility of the proposed methods. To ensure the consistency of the experiments, the light stripe centers are detected with the Steger algorithm in the images before and after optimization. The results of the image enhancement algorithms are evaluated with the methods proposed in Section 3, and the optimization effect is verified.

4.1 The evaluation experiment for the linear light stripe

First, gauge blocks of 2 to 7 mm are placed on the plane target, and the lower surface of each gauge block is held against the target plane by a large magnet. Then the line structured light is projected onto the target, as shown in Fig. 13a. To ensure the accuracy of the experiment, five light stripe images are collected for each gauge block by rotating the target, so that the stripes lie at different positions. The light stripe centers in the images before and after optimization are detected by the Steger algorithm, and the light stripe image enhancement algorithm is verified with the evaluation method proposed in this paper. The experimental results are shown in Table 3, and the light stripe images before and after enhancement are shown in Fig. 14.

Fig. 13 Experimental setups for the image enhancement algorithms. a The linear light stripe. b The arc light stripe

Table 3 Comparison of enhanced light stripe images (the linear light stripe)
Fig. 14 Light stripe images before and after enhancement for the 2 mm gauge block. a The template image. b The image before enhancement. c The image after enhancement

According to Table 3, the uniformity of the light stripe centers after enhancement is better than before processing. Therefore, the enhancement algorithm improves the image quality of the light stripe.

4.2 The evaluation experiment for the arc light stripe

First, the line structured light is projected onto a surface-treated shaft, as shown in Fig. 13b. The specular reflectivity of this surface is lower than that of the other shafts in the experiment, so the light stripe image on this shaft is used as the template for the enhancement algorithm. To verify the effectiveness of the light stripe image enhancement algorithm, the line structured light is then projected onto four shafts with strong specular reflection. To ensure the accuracy of the experiment, five light stripe images are taken for each shaft while rotating it. The experimental conditions are the same as for the linear light stripe. The pixel coordinates of the centers before and after optimization are obtained by the Steger algorithm, and the consistency of the centers is compared with the evaluation method. The results of the experiment are shown in Table 4, and the light stripe images before and after enhancement are shown in Fig. 15.

Table 4 Comparison of enhanced light stripe images (the arc light stripe)
Fig. 15 Light stripe images before and after enhancement on the shafts. a The template image. b The image before enhancement. c The image after enhancement

According to Table 4, the variances of the vertical and horizontal coordinates of the fitted ellipse centers after enhancement are smaller than before enhancement, so the enhancement algorithm has a certain improvement effect on poor-quality light stripe images.

5 Conclusion

This paper presents light stripe image enhancement algorithms based on template matching. For the linear light stripe, the grayscale distribution of the normal sections of a good-quality stripe image is used as the matching template. For the arc light stripe, the light stripe centers are located on the arc, and the grayscale distribution of the normal section of the corresponding center on a good-quality stripe image is used as the template. To verify the effectiveness of the enhancement algorithms, the paper proposes two methods to evaluate the consistency of the light stripe centers, one for the linear light stripe and one for the arc light stripe. The experimental results show that the image enhancement algorithms have a certain effect on improving the quality of the light stripe images.

References

1. Q. Sun, Y. Hou, Q. Tan, C. Li, Shaft diameter measurement using a digital image. Opt. Lasers Eng. 55, 183–188 (2014)
2. G. Li, Q. Tan, Q. Sun, Y. Hou, Normal strain measurement by machine vision. Measurement 50(4), 106–114 (2014)
3. N. Zhou, A. Zhang, F. Zheng, L. Gong, Novel image compression–encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing. Opt. Laser Technol. 62(10), 152–160 (2014)
4. S. Liu, Q. Tan, Y. Zhang, Shaft diameter measurement using structured light vision. Sensors 15(8), 19750–19767 (2015)
5. Z. Xiao-dong, Z. Ya-chao, T. Qing-chang, et al., New method of cylindricity measurement based on structured light vision technology. J. Jilin Univ. (Engineering and Technology Edition) 47(2), 524–529 (2017)
6. P. Zhou, Y. Yu, W. Cai, S. He, G. Zhou, Non-iterative three dimensional reconstruction method of the structured light system based on polynomial distortion representation. Opt. Lasers Eng. 100, 216–225 (2018)
7. C. Rosales-Guzmán, N. Hermosa, A. Belmonte, J.P. Torres, Measuring the translational and rotational velocities of particles in helical motion using structured light. Opt. Express 22(13), 16504–16509 (2014)
8. R.Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 3(4), 323–344 (1987)
9. L. Qi, Y. Zhang, X. Zhang, S. Wang, F. Xie, Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm. Opt. Express 21(11), 13442–13449 (2013)
10. C. Steger, An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998)
11. H. Cai, Z. Feng, Z. Huang, Centerline extraction of structured light stripe based on principal component analysis. Chin. J. Lasers 42(3), 270–275 (2015)
12. Q. Sun, J. Chen, C. Li, A robust method to extract a laser stripe centre based on grey level moment. Opt. Lasers Eng. 67, 122–127 (2015)
13. A. Polesel, G. Ramponi, V.J. Mathews, Image enhancement via adaptive unsharp masking. IEEE Trans. Image Process. 9(3), 505–510 (2000)
14. K.G. Lore, A. Akintayo, S. Sarkar, LLNet: a deep autoencoder approach to natural low-light image enhancement. Pattern Recogn. 61, 650–662 (2016)
15. Y. Wang, Q. Chen, B. Zhang, Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 45(1), 68–75 (1999)
16. M. Li, J. Liu, W. Yang, X. Sun, Z. Guo, Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 27(6), 2828–2841 (2018)
17. X. Pan, Z. Liu, High dynamic stripe image enhancement for reliable center extraction in complex environment. Int. Conf., 135–139 (2017). https://dl.acm.org/citation.cfm?id=3177406


Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

This work was supported in part by a grant from the Natural Science Foundation of Jilin Province (No. 2017010121JC) and the Advanced Manufacturing Project of Provincial School Construction of Jilin Province (No. SXGJSF2017-2).

Availability of data and materials

We can provide the data.

Author information


Contributions

All authors took part in the discussion of the work described in this paper. SL wrote the first version of the paper, and HB carried out the experiments. YZ, ZZ, and QT participated in the design of the structured light measurement algorithms. FL assisted with the validation experiments that verify the model accuracy and with organizing the experimental data. SL, YZ, HB, FL, and ZZ revised different versions of the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yunhui Zhang.

Ethics declarations

Authors’ information

Siyuan Liu was born in Shanxi, China, in 1985. He earned his Ph.D. degree in mechanical engineering from Jilin University in 2016. He is currently working in the College of Mechanical Science and Engineering, Jilin University. His research interests include machine vision, structured light measurement, and intelligent manufacturing.

Haojing Bao was born in Changchun, China, in 1986. She is currently pursuing a Ph.D. degree in the College of Mechanical Science and Engineering, Jilin University. Her research interests include machine vision, structured light measurement, and intelligent manufacturing.

Yunhui Zhang is currently a professor and a master's tutor. She earned her Ph.D. degree in mechanical engineering from Jilin University in 2011. She graduated in 1995 and has since taught at the College of Mechanical Science and Engineering, Jilin University. Her research interests include machine vision and digital image measurement technology.

Fenghui Lian is currently working in the School of Aviation Operations and Services, Air Force Aviation University, and she is currently pursuing a Ph.D. degree in the College of Mechanical Science and Engineering, Jilin University. Her research interests include image processing and machine vision.

Zhihui Zhang is currently a professor and a doctoral tutor, working in the Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University. His research interests include laser processing and bionic design.

Qingchang Tan is currently a professor and a doctoral tutor, and is currently working in the College of Mechanical Science and Engineering, Jilin University. His research interests include image processing and machine vision.

Competing interests

The authors declare that they have no competing interests. All authors have seen the manuscript and approved its submission to the journal. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Liu, S., Bao, H., Zhang, Y. et al. Research on image enhancement of light stripe based on template matching. J Image Video Proc. 2018, 124 (2018). https://doi.org/10.1186/s13640-018-0362-y
