
# Research on image enhancement of light stripe based on template matching

Siyuan Liu^{1, 2}, Haojing Bao^{1}, Yunhui Zhang^{1} (corresponding author), Fenghui Lian^{3}, Zhihui Zhang^{2}, and Qingchang Tan^{1}

**2018**:124

https://doi.org/10.1186/s13640-018-0362-y

© The Author(s). 2018

**Received:** 25 July 2018. **Accepted:** 19 October 2018. **Published:** 7 November 2018.

## Abstract

The detection accuracy of light stripe centers is an important factor in structured light vision measurement, and the quality of the light stripe images is a prerequisite for accurately detecting those centers. This paper proposes separate image enhancement methods for linear and arc-shaped light stripe images. For linear light stripes, an image of good quality is captured, and the gray-scale distribution of the normal section at each light stripe center is used as a template for stripe images of poor quality; the poor-quality images are then optimized by linear interpolation. For arc-shaped light stripes, this paper proposes a method for locating the light stripe centers on the arc; the gray-scale distribution of the normal cross-section of the corresponding center on a good-quality stripe image is then used as a template to improve the poor-quality stripe images. To verify the effectiveness of the enhancement algorithms, this paper presents separate verification methods for linear and arc light stripes. Experiments show that the quality of the light stripe images is improved by the proposed enhancement algorithms.

## Keywords

- Image enhancement
- Light stripe
- Template matching
- Images optimization

## 1 Introduction

Measurement methods based on machine vision have undergone in-depth research and rapid development in the three-dimensional measurement of mechanical parts [1–3]. Owing to its large range, good robustness, and high precision, measurement based on structured light vision has been widely applied to part size measurement [4–7]. Structured light sources can be divided into different types according to their characteristics; this paper mainly studies the widely used line structured light vision.

Structured light measurement technology can be divided into three parts: vision system calibration [8], light stripe center detection [9], and measurement model establishment. The detection precision of the structured light stripe centers is an important factor affecting the measurement accuracy of the vision system. The detection accuracy of the light stripe centers is affected not only by the detection algorithm; the quality of the light stripe images is also a prerequisite for improving detection accuracy. In a good light stripe image, the gray-scale distribution of the stripe should be uniform and have high contrast. In measurement, the material and surface shape of the measured object have a great influence on the light stripe images. For surfaces with high reflectance, specular reflection occurs when structured light is projected onto the surface, which lowers the quality of the acquired images. To meet machine equipment performance requirements, most mechanical parts undergo surface treatment, which raises the surface reflectivity of the parts. Therefore, the quality of light stripe images captured on machined part surfaces is generally poor. Two kinds of methods are usually used to solve this problem: one is to improve the robustness of the light stripe center detection algorithms; the other is to enhance the quality of the light stripe images by image enhancement algorithms.

Many algorithms have been proposed for detecting light stripe centers. Steger [10] proposed a light stripe center detection algorithm based on the Hessian matrix; the algorithm has high detection accuracy and good robustness, but it requires large-scale convolution operations, so the detection speed is relatively low. To overcome this defect, Cai [11] proposed a structured light center extraction algorithm based on principal component analysis: it first obtains the ROI region in the light stripe image and then obtains the sub-pixel coordinates of the centers through Taylor expansion in the normal direction. Sun [12] proposed an extraction method based on gray-level moments; the method can eliminate noise in the image while preserving the original information of the light stripe, and it achieves good detection accuracy for stripe images with a certain amount of noise.

Image enhancement is an image processing technology widely used to improve image quality [13, 14]. Wang [15] proposed image enhancement based on equal-area dualistic sub-image histogram equalization: the algorithm divides the original image into two sub-images, equalizes each separately, and composes the processed sub-images into the output image. This method enhances the image while preserving the original image information. Li [16] proposed a new robust Retinex model based on the classic Retinex model; it enhances low-light images degraded by noise and can be applied to underwater or remote sensing images with heavy noise. Pan [17] proposed a high-dynamic-range light stripe image enhancement method based on Retinex theory for complex environments, but this method is mainly suitable for light stripes whose profiles are fully Gaussian.

In recent years, image enhancement technology has been widely studied, but these techniques are mainly applied to remote sensing images, medical images, or videos; techniques for light stripe images in industrial measurement have rarely been studied. Given the special gray-scale distribution of light stripes and the current state of structured light measurement techniques, it is of great significance to study enhancement technology for light stripe images.

This paper proposes image enhancement algorithms for linear and arc light stripes based on template matching. Good-quality images of the light stripes are used as templates to enhance images of poor quality. To ensure the consistency of the light stripe features, the laser used to capture the templates is the same as the laser used for the images to be enhanced. To verify the effect of the enhancement algorithms, evaluation experiments are designed for linear and arc-shaped light stripe centers; they show that the proposed image enhancement algorithms have a measurable effect on improving the images.

This paper is organized as follows: Section 2 outlines the light stripe image enhancement algorithms, Section 3 proposes evaluation methods for light stripe center detection, Section 4 reports the experimental results used to test the effect of the enhancement algorithms, and Section 5 provides the study’s conclusions.

## 2 Light stripe image enhancement algorithms

In measurement, when diffuse reflection dominates on the surface of the measured object, the light stripe images captured by the camera are of good quality and the gray scale along the stripe is evenly distributed. In contrast, when specular reflection dominates, two situations may arise: first, when the camera is far from the reflected light path of the laser, the gray-scale distribution of the stripe in the image is not uniform and the stripe is very thin; second, when the camera is close to the reflected light path, the stripe is too wide and the gray-scale distribution is not uniform. In either situation, the poor quality of the light stripe images reduces the accuracy of light stripe center detection.

To solve the problem of inaccurate detection of light stripe centers caused by poor-quality stripe images, this paper analyzes the gray-scale distribution of light stripes on different surfaces and proposes two light stripe image correction algorithms for different measured object shapes.

### 2.1 Line structure light stripe gray scale distribution model

*x* is the pixel abscissa in the stripe image, and *I*(*x*) is the gray value of the pixel point. Table 1 shows the Gaussian fitting residuals and second-order polynomial fitting residuals on each section of the light stripe images; in the table, method A represents the second-order polynomial fit and method B represents the Gaussian fit. The experimental images are taken under the conditions that the lens focal length is 25 mm, the aperture is F4, the laser power is 18 mW, and the camera exposure time is 30 ms (Table 2).

Comparison of light stripe gray-scale distribution model fitting residuals (method A: second-order polynomial fit; method B: Gaussian fit)

| Section | Image (a) A | Image (a) B | Image (b) A | Image (b) B | Image (c) A | Image (c) B | Image (d) A | Image (d) B |
|---|---|---|---|---|---|---|---|---|
| Section 1 | 16.393 | 16.238 | 25.042 | 6.767 | 39.333 | 26.820 | 14.08 | 3.313 |
| Section 2 | 24.787 | 10.502 | 23.757 | 17.478 | 20.617 | 27.236 | 42.44 | 14.427 |
| Section 3 | 18.442 | 11.426 | 18.576 | 14.556 | 17.146 | 15.220 | 46.116 | 11.758 |
| Section 4 | 25.400 | 11.412 | 21.651 | 8.226 | 15.371 | 2.336 | 37.685 | 19.1 |
| Section 5 | 17.988 | 13.375 | 28.244 | 4.225 | 1.366 | 4 × 10 | 15.699 | 4.307 |
| Section 6 | 22.214 | 15.250 | 26.824 | 9.013 | 9 × 10 | 6 × 10 | 17.351 | 4.714 |
| Section 7 | 20.059 | 6.860 | 28.984 | 6.120 | 8.287 | 4 × 10 | 24.757 | 22.153 |
| Section 8 | 22.665 | 14.161 | 28.29 | 9.964 | 17.805 | 3.446 | 46.698 | 10.982 |
| Section 9 | 17.737 | 10.810 | 23.769 | 5.841 | 16.813 | 17.502 | 36.257 | 12.503 |
| Section 10 | 17.296 | 18.898 | 21.911 | 3.768 | 24.024 | 30.079 | 16.094 | 6.431 |
| Mean | 20.298 | 12.893 | 24.705 | 8.596 | 16.076 | 12.264 | 29.718 | 10.969 |

Experimental equipment parameters

| Equipment | Model no. | Main parameters |
|---|---|---|
| CCD camera | MER-125-30UM/UC | Resolution: 1292 × 964 |
| Lens | M0814-MP | Focal length: 25 mm |
| Line projector | LH650-80-3 | Wavelength: 650 nm |

According to the table, the fitting residuals of the Gaussian model are smaller than those of the second-order polynomial model, so it is reasonable to use the Gaussian model to represent the gray-scale distribution of the light stripe on the measured object. When the light stripe is a straight line, the gray-scale distribution on each cross-section is relatively uniform, and the fitting residual varies little across cross-sections. When the light stripe is curved, the fitting residual varies greatly across cross-sections. The shape and material of the measured object therefore have a great influence on the gray-scale distribution of the light stripe.
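The comparison between the two models can be reproduced in outline with a short numpy sketch. The data here are synthetic, the residual is assumed to be a sum of squared errors, and the Gaussian fit uses the standard log-parabola linearization rather than the paper's (unstated) fitting routine:

```python
import numpy as np

def quad_fit_residual(x, gray):
    """Method A: residual sum of squares of a second-order polynomial fit."""
    pred = np.polyval(np.polyfit(x, gray, 2), x)
    return float(np.sum((gray - pred) ** 2))

def gauss_fit_residual(x, gray):
    """Method B: Gaussian fit via the log-parabola linearization,
    ln I(x) = ln a - (x - mu)^2 / (2 sigma^2), which is quadratic in x."""
    c = np.polyfit(x, np.log(gray), 2)
    pred = np.exp(np.polyval(c, x))
    return float(np.sum((gray - pred) ** 2))

# Synthetic cross-section with a Gaussian stripe profile I(x)
x = np.arange(-10.0, 11.0)
gray = 200.0 * np.exp(-(x - 0.5) ** 2 / 18.0)
res_a = quad_fit_residual(x, gray)
res_b = gauss_fit_residual(x, gray)
# On a Gaussian-shaped profile, method B's residual is far smaller than method A's
```

On real stripe sections the gap narrows, but the ordering in Table 1 (B below A on average) is what this sketch illustrates.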

### 2.2 Image enhancement algorithm for linear light stripe

*N* is the number of normal sections, and the width of the light stripe is *D* = 2*d*.

*P* = (*x*^{i}_{j}, *y*^{i}_{j}), *j* = 1, 2, …, *D*, is the *j*th point along the normal direction of the *i*th center point on the light stripe, and the gray value of *P* is *I*^{i}_{j}. *Q* is the point adjacent to *P* in the normal direction; its pixel coordinates and gray value are (*x*^{i}_{j + 1}, *y*^{i}_{j + 1}) and *I*^{i}_{j + 1}. Rounding *P* yields the point *O*, whose pixel coordinates are (*x*_{0}, *y*_{0}). The gray value *I*_{0} of *O* is obtained by linear interpolation from the gray values of *P* and *Q*. The result of the enhancement algorithm for a linear light stripe is shown in Fig. 5.
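The interpolation step above can be sketched as follows. This is a minimal illustration; the function name, image layout, and the clipping of the interpolation weight are assumptions, not details taken from the paper:

```python
import numpy as np

def write_template_along_normal(img, center, normal, template):
    """Write a template gray profile into `img` along the normal of one
    stripe center: each integer pixel O (the rounded sub-pixel point P_j)
    gets a gray value linearly interpolated between the template values
    at P_j and its neighbor Q = P_{j+1}."""
    cx, cy = center
    nx, ny = normal / np.linalg.norm(normal)
    D = len(template)          # stripe width D = 2*d
    d = D // 2
    # Sub-pixel sample positions P_j along the normal direction
    ts = np.arange(-d, D - d, dtype=float)
    xs = cx + ts * nx
    ys = cy + ts * ny
    for j in range(D - 1):
        # O: the integer pixel obtained by rounding P_j
        x0, y0 = int(round(xs[j])), int(round(ys[j]))
        if not (0 <= y0 < img.shape[0] and 0 <= x0 < img.shape[1]):
            continue
        # Signed distance of O from P_j along the normal -> interpolation weight
        t = (x0 - xs[j]) * nx + (y0 - ys[j]) * ny
        w = np.clip(t, 0.0, 1.0)
        # I_0 interpolated from the template values at P_j and Q
        img[y0, x0] = (1.0 - w) * template[j] + w * template[j + 1]
    return img

# Example: write a 4-point template across a horizontal normal
img = np.zeros((20, 20))
write_template_along_normal(img, (10.0, 10.0), np.array([1.0, 0.0]),
                            np.array([10.0, 20.0, 30.0, 40.0]))
```

With an axis-aligned normal the rounded pixels coincide with the sample points, so the template values land unchanged; for oblique normals the weight `w` blends neighboring template values.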

### 2.3 Image enhancement algorithm for arc light stripe

The gray-scale distribution of a light stripe on the surface of a cylinder differs from that of a light stripe on a plane, so this section proposes an image enhancement algorithm for arc light stripes.

First, the light stripe centers (*u*_{i}, *v*_{i}), *i* = 1, 2, …, *N*, and the normal vector (*n*^{i}_{u}, *n*^{i}_{v}) corresponding to each center can be obtained by the Steger algorithm on the better-quality image. The gray distribution in the normal direction of each center can be calculated by Eq. (3). Since the gray-scale distribution at each center of the light stripe is not the same, the relative position of each center point on the arc needs to be determined. As shown in Fig. 8, let *P*_{i} be a center point of the light stripe and *M* the starting point of the arc, i.e., the first point of the set of center points. The point *O* is the center of the arc, and its coordinates can be obtained by fitting the pixel coordinates of the centers. *α*_{i} is the angle between the straight lines *l*_{1} and *l*_{2}, and *α* = [*α*_{1}, *α*_{2}, …, *α*_{N}].

Second, the pixel coordinates of the centers *Q*_{i} = (*x*_{i}, *y*_{i}), *i* = 1, 2, …, *K*, and the corresponding normal vectors (*n*^{i}_{x}, *n*^{i}_{y}) can be obtained by the same process on the poor-quality image. The point *O*_{1} is the center of the arc, and its coordinates can be obtained by fitting the coordinates of the *Q*_{i}. Let *M*_{1} be the starting point of the stripe and *N*_{1} the end point. *β*_{i} is the angle between the straight lines *O*_{1}*M*_{1} and *O*_{1}*Q*_{i}, and *β* = [*β*_{1}, *β*_{2}, …, *β*_{K}]. Figures 8 and 9 show the positional relationships of the feature points on the good-quality and poor-quality light stripes, respectively.

Third, for each *β*_{i}, the closest value is found in the array *α*. Assuming that *α*_{j} is closest to *β*_{i}, the gray distribution corresponding to the *j*th center point of stripe 1 is used as the template. The gray values in the normal direction corresponding to *Q*_{i} on stripe 2 are replaced by the template, and the replacement process is the same as for a straight light stripe. Following these steps, the poor-quality light stripe image can be enhanced; the light stripe images before and after enhancement are shown in Fig. 10.
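The arc-center fitting and the nearest-angle matching between the two stripes can be sketched as below. The least-squares circle fit (Kåsa method) and the brute-force nearest-angle search are implementation assumptions; the paper does not spell out its fitting or search routines:

```python
import numpy as np

def fit_circle_center(pts):
    """Least-squares circle fit (Kasa method): solve
    x^2 + y^2 = 2 a x + 2 b y + c for the center (a, b)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones(len(x))])
    sol, *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return sol[0], sol[1]

def arc_angles(pts, center, start):
    """Angle of each stripe center, measured from the arc start point M
    about the fitted arc center O."""
    a0 = np.arctan2(start[1] - center[1], start[0] - center[0])
    a = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
    return np.mod(a - a0, 2.0 * np.pi)

def match_templates(alpha, beta):
    """For each angle beta_i on the poor stripe, return the index j of the
    closest alpha_j on the good stripe; the normal gray profile of that
    center is then used as the replacement template."""
    return np.argmin(np.abs(beta[:, None] - alpha[None, :]), axis=1)
```

A usage sketch: fit the center of each stripe's arc, compute both angle arrays from the respective start points, then `match_templates(alpha, beta)` gives, for every center on the poor stripe, the good-stripe center whose profile to copy.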

## 3 Method—light stripe center detection evaluation methods

Light stripe center detection is an important step in structured light measurement. Because the gray-scale distribution of light stripes differs markedly with the shape of the measured object, two evaluation methods are proposed separately for the two conditions in which the light stripe is a straight line on a plane and an arc on a cylindrical surface.

### 3.1 An evaluation method for light stripe on the plane

*L*_{1} is the light stripe on the target plane, and *L*_{2} is the light stripe on the plane of the gauge block. The straight line *l*_{1} is obtained by fitting the centers of light stripe *L*_{1}, and *l*_{2} is obtained by fitting the centers of light stripe *L*_{2}. *d*_{i} is the pixel distance from the *i*th center point to *l*_{1}, where *i* = 1, 2, …, *N*, as shown in Fig. 11b. Because *l*_{1} is parallel to *l*_{2}, the consistency of the light stripe detection algorithm can be evaluated by the variance of the array *D* = [*d*_{1}, *d*_{2}, …, *d*_{N}]: the smaller the variance, the better the consistency of the center points.
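This evaluation can be sketched directly in numpy (a minimal sketch; fitting *l*_{1} with `polyfit` and using the population variance are assumed details):

```python
import numpy as np

def parallel_line_variance(centers_l1, centers_l2):
    """Variance of the pixel distances d_i from the detected centers of
    stripe L2 to the line l1 fitted through the centers of stripe L1.
    A smaller variance means more consistent center detection."""
    # Fit l1: y = k x + b through the L1 centers
    k, b = np.polyfit(centers_l1[:, 0], centers_l1[:, 1], 1)
    x, y = centers_l2[:, 0], centers_l2[:, 1]
    # Perpendicular distance from each L2 center to k x - y + b = 0
    d = np.abs(k * x - y + b) / np.hypot(k, 1.0)
    return float(np.var(d))
```

For two perfectly parallel detected stripes every *d*_{i} is identical and the variance is zero; detection noise on either stripe raises it.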

### 3.2 An evaluation algorithm for light stripe on the cylindrical surface

An ellipse is fitted to the *i*th group of centers, and the pixel coordinates of its fitted center are (*x*^{i}_{0}, *y*^{i}_{0}). In theory, all the centers of a light stripe should lie on the same ellipse, so the ellipse centers fitted from the different groups of data points should coincide. However, the light stripe centers obtained by the detection algorithm contain some error, and the larger the error of the centers, the more dispersed the fitted ellipse centers.

The variances of the two coordinate arrays are calculated separately: the *X*-axis array is *X* = [*x*^{1}_{0}, *x*^{2}_{0}, …, *x*^{N}_{0}] and the *Y*-axis array is *Y* = [*y*^{1}_{0}, *y*^{2}_{0}, …, *y*^{N}_{0}], where *N* is the number of ellipse centers. When evaluating the light stripe center detection algorithm, the smaller the variance, the better the consistency of the light stripe centers.
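A sketch of this measure, including one way to recover a conic (ellipse) center by least squares — the "= 1" conic normalization and the grouping are assumptions, since the paper does not give its fitting method:

```python
import numpy as np

def ellipse_center(pts):
    """Fit the general conic a x^2 + b x y + c y^2 + d x + e y = 1 by
    least squares and return its center, which solves the gradient system
    2a x + b y + d = 0 and b x + 2c y + e = 0."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x ** 2, x * y, y ** 2, x, y])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones(len(x)), rcond=None)[0]
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

def center_scatter(centers):
    """S_X^2 and S_Y^2: coordinate variances of the fitted ellipse
    centers; smaller values mean more consistent stripe centers."""
    c = np.asarray(centers, dtype=float)
    return float(np.var(c[:, 0])), float(np.var(c[:, 1]))
```

Splitting the detected stripe centers into groups, fitting one conic per group with `ellipse_center`, and passing the resulting centers to `center_scatter` yields the (S_X², S_Y²) pairs reported in Table 4.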

## 4 Experimental results and discussions

Experiments are conducted to assess the utility of the proposed methods. To ensure consistency, the light stripe centers are detected with the Steger algorithm in the images both before and after optimization. The results of the image enhancement algorithms are evaluated by the methods proposed in Section 3, verifying the optimization effect.

### 4.1 The evaluation experiment for the linear light stripe

Comparison of enhanced light stripe images (the linear light stripe)

| Gauge size | \( {S}^2 \) × 10 (before enhancement) | \( {S}^2 \) × 10 (after enhancement) |
|---|---|---|
| 2 mm | 1.039 | 0.541 |
| 3 mm | 1.271 | 0.786 |
| 4 mm | 0.492 | 0.456 |
| 5 mm | 0.768 | 0.720 |
| 6 mm | 0.892 | 0.454 |
| 7 mm | 0.408 | 0.421 |
| \( {S}_{\mathrm{Mean}}^2 \) | 0.812 | 0.563 |

According to Table 3, the uniformity of the light stripe centers after enhancement is better than before processing, so the enhancement algorithm improves the image quality of the light stripe.

### 4.2 The evaluation experiment for the arc light stripe

Comparison of enhanced light stripe images (the arc light stripe)

| Number | \( {S}_X^2 \) (before) | \( {S}_Y^2 \) (before) | \( {S}_X^2 \) (after) | \( {S}_Y^2 \) (after) |
|---|---|---|---|---|
| Shaft 1 | 143.535 | 274.191 | 78.062 | 151.889 |
| Shaft 2 | 120.085 | 656.769 | 55.475 | 346.786 |
| Shaft 3 | 59.695 | 87.608 | 27.243 | 117.272 |
| Shaft 4 | 44.707 | 152.676 | 19.408 | 116.587 |
| \( {S}_{\mathrm{Mean}}^2 \) | 92.006 | 292.811 | 45.047 | 183.134 |

According to Table 4, the variances of the horizontal and vertical coordinates of the fitted centers after enhancement are smaller than before enhancement, so the enhancement algorithm clearly improves poor-quality light stripe images.

## 5 Conclusion

This paper presents light stripe image enhancement algorithms based on template matching. For a linear light stripe, the gray-scale distribution of the normal section in a good-quality stripe image is used as the matching template. For an arc light stripe, the light stripe centers are first located on the arc, and the gray-scale distribution of the normal section at the corresponding point in a good-quality stripe image is used as the template. To verify the effectiveness of the enhancement algorithms, the paper proposes two separate methods to evaluate the consistency of the light stripe centers for linear and arc light stripes. The experimental results show that the image enhancement algorithms clearly improve light stripe image quality.

## Declarations

### Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

### Funding

This work was supported in part by a grant from the Natural Science Foundation of Jilin Province (No. 2017010121JC) and the Advanced Manufacturing Project of Provincial School Construction of Jilin Province (No. SXGJSF2017-2).

### Availability of data and materials

The data are available from the authors on request.

### Authors’ contributions

All authors took part in the discussion of the work described in this paper. SL wrote the first version of the paper, and HB performed the experiments. YZ, ZZ, and QT participated in the design of the structured light measurement algorithms. FL participated in the validation experiments verifying model accuracy and organized the experimental data. SL, YZ, HB, FL, and ZZ each revised different versions of the paper. All authors read and approved the final manuscript.

### Authors’ information

Siyuan Liu was born in Shanxi, China, in 1985. He earned his Ph.D. degree in mechanical engineering from the Jilin University in 2016. He is currently working in the College of Mechanical Science and Engineering, Jilin University. His research interests include machine vision, structured light measurement, intelligent manufacturing.

Haojing Bao was born in Changchun, China, in 1986. She is currently pursuing a Ph.D. degree in the College of Mechanical Science and Engineering, Jilin University. Her research interests include machine vision, structured light measurement, and intelligent manufacturing.

Yunhui Zhang is currently a professor and a master’s tutor. She earned her Ph.D. degree in mechanical engineering from the Jilin University in 2011. She graduated in 1995 and stayed in school to teach at the College of Mechanical Science and Engineering, Jilin University. Her research interests include machine vision and digital image measurement technology.

Fenghui Lian is currently working in the school of aviation operations and services, air force aviation university, and she is currently pursuing a Ph.D. degree in the College of Mechanical Science and Engineering, Jilin University. Her research interests include image processing, machine vision.

Zhihui Zhang is currently a professor and a doctoral tutor, and is currently working in key lab of bionic engineering, Ministry of Education, Jilin University. His research interests include laser processing and bionics design.

Qingchang Tan is currently a professor and a doctoral tutor, and is currently working in the College of Mechanical Science and Engineering, Jilin University. His research interests include image processing and machine vision.

### Competing interests

There are no potential competing interests regarding this paper. All authors have seen the manuscript and approved its submission. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

1. Q. Sun, Y. Hou, Q. Tan, C. Li, Shaft diameter measurement using a digital image. Opt. Lasers Eng. **55**, 183–188 (2014)
2. G. Li, Q. Tan, Q. Sun, Y. Hou, Normal strain measurement by machine vision. Measurement **50**(4), 106–114 (2014)
3. N. Zhou, A. Zhang, F. Zheng, L. Gong, Novel image compression–encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing. Opt. Laser Technol. **62**(10), 152–160 (2014)
4. S. Liu, Q. Tan, Y. Zhang, Shaft diameter measurement using structured light vision. Sensors **15**(8), 19750–19767 (2015)
5. Z. Xiao-dong, Z. Ya-chao, T. Qing-chang, et al., New method of cylindricity measurement based on structured light vision technology. J. Jilin Univ. (Engineering and Technology Edition) **47**(2), 524–529 (2017)
6. P. Zhou, Y. Yu, W. Cai, S. He, G. Zhou, Non-iterative three-dimensional reconstruction method of the structured light system based on polynomial distortion representation. Opt. Lasers Eng. **100**, 216–225 (2018)
7. C. Rosales-Guzmán, N. Hermosa, A. Belmonte, J.P. Torres, Measuring the translational and rotational velocities of particles in helical motion using structured light. Opt. Express **22**(13), 16504–16509 (2014)
8. R.Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. **3**(4), 323–344 (1987)
9. L. Qi, Y. Zhang, X. Zhang, S. Wang, F. Xie, Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm. Opt. Express **21**(11), 13442–13449 (2013)
10. C. Steger, An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. **20**(2), 113–125 (1998)
11. H. Cai, Z. Feng, Z. Huang, Centerline extraction of structured light stripe based on principal component analysis. Chin. J. Lasers **42**(3), 270–275 (2015)
12. Q. Sun, J. Chen, C. Li, A robust method to extract a laser stripe centre based on grey level moment. Opt. Lasers Eng. **67**, 122–127 (2015)
13. A. Polesel, G. Ramponi, V.J. Mathews, Image enhancement via adaptive unsharp masking. IEEE Trans. Image Process. **9**(3), 505–510 (2000)
14. K.G. Lore, A. Akintayo, S. Sarkar, LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recogn. **61**, 650–662 (2017)
15. Y. Wang, Q. Chen, B. Zhang, Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. **45**(1), 68–75 (1999)
16. M. Li, J. Liu, W. Yang, X. Sun, Z. Guo, Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. **27**(6), 2828–2841 (2018)
17. X. Pan, Z. Liu, High dynamic stripe image enhancement for reliable center extraction in complex environment, in Proc. Int. Conf., 135–139 (2017). https://dl.acm.org/citation.cfm?id=3177406