Application of machine vision technology in geometric dimension measurement of small parts
EURASIP Journal on Image and Video Processing volume 2018, Article number: 127 (2018)
Abstract
In this paper, an online system for measuring the dimensions of small parts based on machine vision is designed, and the key technologies involved, such as image processing, image registration and stitching, edge detection, subpixel location analysis, image feature recognition and clustering, and image-based scale measurement, are studied. First, based on actual usage and the characteristics of the algorithms, the feature-based SIFT algorithm was selected to complete image registration, and the image edge detection algorithm and data processing method were explored. Histogram equalization improves the grayscale distribution and contrast of the stitched image. A median filtering algorithm was used to reduce image noise. False edges were filtered out to obtain single-pixel edges, and the least squares method was used to compensate for missing edge pixels and reduce measurement error. An accurate image registration transformation matrix was obtained. Then, a weighted average fusion algorithm was used to complete the image fusion. The experimental results show that the image stitching algorithm is accurate and effective and that the measurement accuracy of the system meets the performance requirements.
Introduction
With the development of science and technology, traditional manual inspection has been unable to meet modern needs, while machine vision inspection technology has been promoted and applied in industry for its high-precision, low-cost advantages. The basic principle of machine vision [1] is to analyze the image of a measured target obtained by a computer vision system so as to extract the required measurement information and to determine, based on prior knowledge, whether the target complies with the specification. Machine vision includes capturing digital video and using algorithms that can detect and analyze specific objects and patterns of interest [2]; the work involved includes the design and selection of hardware, image stitching and preprocessing algorithms, image edge detection algorithms, data processing methods, and so on.
Chen [3] briefly introduced machine vision inspection technology and then, taking a non-woven mask factory as an example, expounded its application. Calderon-Cordova et al. proposed an architecture and program for board game design based on computer vision [4]. Chen et al. surveyed current applications of machine vision technology in agriculture [5]. Morison et al. described the development of a senior undergraduate course introducing machine vision and image processing algorithms and their implementation [6]. Choi and Kim proposed the design of a structural safety inspection system based on computer vision [7].
Inspection of small parts covers geometric dimensions, surface finish, surface texture, and surface defects. At present, the detection of small parts is carried out mainly by machine vision, and machine vision detection technology has been widely applied in these areas.
Pavim established a flexible detection system for small-batch production by combining sensor data [8]. Jin et al. proposed a method of using computer vision technology to detect defects in small ceramic tubes [9], developing an automatic detection system with computer vision and electromechanical technologies to achieve automatic detection and classification of small ceramic tubes. Böhm and Jech [10] scanned surfaces with a compact scattered-light sensor using a red LED as the light source; the measured intensity distribution of the scattered light depends on the topography of the reflective surface, and the system can detect scratches on high-gloss metal sheet with a typical width of 1 μm and depth of 40 nm. Kawasue and Komatsu [11] proposed a mobile robot equipped with two lasers and a CCD camera for pipeline inspection; the robot can reconstruct the 3D shape of a sewage pipeline, and since the tilt of the robot relative to the pipe axis appears as a deformation between the two circular stripes, the shape of the pipe can be measured accurately regardless of the robot's inclination. Li et al. [12] applied machine vision technology to automate the inspection and condition monitoring of railway tracks. An imaging setup capturing multiple video streams was proposed for detecting important rail assemblies such as backing plates, nails, anchor rods, and connecting rod bolts, and for defect identification such as raised spikes, defect severity analysis, time-condition analysis, and long-term forecast evaluation. An average component detection rate of 98.2% was initially achieved, with a false-positive rate of 1.57% and a false-negative rate of 1.78%. He and Wang [13] developed an improved small-defect detection algorithm for real-time leather surface inspection using an enhancement method based on the discrete cosine transform (DCT). The experimental results show that the proposed method achieves a 94% detection rate for small defects.
With the continuous improvement of precision requirements in industrial inspection, the basic pixel-level accuracy of machine vision recognition can no longer meet the requirements of actual measurement, so a more accurate edge extraction algorithm [14, 15], that is, a subpixel algorithm [16], is needed. A subpixel algorithm [17] generally first finds the position of an edge pixel using a classical algorithm and then uses the gray values of surrounding pixels [18] as supplementary information, applying interpolation, fitting, and similar techniques [19,20,21] to locate the edge more precisely.
Based on a discussion of the accuracy, noise immunity, and computational cost of common subpixel location algorithms [22,23,24], this paper proposes an improved least squares subdivision method [25,26,27] for subpixel fitting of straight lines [28,29,30]. Three standard nuts are used as the measurement samples. The inner circle of each test sample is located at subpixel accuracy using quadratic polynomial interpolation based on the Canny operator, both before and after the improvement; the subpixel coordinates are extracted, and the least squares method is used to fit the circle diameter. The error of the original subpixel algorithm is 0.0314, and the error of the improved subpixel algorithm is 0.0108. The measured value is 3.392, and the average value obtained by machine vision recognition is 3.403. By comparison, the improved subpixel method improves both positioning accuracy and operation speed.
Proposed method
The key to the detection of small part dimension is vision technology, including image acquisition, image edge detection and subpixel positioning, image stitching, and dimension estimation.
Image acquisition
The visual image seen by the eye is continuous, but a computer can only process discrete data, so the visual inspection system must convert the continuous image function into a discrete data set. This process is called digitization of the image, and it generates a two-dimensional matrix in the computer. The digitization process consists of three steps: scanning, sampling, and quantization.
Image acquisition converts the visual image and intrinsic features of the measured object into a series of discrete data that the computer can process. Scanning an image along a rectangular grid generates a two-dimensional integer matrix corresponding to the image: the position of each element (pixel) in the matrix is determined by the scanning order, and the gray value of each pixel is determined by sampling and quantization, which give an integer representation of each pixel's gray value. The result of image acquisition is therefore a digitized version of a naturally continuous image, i.e., a digital image. It has two common representations.
(1) Array representation of grayscale images
Suppose a continuous image f(x, y) is sampled at equal intervals and arranged as an M × N array (generally square, N × N); each element of the array is a discrete value called a pixel. In digital image processing, it is generally assumed that the array size N and the number of gray levels G are both integer powers of two, as shown in Eq. 1:

f(x, y) ≈ [f(i, j)]_{N×N}, N = 2^n, G = 2^m  (1)
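As a small illustration of the sampling-and-quantization step described above, the sketch below maps continuous intensity samples in [0, 1) to 2^m integer gray levels. This is a minimal Python example (the paper's experiments used MATLAB); the function name and the 2 × 2 sample grid are ours, not the paper's.

```python
import numpy as np

def quantize(samples, m=3):
    """Quantize continuous intensity samples in [0, 1) to 2**m gray
    levels, giving the integer pixel values of the digital image."""
    levels = 2 ** m
    return np.minimum((np.asarray(samples) * levels).astype(int), levels - 1)

# Sampling f(x, y) on a 2x2 grid, then quantizing to 2**3 = 8 gray levels.
samples = np.array([[0.00, 0.49],
                    [0.50, 0.99]])
pixels = quantize(samples, m=3)
```

With m = 3 the gray levels run from 0 to 7, so the four samples fall into levels 0, 3, 4, and 7.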
(2) Binary image representation
Binarization renders the entire image with a clear black-and-white effect. A 256-level grayscale image is thresholded with an appropriate value to obtain a binary image that still reflects the overall and local characteristics of the original. If a particular object has a uniform gray value inside and lies on a uniform background with a different gray level, the threshold method yields a good segmentation. When the difference between the object and the background is not reflected in the gray value, that distinguishing feature can first be converted to a grayscale difference, and the threshold selection technique can then be used to segment the image. By adjusting the threshold dynamically, the segmentation result can be observed interactively until a satisfactory binary image is achieved.
In digital image processing, grayscale images are often converted to binary images to reduce the amount of computation. A so-called binary image has only two gray levels (black and white), that is, each pixel's gray value is 1 or 0.
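The thresholding step above can be sketched in a few lines. This is a minimal Python illustration (the paper itself used MATLAB), assuming a hypothetical threshold of 100; the function name is ours.

```python
import numpy as np

def binarize(gray, threshold=100):
    """Map a 256-level grayscale image to a binary image:
    pixels at or above the threshold become 1, the rest 0."""
    return (np.asarray(gray) >= threshold).astype(np.uint8)

# A 2x3 grayscale patch: bright object pixels on a dark background.
patch = np.array([[10, 200, 250],
                  [30, 180,  20]], dtype=np.uint8)
binary = binarize(patch, threshold=100)
```

In practice the threshold would be chosen (or swept interactively, as the text describes) rather than fixed.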
Image edge detection
The visual measurement of mechanical parts is generally carried out in a carefully designed, controllable environment. The acquired image of the measured part has a large contrast between target and background, and the signal-to-noise ratio is relatively high. Such an image is processed by an edge detection operator to obtain single-pixel edge points. The edges of most mechanical parts are composed of straight lines and arcs. The linear parameters are usually determined by least-squares linear regression, which is the most commonly used subpixel detection method for line edges; it can calculate the boundary of the straight-line portion of the edge image in one pass, so positioning is fast. The edge points of a straight line can be regarded as a set of vectors, and the edge can be accurately located by least-squares straight-line fitting.
Let the linear regression equation be:

y = a_0 + a_1 x  (2)

In Eq. 2, a_0 and a_1 are the constant and coefficient of the regression equation. When x takes the values x_1, x_2, …, x_n, the corresponding regression values are:

ŷ_i = a_0 + a_1 x_i, i = 1, 2, …, n  (3)

According to the extremum conditions on the sum of squared residuals Q = Σ_{i=1}^{n} (y_i − a_0 − a_1 x_i)², there are:

∂Q/∂a_0 = −2 Σ_{i=1}^{n} (y_i − a_0 − a_1 x_i) = 0,  ∂Q/∂a_1 = −2 Σ_{i=1}^{n} x_i (y_i − a_0 − a_1 x_i) = 0  (4)

Solving the above equations gives:

a_1 = (n Σ x_i y_i − Σ x_i Σ y_i) / (n Σ x_i² − (Σ x_i)²),  a_0 = (Σ y_i − a_1 Σ x_i) / n  (5)
From Eq. 5, the constant and coefficient of the regression equation are obtained, and the fitted line of Eq. 2 gives the subpixel position passing through the straight-line edge pixels. The positioning accuracy of the subpixel fit depends on the number of points involved and on noise: the more points participate in the fit, the more accurate the subpixel positioning of the line, but the longer the calculation takes; and the smaller the noise, the higher the positioning accuracy.
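The closed-form solution of Eq. 5 is straightforward to implement. A minimal Python sketch (the paper used MATLAB; the function name and sample points are ours):

```python
def fit_line(points):
    """Least-squares regression y = a0 + a1*x over edge points (x_i, y_i),
    using the closed-form solution of the normal equations (Eq. 5)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = (sy - a1 * sx) / n
    return a0, a1

# Noisy samples of roughly y = 1 + 2x; the fit recovers nearby parameters.
pts = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)]
a0, a1 = fit_line(pts)
```

For these points the fit gives a0 = 1.03 and a1 = 1.98, close to the underlying line.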
Image stitching
Image matching is a key step before stitching. The purpose is to find the overlapping part of adjacent images, that is, to achieve image alignment; the essence is to find the internal relationship between the two images, i.e., the correspondence between the two sets of data. Matching based on the similarity between images can be divided into feature-based and non-feature methods. A feature-based method selects a series of features from the two images, performs feature matching between the images according to a similarity principle, and finally uses an optimization process to compute the transformation from one image to the other. The features can be feature points or feature marks; a feature point can be a corner of the image, the center of gravity of a closed contour, a texture feature point, or another prominent point. The key issue is to find features that fit the target. Non-feature matching is alignment based on the correlation principle: a least-squares difference function of some parameters is used to find the registration region of the image, for example by directly searching the overlapping region for the area with the smallest grayscale difference. Although it does not rely on external target features, it requires that the changes in grayscale, rotation, and deformation between the images be small.
Szeliski proposed an image smoothing algorithm: the gray value of each pixel in the overlapping region is obtained by a weighted average of the gray values of the corresponding pixels in the two images. The specific method is as follows:
(1) Project image B onto the coordinate system of image A, as shown in Fig. 1. Areas A1 and B2 do not belong to the overlapping area; the pixels of the merged image there take the values of A1 and B2, respectively.
(2) The overlapping areas A2 and B1 represent the same physical scene and have the same geometric properties after registration. In the overlap region, take the gray-weighted mean: G = d_1 × A2 + d_2 × B1, with d_1 ∈ (0, 1) and d_1 + d_2 = 1; d_1 is related to the abscissa of the pixel and changes gradually in the direction from A to B.
(3) Divide the overlap of A2 and B1 into three columns L1, L2, and L3, taking (L1 = L3) > L2.
(4) Let the resulting image after fusion be R and set a threshold Th. In L1, if A2 − G < Th, then R = G; otherwise R = A2. In L2, if max(A2, B1) − G < Th, then R = G; otherwise R = max(A2, B1). In L3, if B1 − G < Th, then R = G; otherwise R = B1.
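The weighted-average step (2) can be sketched as follows. This is a minimal Python illustration (the paper used MATLAB), assuming the usual fade-in/fade-out convention in which the weight of image A falls linearly from 1 to 0 across the overlap columns so that the fused region joins both images seamlessly; the function name and sample arrays are ours.

```python
import numpy as np

def blend_overlap(a2, b1):
    """Weighted-average fusion of the registered overlap region:
    G = d1*A2 + d2*B1 with d1 + d2 = 1, where d1 falls linearly
    from 1 to 0 across the columns (moving from image A toward B)."""
    a2 = np.asarray(a2, dtype=float)
    b1 = np.asarray(b1, dtype=float)
    d1 = np.linspace(1.0, 0.0, a2.shape[1])  # one weight per column
    return d1 * a2 + (1.0 - d1) * b1

overlap_a = np.full((2, 3), 100.0)  # overlap as seen in image A
overlap_b = np.full((2, 3), 200.0)  # same scene as seen in image B
fused = blend_overlap(overlap_a, overlap_b)
```

For these constant patches the fused columns ramp from 100 through 150 to 200, matching image A at one edge of the overlap and image B at the other.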
Defect feature extraction
After obtaining the stitched image of the surface of the part, the global image of a surface with defects is extracted for edge extraction of the defects. Once surface defects of the test sample are detected, they are classified; first, feature extraction is performed on the defects. Feature extraction is an important part of image pattern recognition: its purpose is to extract a set of features that is as concise and effective as possible, in order to improve the recognition performance and efficiency of the classifier. In this paper, subpixel positioning is applied to the defect boundary in the global image of the part surface, and a fuzzy clustering algorithm is used to identify and partition the defects. Fuzzy clustering is a partition-based clustering method whose idea is to maximize the similarity among objects assigned to the same cluster while minimizing the similarity between different clusters. FCM (fuzzy C-means) clustering uses fuzzy partitioning so that each data point belongs to each group with a membership degree in [0, 1]. The general form of the objective function of FCM is given in Eq. 6:

J(U, c_1, …, c_c) = Σ_{i=1}^{c} Σ_{j=1}^{n} u_{ij}^m d_{ij}²  (6)

where u_{ij} ∈ [0, 1], c_i is the cluster center of fuzzy group i, d_{ij} = ‖c_i − x_j‖ is the Euclidean distance between the ith cluster center and the jth data point, and m ∈ [1, ∞) is a weighting exponent. By constructing an augmented objective function such as Eq. 7, the necessary conditions for Eq. 6 to reach a minimum can be obtained:

J̄(U, c_1, …, c_c, λ_1, …, λ_n) = Σ_{i=1}^{c} Σ_{j=1}^{n} u_{ij}^m d_{ij}² + Σ_{j=1}^{n} λ_j (Σ_{i=1}^{c} u_{ij} − 1)  (7)
The algorithm can also initialize the cluster centers first and then run the iterative process. Because FCM is not guaranteed to converge to a globally optimal solution, its performance depends on the initial cluster centers. Therefore, either another fast algorithm is used to determine the initial cluster centers, or the algorithm is started with different initial centers and FCM is run multiple times.
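The alternating updates that follow from the necessary conditions of Eq. 6 can be sketched as below. This is a minimal Python illustration of standard FCM (the paper used MATLAB); the function name, random initialization, and toy data are ours, not the paper's.

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-means: alternate the necessary-condition updates for the
    cluster centers c_i and memberships u_ij that minimize
    J = sum_i sum_j u_ij^m * ||c_i - x_j||^2 (Eq. 6)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(data, dtype=float)          # shape (n_points, n_features)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                         # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(centers[:, None, :] - x[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)              # membership update
    return centers, u

# Two well-separated point groups; FCM places one center near each.
pts = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
centers, u = fcm(pts, c=2)
```

The membership columns always sum to one, and the two centers converge near (0.05, 0) and (5.05, 5).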
The work of this paper is to detect the thread geometry of nuts and bolts and their defects, both based on machine vision. The images acquired by the camera are processed by a program written in image processing software. The image processing includes grayscale conversion, binarization, edge detection, and subpixel edge detection, achieving the final goal of detecting component defects and dimensions. The edge detection algorithm and data processing method are explored.
Experimental results
In a visual inspection system, the correspondence between pixel locations in the image and points on the surface of the part must be established in order to obtain geometric dimensions from the image of the small part. This correspondence is determined by the imaging model of the system, whose parameters are the camera parameters. The measurement system captures an image reflecting the surface characteristics of the part through a CCD camera; by processing the acquired image data, the computer can quickly and accurately calculate the geometric dimensions of the small part, and the results are displayed directly. The overall schematic diagram of the machine vision system is shown in Fig. 2.
The dimensions of small parts comprise dimensions of form and dimensions of position. For thin plate parts, the hole diameter and hole center distance are the parameters to be measured; for threaded parts, the inner diameter, outer diameter, and thread pitch are the important parameters. To verify the accuracy and repeatability of the measurement, the test samples shown in Fig. 3 are used: three standard nuts (H1, H2, H3) provide the measurement data for the inner-circle diameter.
The edge detection method in this paper combines the Canny operator with improved quadratic polynomial interpolation. First, the Canny operator is used to coarsely locate the edges; then, the gradient image is calculated by applying a template operator; finally, quadratic polynomial interpolation is performed, that is, the improved quadratic polynomial interpolation based on the Canny operator. Following the coordinates of F1, the adjacent values F0 and F2 are found, the subpixel edges are detected, and the coordinates of the edge points are obtained, as shown in Table 1.
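One common form of the quadratic interpolation step can be sketched as follows: given the gradient magnitudes F0, F1, F2 at three adjacent pixels, with F1 the coarse Canny maximum, a parabola fitted through them has its peak at a subpixel offset from the center pixel. This is a minimal Python sketch under that assumption (the paper's improved interpolation may differ in detail; the function name and sample values are ours).

```python
def subpixel_offset(f0, f1, f2):
    """Fit a parabola through the gradient magnitudes at three adjacent
    pixels (f1 is the coarse Canny maximum) and return the offset of the
    parabola's peak from the center pixel, in the range (-0.5, 0.5)."""
    denom = f0 - 2.0 * f1 + f2
    if denom == 0.0:
        return 0.0  # flat profile: keep the pixel-level position
    return 0.5 * (f0 - f2) / denom

# Gradient profile whose true peak lies 0.25 px right of the center pixel.
offset = subpixel_offset(8.4375, 9.9375, 9.4375)
```

The three sample values are exact readings of the parabola −(x − 0.25)² + 10 at x = −1, 0, 1, so the recovered offset is exactly 0.25.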
Discussion
In this paper, the basic process of detecting machine parts based on machine vision is shown in Fig. 4.
In this paper, a five-megapixel camera of practical industrial value is used to analyze subpixel accuracy; the results after binarization of the sample images are shown in Fig. 5. Quadratic polynomial interpolation based on the Canny operator, before and after the improvement, is used to locate the inner and outer diameters of the test sample at subpixel accuracy, as shown in Fig. 6.
Whether the least-squares method commonly used for circle fitting or the algorithm of this paper is used, a single-pixel edge is required. For convenience of processing, the system first divides the circle in the image into two parts with a straight line passing through the center and perpendicular to the y-axis, as shown in Fig. 7. After single-pixel refinement, as shown in Fig. 8, the edges of the image are optimized to the desired single-pixel edges and are relatively complete, facilitating the calculation of the inner hole diameter in the next step.
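One standard least-squares circle fit (the algebraic form, solving a linear system rather than iterating) can be sketched as below. This is a minimal Python illustration (the paper used MATLAB and does not specify which least-squares variant it applies); the function name and synthetic points are ours.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit: solve for D, E, F in
    x^2 + y^2 + D*x + E*y + F = 0, then recover center and diameter."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), 2.0 * radius  # center, diameter

# Subpixel edge points sampled on a circle of radius 1.7 centered at (2, 3).
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([2 + 1.7 * np.cos(theta), 3 + 1.7 * np.sin(theta)])
(cx, cy), diameter = fit_circle(pts)
```

Because the equation is linear in D, E, F, the fit needs no initial guess, which is why this form is popular for measuring hole diameters from edge points.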
(1) Two sets of comparison experiments are designed. The previous algorithm and the improved algorithm are implemented and run in MATLAB; through visual and numerical comparison, it is judged whether the improved algorithm achieves the desired effect. Subpixel coordinates are extracted and a circle is fitted by least squares to find its diameter. If higher measurement accuracy is needed, the system can also use partial close-ups to capture images. Five measurements of the sample data using machine vision are shown in Fig. 9, which compares three values: the standard value, the value measured with the original subpixel algorithm, and the value measured with the improved algorithm of this paper. The measurement errors of the original subpixel algorithm are 0.0314, 0.0304, 0.0304, 0.0315, and 0.0305, while the errors of the improved algorithm are 0.0108, 0.0110, 0.0107, 0.0108, and 0.0108; the average error improves from 0.0305 to 0.0108. Under the same threshold conditions, the improved method detects more subpixel edge points, the edges look finer, and the improved edge detection effect is better than the original.
(2) The inner circles of the three nut samples are located at subpixel accuracy using this algorithm, the subpixel coordinates are extracted, and the circle diameter is obtained by least-squares circle fitting. The measured inner diameter data are shown in Table 2. The averages of the three measured samples (actual values in parentheses) are 3.403 (3.372), 3.407 (3.413), and 3.407 (3.418), and the average errors are 0.021, 0.015, and 0.011, respectively. According to the test results, the detection method proposed in this paper satisfies the detection requirements.
Conclusions
This paper discusses the measurement of microminiature parts based on machine vision. The quadratic polynomial interpolation subpixel algorithm is analyzed and optimized: the Canny operator is used for coarse edge positioning, and the improved quadratic polynomial interpolation is used for fine subpixel positioning. Non-contact measurement of critical part dimensions is completed based on least-squares line fitting and circle fitting, and the proposed method meets the testing requirements. The measurement range is extended while ensuring measurement accuracy.
From the above results, the accuracy of subpixel edge detection based on the improved quadratic polynomial interpolation is higher than that of the original interpolation: the average error is reduced from 0.0305 to 0.0108. At the same time, as the number of measurements increases, the error of the test sample decreases; the difference between the measured average and the actual value becomes smaller, and the measurement accuracy improves. The results show that the requirements of engineering practice are met in both detection efficiency and measurement accuracy.
Abbreviations
DCT: Discrete cosine transform
FCM: Fuzzy C-means clustering
References
 1.
M. Sonka, V. Hlavac, R. Boyle, Image processing, analysis, and machine vision[J]. J Electron Imaging xix(82), 685–686 (2008)
 2.
D. Michael, Machine Vision Methods for Identifying Extrema of Objects in Rotated Reference Frames[J] (1999)
 3.
Y. Chen, Machine vision inspection technology in industrial applications detection[J]. Electronic Test 18, 79–80 (2015)
 4.
C. Calderon-Cordova, M. Guajala, R. Barba, et al., Design of a machine vision applied to educational board game[C]// International Conference on Information Technology and Science (2016)
 5.
Y.R. Chen, K. Chao, M.S. Kim, Machine vision technology for agricultural applications ☆[J]. Comput Electron Agric 36(2), 173–191 (2002)
 6.
G. Morison, M.D. Jenkins, T. Buggy, et al., An implementation focused approach to teaching image processing and machine vision  from theory to beagleboard[C]// Education and Research Conference. IEEE, 274–277 (2014)
 7.
S.S. Choi, E.K. Kim, Design and implementation of visionbased structural safety inspection system using small unmanned aircraft[C]// International Conference on Advanced Communication Technology. IEEE, 562–567 (2015)
 8.
A. Pavim, Fusion of micrometrology techniques for the flexible inspection of MEMS/MOEMS assembly[J]. Proc SPIE Int. Soc. Opt. Eng 6995(2), 153–162 (2008)
 9.
H.G. Jin, X.B. Zhou, X.W. Zeng, A system for detecting cigarette cartons with missing packs based on machine vision technology[J]. J Anhui Vocat Coll Electron Inf Technol 15(3) (2016)
 10.
J. Böhm, M. Jech, M. Vellekoop, Analysis of NMscale scratches on highgloss tribological surfaces by using an angleresolved light scattering method[J]. Tribol. Lett. 37(2), 209–214 (2010)
 11.
K. Kawasue, T. Komatsu, Shape measurement of a sewer pipe using a mobile robot with computer vision[J]. Int. J. Adv. Robot. Syst. 10(1), 1 (2013)
 12.
Li Y, Otto C, Haas N, et al. Componentbased track inspection using machinevision technology[C]// International Conference on Multimedia Retrieval, ICMR 2011, Trento, Italy, April. DBLP, 2011:1–8
 13.
F. He, W. Wang, Detection of small surface defects using DCT based enhancement approach in machine vision systems[C]// ICMIT 2005: Mechatronics, MEMS, and Smart Materials, 60400H (2005)
 14.
Q. Xia, Y. K. Chen, Z. X. Zhang, et al. Edge Extraction Algorithm of Infrared Thermal Image Based on Laplace Operator and Gray Theory[J]. Infrared Technology, 36(5), 377376 (2014)
 15.
Y. Zhang, K. Lu, Y.H. Gao, QSobel: a novel quantum image edge extraction algorithm[J]. Science China Information Sciences 58(1), 012106 (2015)
 16.
Q.C. Sun, Q.C. Tan, et al., A subpixel edge detection algorithm[J]. Journal of Beijing University of Technology 35(10), 1332–1337 (2009)
 17.
P. Luciani, D. Chen, The impact of image and class structure upon subpixel mapping accuracy using the pixelswapping algorithm[J]. Geogr Inf. Sci 17(1), 31–42 (2011)
 18.
C.T. Huang, O.R. Mitchell, A Euclidean distance transform using grayscale morphology decomposition[J]. IEEE Trans Pattern Anal Mach Intell 16(4), 443–448 (1994)
 19.
A. Sezer, H.B. Sezer, S. Albayrak, Hermitebased texture feature extraction for classification of humeral head in proton densityweighted MR images. Neural Comput. & Applic. 28(10), 3021–3033 (2017)
 20.
Y. Wang, Y. Zhou, D. Liu, Z. Tang, Comic storyboard extraction via edge segment analysis. Multimedia Tools Appl. 75(5), 2637–2654 (2016)
 21.
M.R. Anderson, Remark on “algorithm 433: Interpolation and smooth curve fitting based on local procedures [E2]”[J]. ACM Trans. Math. Softw. 2(2), 208 (1976)
 22.
M. Ganesh, M. Naresh, C. Arvind, MRI brain image segmentation using enhanced adaptive fuzzy Kmeans algorithm. Int Automation Soft Comput 23(2), 325–330 (2017)
 23.
M. Kumar, Y.H. Mao, Y.H. Wang, T.R. Qiu, C. Yang, W.P. Zhang, Fuzzy theoretic approach to signals and systems: Static systems. Inf. Sci 418, 668–702 (2017)
 24.
K. Chen, H.J. Feng, Z.H. Xu, et al., Subpixel location algorithm for planetary center measurement[J]. Optics & Precision Engineering 21(7), 1881–1890 (2013)
 25.
R.H. Bartels, G.H. Golub, F.F. Samavati, Some observations on local least squares[J]. BIT Numer. Math. 46(3), 455–477 (2006)
 26.
R.H. Bartels, F.F. Samavati, Reversing subdivision rules: local linear conditions and observations on inner products[J]. J Comput Appl Math 119(1), 29–67 (2000)
 27.
N. Cardiel, Data boundary fitting using a generalized least-squares method[J]. Mon. Not. R. Astron. Soc. 396(2), 680–695 (2009)
 28.
A. Rosenfeld, M. Thurston, Edge and curve detection for visual scene analysis[J]. IEEE Trans. Comput. 20(5), 562–569 (1970)
 29.
S.B. Qu, Y.T. Zhou, Application of edge detection method in linear array CCD[J]. Transducer & Microsystem Technologies 32(8), 149–152 (2013)
 30.
C. Wang, Y.G. Wang, Comparison of algorithms for subpixel edge detection based on fitting[J]. Software Guide 17(2), 60–67 (2018)
Acknowledgements
The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
Funding
This work was supported by research development foundation of Tianjin University of Technology and Education Grant (No.KJ1502).
Availability of data and materials
Please contact the author for data requests.
About the authors
Li Bin was born in Sanhe, Hebei, P.R. China, in 1980. He received the master's degree from Hebei University of Technology, P.R. China. He now works in the School of Mechanical Engineering, Tianjin University of Technology and Education. His research interests include computational intelligence, robotics, and reliability analysis.
Email: libinfly@163.com
Author information
Affiliations
Contributions
The author BL wrote the paper. The author read and approved the final manuscript.
Corresponding author
Correspondence to Bin Li.
Ethics declarations
Competing interests
The author declares that he has no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords
 Machine vision
 Small parts
 Edge detection
 Image feature recognition