Research on geometric dimension measurement system of shaft parts based on machine vision
EURASIP Journal on Image and Video Processing volume 2018, Article number: 101 (2018)
Abstract
Computer vision measurement systems are increasingly used in industrial production processes, where traditional manual measurement methods cannot guarantee product quality. It is therefore of great significance for the manufacturing industry to study low-cost, high-precision, high-efficiency automatic measurement systems for the dimensions of shaft parts. This paper presents a geometric dimension measurement system for shaft parts based on machine vision. A CCD camera acquires the image, which is first preprocessed; in view of noise and other influences, wavelet denoising is used to denoise the image. Then, an improved subpixel edge detection method based on the Canny operator is proposed to extract the edge contour of the part image. Finally, a geometric-quantity algorithm is applied to the measurement, and the measured data are obtained and analyzed. The experimental results show that the repeatability error of the system is less than 0.01 mm.
Introduction
Machine vision is among the most promising new technologies in the field of precision testing. It mainly uses computers to simulate or reproduce certain intelligent behaviors related to human vision: information is extracted from images of objective things, processed, and understood, and finally used for actual detection and control [1]. It has a very wide range of applications in fields such as industry and agriculture, and has been widely applied in the automobile, electronics, and electrical industries [2]. At present, applications of industrial vision systems can be roughly divided into three directions: automatic detection, intelligent assembly, and visual servo systems [3]. Within automatic detection, machine vision covers geometric measurement testing and automatic visual recognition testing [4], and geometric measurement technology is an indispensable part of manufacturing technology. In general industrial production, part dimensions are mostly measured offline or by spot inspection. However, as demands on product quality increase, dimension measurement tends toward 100% inspection, which places higher requirements on inspection efficiency and methods [5]. Geometric measurement with general optical instruments involves a cumbersome reading process and long measurement times, subjective operator error is large, and the level of automation is low. Introducing machine vision into geometric measurement enables rapid measurement of the dimensions or relative positions of objects (products or parts). It is real-time, online, non-contact, fast, flexible, and highly precise. It saves time and labor, avoids human errors in the measurement process, supports continuous production, and improves the automation level of production [6].
Shaft parts are among the most common parts in the machinery industry. Their geometric dimensions and precision not only directly affect mechanical performance and service life but also have an important impact on reducing energy consumption and environmental pollution [7]. In most machining enterprises in China, product dimensions are still measured with traditional measuring tools, such as V-blocks with dial indicators, vernier calipers, and micrometers. This traditional manual measurement method depends strongly on the operator: labor intensity is high, efficiency is low, product quality cannot be effectively guaranteed, and many human errors may occur, so it is difficult to meet the requirements of high-volume, high-efficiency, high-precision product inspection [8]. Therefore, studying low-cost, high-precision, high-efficiency automatic measurement systems for shaft parts is of great significance for improving the technical level of China's machinery and equipment manufacturing industry. At present, there are many studies of dimension measurement systems based on machine vision. For example, Guo Chao [9] and others built a simple machine vision system with a CMOS camera and used NI Vision Assistant software to measure the linear and arc dimensions of a test piece. Li Xuejiao [10] and others developed a machine-vision measurement system for large parts, aiming at the limitations of traditional contact measurement methods such as the need for part contact, long measurement times, and human factors. Their system preprocessed and stitched the images, used the Canny algorithm to extract the edge contour of the part image, and finally measured the geometry of the extracted edge contour experimentally. Experiments showed that the system can quickly measure part dimensions within the allowable error range.
To combine the newest non-contact measurement techniques with the problem of part dimension measurement, Tian Yuanyuan [11] and others proposed a non-contact measurement scheme based on machine vision. First, super-resolution reconstruction is used to eliminate the noise and blur caused by the limited detector size and the optical elements, recovering more detail and information from the captured images. Second, a least-squares-regression subpixel edge detection technique is used to locate edges and extract corner points. Experiments show that the method is accurate and that the measurement accuracy is stable. Based on the geometric characteristics of shafts and images captured by a CCD camera, Qi Xiaoling [12] and others studied edge detection for shafts using image processing and proposed an improved adaptive switching median filtering method that delivers better performance. A dimension measurement system for shaft dimensions and parameters was developed on this basis. Their experimental results indicate that the filter better preserves the 2D edge structures of the image and that fast, accurate measurement of the basic parameters of a shaft can be realized by non-contact machine vision within the permissible error range. A large number of studies show that measurement systems based on machine vision can measure parts effectively and quickly with high accuracy.
Building on other machine-vision dimension measurement systems and on the geometric features of shaft parts, this paper presents a geometric dimension measurement system for shaft parts based on machine vision. A CCD camera acquires the image, which is first preprocessed; in view of noise and other influences, wavelet denoising is applied. Then, an improved subpixel edge detection method based on the Canny operator is proposed to extract the edge contour of the part image. Finally, a geometric-quantity algorithm is applied to the measurement, and the measured data are obtained and analyzed.
System design method
System overall design
The machine vision measurement system is composed of two parts: the hardware system and the software system. The hardware system mainly completes image acquisition and result output; the software system mainly completes image preprocessing and dimension measurement. The measurement system provides four main functions: image acquisition, image analysis and processing, dimension measurement, and result output [13]. The hardware of the image acquisition part mainly includes the light source, lens, camera, and image acquisition card. The image acquisition part forms an optical image of the target object's characteristic information; the image sensor converts the optical signal into an electrical signal, which the image acquisition card transmits to the computer. The image analysis and processing part is realized by image processing and analysis software on a personal computer platform. The result output part displays the measurement result on the computer monitor, and the dimension measurement is completed by the corresponding software. The process of the whole system is shown in Fig. 1.
The system first collects image data through the image acquisition part, then converts the image data to grayscale and denoises them through the image analysis and preprocessing program. Next, the column height and the bottom radius of the part are calculated by the dimension measurement program. To obtain the specific geometric size of the part, the system must also be calibrated. Finally, the size of the part is displayed on the monitor.
Calibration of the system
The digital images collected by the machine vision system record the size relationship between target objects, but this relationship is expressed in pixels. To obtain the specific geometric size of the target, the relationship between pixel positions in the image and point positions in the real world must be established. This process is called calibration of the system. A measurement calibration method is used here: a gauge block of known dimension is taken as the standard part, an image of the gauge block is captured under the same conditions as the actual measurement, and its extent in the image pixel coordinate system is calculated. The ratio of the actual dimension of the gauge block to its pixel dimension is the camera's calibration value. In the measurement, the captured image of the gauge block is first processed, and the number of pixels it spans in the image coordinate system is counted and denoted by P. Comparing it with the actual length L of the gauge block gives the actual dimension S corresponding to each pixel:

\( S=L/P \)
Here, S represents the actual dimension corresponding to one pixel, that is, the calibration value. Since the calibration process itself introduces errors, multiple calibrations can be averaged. After the system calibration parameter is obtained, pixel dimensions measured from the image can be converted into actual dimensions. However, the calibration coefficient must be recalibrated whenever the measured part or the measurement conditions (illumination, viewing distance, focal length, and so on) change.
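As a minimal sketch of the calibration step described above (the function names and the 600-pixel example are illustrative, not taken from the original system), the pixel-to-millimetre conversion and the multiple-calibration averaging can be written as:

```python
import numpy as np

def calibrate(actual_length_mm, pixel_count):
    """Calibration value S = L / P: the actual dimension spanned by one pixel."""
    return actual_length_mm / pixel_count

def calibrate_averaged(actual_length_mm, pixel_counts):
    """Average several calibrations to reduce the error any single one introduces."""
    return float(np.mean([calibrate(actual_length_mm, p) for p in pixel_counts]))

# Example: a 60 mm gauge block spanning 600 pixels gives S = 0.1 mm/pixel,
# so a feature measured at 352 pixels corresponds to 35.2 mm.
S = calibrate(60.0, 600)
feature_mm = 352 * S
```

As the text notes, S must be recomputed whenever the part or the measurement conditions change.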
Image acquisition
The measurement system captures images that reflect the surface characteristics of the shaft through a CCD camera. Since shaft parts are non-transparent bodies, back lighting is adopted here to obtain high-contrast images and facilitate subsequent processing. The light source emits light, which an annular light tube directs onto the measured part, so that the part lies as much as possible within a controllable background of uniform illumination (to reduce the influence of ambient light, the part is surrounded by an opaque cylindrical mask, and a piece of ground glass is placed over the light source to reduce scattering). The optical system forms an image on the photosensitive surface of the area-array CCD. The image acquisition card receives the analog signal from the CCD camera and performs A/D conversion into a discrete digital signal for the computer. By processing the collected image data, the computer can quickly and accurately calculate the dimensions of the part and send the result directly to the graphics card for display. The system schematic diagram is shown in Fig. 2.
Image analysis and processing
The image analysis and processing module is developed with OpenCV and VC++ mixed programming. Its main functions include image preprocessing, edge detection, and implementation of the dimension measurement algorithm.
1) Image preprocessing
Image preprocessing mainly includes two steps: grayscale conversion and denoising. The image captured by the camera is a 32-bit RGB image. The color of each pixel is represented by a combination of red (R), green (G), and blue (B), each represented by 8 bits, so it can in theory contain 2^24 different colors and reproduce the real color of the scene. However, image measurement of workpiece dimensions is concerned with the edge features of the workpiece image, and these features can be expressed by changes in gray level. Therefore, to reduce the amount of computation in subsequent processing, the captured image is first converted to grayscale. Grayscale processing converts the collected color image into a grayscale image using a weighting algorithm; it maps the pixel values to a single intensity range, which makes certain image features easier to identify and facilitates subsequent operations. The grayscale histogram of a natural image often concentrates in the low-valued gray interval, so darker details in the image are hard to see. To make the image clearer, the gray range can be stretched so that gray levels with low frequencies occupy wider intervals and the histogram becomes approximately uniform over a large dynamic range, that is, histogram equalization. After histogram equalization, the details of the image are clearer and the gray levels are more evenly distributed. Denoising is performed by applying wavelet denoising to the part images. The denoising process includes three steps [14]: multiscale decomposition, multiscale threshold denoising, and inverse wavelet transform to reconstruct the signal. This removes Gaussian noise and salt-and-pepper noise from the image.
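The grayscale conversion and histogram equalization steps can be sketched in NumPy as follows (a simplified illustration using the common luminance weights; the paper's VC++/OpenCV implementation may differ):

```python
import numpy as np

def to_gray(rgb):
    """Weighted-luminance grayscale conversion of an HxWx3 RGB image."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return np.clip(np.round(0.299 * r + 0.587 * g + 0.114 * b), 0, 255).astype(np.uint8)

def equalize_hist(gray):
    """Histogram equalization: remap gray levels through the normalized CDF so
    the histogram becomes approximately uniform over the full dynamic range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0][0]              # first nonzero CDF value
    scale = max(gray.size - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) / scale * 255.0), 0, 255).astype(np.uint8)
    return lut[gray]                        # apply the lookup table per pixel
```

Wavelet denoising (decompose, threshold, reconstruct) is omitted here; in practice a library such as PyWavelets or the OpenCV pipeline the authors describe would supply it.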
2) Edge detection
For images with a high signal-to-noise ratio, both least-squares linear regression and the spatial-moment subpixel localization algorithm achieve subpixel accuracy for straight-line edge detection with satisfactory results [15]. Least-squares linear regression locates edges to approximately 0.1 pixel and the spatial-moment method to approximately 0.01 pixel, but the least-squares algorithm is much faster than the spatial-moment algorithm. Therefore, this paper uses least-squares linear regression, reducing the two-dimensional edge fit to a one-dimensional edge location, so that straight-edge localization reaches subpixel accuracy. Among linear-filtering edge detection methods, the Canny optimal operator is the most representative and is one of the operators that best detects step edges. First, the Canny operator is used as the integer-pixel-level edge locator to extract the whole-pixel edge. Then, subpixel edge localization is performed by least-squares linear regression.
The Canny edge detection algorithm is implemented by computing the gradient magnitude and direction of each pixel in the filtered image. The following 2 × 2 templates can be used as first-order approximations to the partial derivatives in the x and y directions [15]:

\( {G}_x=\frac{1}{2}\left[\begin{array}{cc}-1& 1\\ {}-1& 1\end{array}\right],\kern1em {G}_y=\frac{1}{2}\left[\begin{array}{cc}1& 1\\ {}-1& -1\end{array}\right] \)

The resulting gradient magnitude and direction are [16]

\( M\left(x,y\right)=\sqrt{{G}_x^2+{G}_y^2},\kern1em \theta \left(x,y\right)=\arctan \left({G}_y/{G}_x\right) \)
Next, non-maximum suppression is applied to the gradient: edges are refined by suppressing the amplitude of all non-ridge peaks along the gradient direction. Finally, double-threshold segmentation is carried out with two gradient thresholds, the high threshold usually being 2 to 3 times the low threshold. First, pixels whose gradient value is less than the high threshold are removed, and the remaining pixels form the edge point set F; then the pixel set M, whose gradients lie between the low and high thresholds, is processed: if a point in M has a neighboring point in F, it is added to F. The final F is the edge point set.
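The 2 × 2 gradient template and the double-threshold step can be sketched as follows (a single-pass simplification of hysteresis for brevity, under the standard Canny formulation; this is not the authors' VC++ implementation):

```python
import numpy as np

def gradient_2x2(img):
    """Gradient magnitude and direction from 2x2 first-order difference
    templates, as used in the Canny operator."""
    f = img.astype(np.float64)
    gx = (f[:-1, 1:] - f[:-1, :-1] + f[1:, 1:] - f[1:, :-1]) / 2.0
    gy = (f[1:, :-1] - f[:-1, :-1] + f[1:, 1:] - f[:-1, 1:]) / 2.0
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def double_threshold(mag, low, high):
    """Keep strong edge pixels (>= high) plus weak pixels (>= low) that have a
    strong pixel in their 8-neighbourhood (one relaxation pass for brevity)."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    padded = np.pad(strong, 1)
    has_strong_neighbour = np.zeros_like(strong)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            has_strong_neighbour |= padded[1 + dy : 1 + dy + strong.shape[0],
                                           1 + dx : 1 + dx + strong.shape[1]]
    return strong | (weak & has_strong_neighbour)

# A vertical step edge: the gradient magnitude peaks along the step.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
mag, theta = gradient_2x2(img)
edges = double_threshold(mag, low=30.0, high=60.0)
```

A full Canny pass would also track weak-pixel chains transitively; the single relaxation pass here keeps the sketch short while showing the F/M logic from the text.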
After edge extraction produces a high signal-to-noise-ratio result, a vector set E = {E1, E2, …, En} containing the edge pixel points is obtained, and regression is performed on the straight edges of the mechanical part. Since the detected parts are ground with high precision, their edges can be approximated as straight lines, which satisfies the accuracy requirement. Moreover, the imaging setup is carefully designed, so the contrast between target and background luminance in the captured image is high, and the edge vector set obtained in the first step includes all edge points at whole-pixel accuracy. Therefore, performing linear regression directly on E not only has high positioning accuracy but is also fast. If E contains too much noise, however, the regressed line may deviate from the true edge position, producing a large error, and direct regression is then unsuitable.
When the model is determined, the main task is to estimate its parameters. Modeling the straight edge as \( y={\alpha}_0+{\alpha}_1x \), the coefficients \( \alpha =\left({\alpha}_0,{\alpha}_1\right) \) are estimated by least-squares linear regression [17], minimizing the residual sum of squares

\( Q\left(\alpha \right)=\sum_{i=1}^{n}{\left({y}_i-{\alpha}_0-{\alpha}_1{x}_i\right)}^2 \)  (6)

The estimate is the point \( \hat{\alpha} \) at which \( Q\left(\hat{\alpha}\right)=\underset{\alpha }{\min }Q\left(\alpha \right) \). Since Q(α) is a non-negative quadratic form in α, setting the first-order partial derivatives of Q with respect to α equal to zero yields the least-squares estimate [18]:

\( \frac{\partial Q}{\partial {\alpha}_0}=-2\sum_{i=1}^{n}\left({y}_i-{\alpha}_0-{\alpha}_1{x}_i\right)=0,\kern1em \frac{\partial Q}{\partial {\alpha}_1}=-2\sum_{i=1}^{n}{x}_i\left({y}_i-{\alpha}_0-{\alpha}_1{x}_i\right)=0 \)  (7)

Rearranging formula (7) gives the normal equations in the parameter α [19]:

\( n{\alpha}_0+\left(\sum_{i=1}^{n}{x}_i\right){\alpha}_1=\sum_{i=1}^{n}{y}_i,\kern1em \left(\sum_{i=1}^{n}{x}_i\right){\alpha}_0+\left(\sum_{i=1}^{n}{x}_i^2\right){\alpha}_1=\sum_{i=1}^{n}{x}_i{y}_i \)  (8)

From this, the least-squares estimate of the line parameters can be calculated as

\( {\hat{\alpha}}_1=\frac{\sum_{i=1}^{n}\left({x}_i-\bar{x}\right)\left({y}_i-\bar{y}\right)}{\sum_{i=1}^{n}{\left({x}_i-\bar{x}\right)}^2},\kern1em {\hat{\alpha}}_0=\bar{y}-{\hat{\alpha}}_1\bar{x} \)  (9)
If the edge points follow a normal distribution, the least-squares estimate of the line parameters is unbiased. From (9), the estimated parameters give the regression line, which passes through the edge pixels at subpixel position. The more points that participate in the estimation, the higher the regression accuracy and the smaller the influence of noise.
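The closed-form least-squares line fit above reduces to a few lines of NumPy (a sketch; the sample points and function name are illustrative, not from the paper):

```python
import numpy as np

def fit_edge_line(points):
    """Least-squares regression y = a0 + a1*x through whole-pixel edge points,
    giving a sub-pixel estimate of a straight edge."""
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    xm, ym = x.mean(), y.mean()
    a1 = np.sum((x - xm) * (y - ym)) / np.sum((x - xm) ** 2)  # slope estimate
    a0 = ym - a1 * xm                                          # intercept estimate
    return a0, a1

# Edge pixels scattered around the line y = 2 + 0.5 x.
pts = [(0, 2.1), (1, 2.4), (2, 3.0), (3, 3.6), (4, 3.9)]
a0, a1 = fit_edge_line(pts)
```

With noise-free collinear input the fit is exact; with noisy input the regressed line averages the scatter, which is what yields sub-pixel localization.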
3) Geometric dimension measurement
After the preceding image processing, measurements can be computed directly from the resulting part image. In this paper, the contour information of the image is measured on the principle of geometric measurement. For cylindrical parts, the geometric measurements include the shaft radius and the column height.

Calculation of column height
Because of the geometry of shaft parts, the Hough transform can be used to detect straight lines. Its basic idea is [20]: each data point on a straight line is mapped to a straight line or curve in the parameter plane, so that the parameter curves corresponding to collinear data points intersect at a single point in parameter space; the problem of extracting a line is thus converted into a counting (voting) problem. The main advantage of the Hough transform for line extraction is that it is little affected by gaps and noise in the line. From the derived edge image, two straight line segments can be detected, and the position parameters of each line are obtained in turn, that is, the edge lines of the cylinder (column height).
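A minimal Hough accumulator for lines illustrates the voting idea (the resolution and normal-form parameterization are assumptions for the sketch, not the paper's settings):

```python
import numpy as np

def hough_peak(points, n_theta=180, rho_res=1.0):
    """Each edge point votes for every (rho, theta) line it could lie on, using
    the normal form rho = x*cos(theta) + y*sin(theta); collinear points pile up
    in one accumulator cell, whose peak gives the line parameters."""
    pts = np.asarray(points, dtype=np.float64)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(np.abs(pts[:, 0]).max(), np.abs(pts[:, 1]).max()) + 1.0
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in pts:
        rho = x * cos_t + y * sin_t
        idx = np.round((rho + rho_max) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1          # one vote per theta bin
    r_i, t_i = np.unravel_index(int(np.argmax(acc)), acc.shape)
    return r_i * rho_res - rho_max, thetas[t_i], int(acc.max())

# Ten points on the horizontal line y = 5 vote into one cell near
# (rho, theta) = (5, pi/2).
rho, theta, votes = hough_peak([(x, 5.0) for x in range(10)])
```

The vote count at the peak equals the number of collinear points, which is why gaps and noise along the line barely affect detection.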

Measurement of the diameter of the bottom surface
After subpixel edge detection, a circle or arc on the part image is likewise a collection of subpixel points. The circle is fitted by the principle of least squares, and the diameter of the circular part is then found. The measuring principle for the part diameter is shown in Fig. 3. The calibration value at point B is calculated first, because the radius R of the part is the length of the line segment OB in the figure, that is, the actual distance corresponding to the pixels in segment AB. From the linear nature of the calibration formula and the geometric relationship from point A to point B, the calibration value of the pixels from A to B is monotonically increasing. Therefore, a suitable point must be found among these data points: the number of pixels in segment AB multiplied by the calibration value at this point equals R.
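One common way to realize a least-squares circle fit on sub-pixel boundary points is the algebraic (Kåsa) formulation, sketched here; the paper does not name its fitting variant, so this specific formulation is an assumption:

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit: rewrite (x-a)^2 + (y-b)^2 = r^2 as
    the linear system 2a*x + 2b*y + c = x^2 + y^2 with c = r^2 - a^2 - b^2."""
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), r

# Sub-pixel edge points on a circle of radius 3.5 centred at (2, 3).
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([2 + 3.5 * np.cos(t), 3 + 3.5 * np.sin(t)])
(centre_x, centre_y), radius = fit_circle(pts)
```

The fitted radius is still in pixels; converting it to millimetres multiplies by the calibration value S from the calibration section.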
Assume a real number φ with 0 < φ < 1; the point to be found above can then be expressed in terms of φ, which leads to formula (11). Since formula (11) contains only R and φ as unknowns, where R is the desired value, an initial R can be calculated from an initial value of φ. This R is not the final result and must be revised iteratively until it meets the requirement. After an initial R is obtained, the calibration values of each point from A to B are calculated and accumulated to give the next R. If the difference between two successive R values is less than a sufficiently small ε, that R is the final result; otherwise a new φ is obtained from formula (10), and this φ is used to find the next R. The loop continues until the final result R is found (ε = 0.0012 mm here). The calibration values \( {Y}_n \) and \( {Y}_{n+1} \) of the n-th and (n + 1)-th points and the distance \( {L}_{n+1} \) of the point from the camera are computed from given parameters a′, b′, and c′, where n = 0, 1, 2, 3, …; \( {Y}_n \) and \( {Y}_{n+1} \) denote the calibration values at the n-th and (n + 1)-th points on semicircle AB, and \( {L}_{n+1} \) the distance from the camera. Through this iterative process, the final measurement result is obtained.
Measurement experiment
The hardware of this measurement system is shown in Fig. 4. The CCD camera forms an optical image of the target object's characteristic information; the image sensor converts the optical signal into an electrical signal, which the image acquisition card transmits to the computer. The image data then undergo preprocessing, edge detection, and dimension measurement, and the measurement results are displayed on the computer monitor.
The system uses an area-array CCD camera with a total of 800 × 600 pixels, a CCTV LENS 8.0 mm lens, and a four-channel standard camera image acquisition card, model PCI-1104, connected to a computer. The shaft part used as the test sample is shown in Fig. 5. In this experiment, 10 samples were used; some were measured by machine and some manually, divided according to 10-fold cross-validation.
Its twodimensional drawing is shown in Fig. 6:
H denotes the height of the shaft part and R its bottom radius. The image analysis and processing software consists of functions written on the OpenCV platform, mainly for image preprocessing, edge detection, and dimension measurement. In addition, the machine vision system collects the size relationship of the target object in pixels, so system calibration is necessary to obtain the specific geometric size of the target. In this experiment, a gauge block of 60 mm is chosen as the calibration object, and the calibration value is obtained through calculation.
The measurement process in this experiment consists of repeated measurements of equal precision. Because of the randomness of the measurement process, errors inevitably exist in the results. Owing to these errors, it is impossible to judge which single measurement is closest to the true value, and the mean \( \bar{x} \) of each group of data is more reliable than any individual value \( {x}_i \). Therefore, this experiment measures the part 10 times by traditional inspection and takes the average as the actual value: the radius of the circle is 3.5 mm and the height of the cylinder is 4.7 mm. The part is then placed in the measuring position of the vision measurement system and measured 10 times, and the two sets of results are compared.
Experimental results and discussions
The sample part is photographed 10 times with the CCD camera, and the pictures are transferred to the computer, where they are converted to grayscale and denoised; the images before and after processing are compared in Fig. 7. The images in (b) are clearly sharper than those in (a).
A section of the contour is intercepted from the collected source image, with the result shown in Fig. 8, from which the true condition of the two boundaries of the shaft can be seen. The original image is converted to grayscale and the corresponding gray data are displayed: large data values represent bright areas and small values represent dark areas. From the data distribution, it can be concluded that the interval between the maximum and minimum values along the abscissa represents the edge region. In this way, the edge region is roughly located; the edge is then detected and located precisely by the least-squares linear-regression subpixel edge detection method.
After preprocessing, the dimension measurement program measures the size in the image, with the results shown in Table 1. Because of the errors, it is impossible to determine which single measurement is closest to the true value, and the average of each group of data is more reliable.
The repeatability errors of the data shown in Table 3 are calculated according to Eq. (15):

\( u(x)=\sqrt{\frac{\sum_{i=1}^{n}{\left({x}_i-\bar{x}\right)}^2}{n-1}} \)  (15)

where \( {x}_i \) is the data of the i-th measurement, \( \bar{x} \) is the mean value, and u(x) is the repeatability error. From the above results, the repeatability error of the system is less than 0.01 mm.
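The repeatability error used here is, presumably, the Bessel-corrected standard deviation of the n repeated measurements; a sketch with illustrative readings (not the paper's measured data):

```python
import numpy as np

def repeatability_error(measurements):
    """u(x) = sqrt( sum_i (x_i - mean)^2 / (n - 1) ): the repeatability
    (standard) error of n equal-precision repeated measurements."""
    x = np.asarray(measurements, dtype=np.float64)
    return float(np.sqrt(np.sum((x - x.mean()) ** 2) / (x.size - 1)))

# Ten hypothetical radius readings scattered tightly around 3.5 mm.
readings = [3.501, 3.499, 3.500, 3.502, 3.498, 3.500, 3.501, 3.499, 3.500, 3.500]
u = repeatability_error(readings)
```

For readings this tight, u stays well under the 0.01 mm bound reported for the system.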
The error in visual inspection arises from many factors, mainly: (1) system calibration errors, including the selection error of the calibration reference point and the error caused by non-coplanarity between the calibration plane and the measured plane; (2) the influence of the external environment, chiefly the effect of the lighting scheme on the quality of the original image; (3) perspective error and distortion caused by the lens; (4) errors caused by the selection of detection points during measurement; and (5) errors introduced by the software algorithm.
Conclusions
Computer vision measurement systems are increasingly used in industrial production. The traditional manual measurement method depends strongly on the operator: labor intensity is high, efficiency is low, product quality cannot be effectively guaranteed, and many human errors may occur, so it is difficult to meet the requirements of high-volume, high-efficiency, high-precision product testing. It is therefore of great significance for the manufacturing industry to study low-cost, high-precision, high-efficiency automatic measurement systems for the dimensions of shaft parts. In this paper, a CCD camera collects images of the parts in real time, and a series of preprocessing operations such as grayscale conversion and wavelet denoising is applied. An improved subpixel edge detection method based on the Canny operator is proposed to extract the edge contour of the part image. Using programs built on the cross-platform computer vision library OpenCV, geometric algorithms measure the contour, yielding the bottom radius and the column height of the cylinder, and the repeatability error is tested and analyzed. The experimental results show that the repeatability error of the measuring system is less than 0.01 mm.
Abbreviations
 CCD: Charge-coupled device
 CMOS: Complementary metal-oxide-semiconductor
 CV: Computer vision
 PCI: Peripheral Component Interconnect
References
 1.
Y. Fang, I. Masaki, B. Horn, Depth-based target segmentation for intelligent vehicles: Fusion of radar and binocular stereo[J]. IEEE Trans. Intell. Transp. Syst. 3(3), 196–202 (2002)
 2.
R.S. Lu, Y.F. Li, Q. Yu, Online measurement of the straightness of seamless steel pipes using machine vision technique[J]. Sens Actuators A 94(1), 95–101 (2001)
 3.
M. Brown, D.G. Lowe, Automatic panoramic image stitching using invariant features[J]. Int. J. Comput. Vis. 74(1), 59–73 (2007)
 4.
B. Triggs, Empirical filter estimation for subpixel interpolation and matching[C], in Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 2 (IEEE, 2001), p. 550
 5.
S.D. Ma, Z.Y. Zhang, Computer Vision: Theory and Algorithms[M] (Science Press, Beijing, 1998)
 6.
P. Kierkegaard, A method for detection of circular arcs based on the Hough transform[J]. Mach Vis Appl 5(4), 249–263 (1992)
 7.
H.K. Yuen, J. Princen, J. Illingworth, et al., Comparative study of Hough transform methods for circle finding[J]. Image Vis. Comput. 8(1), 71–77 (1989)
 8.
R. Klette, Robot Vision[M] (Hunan Literature and Art Publishing House, Changsha, 2001)
 9.
Q. Meng, R. Zhou, Research on restoration of compositeframe images degraded by motion[J]. Comput. Eng. 32(13), 187–189 (2006)
 10.
M. Elad, A. Feuer, Super-resolution restoration of an image sequence: Adaptive filtering approach[J]. IEEE Trans Image Process 8(3), 387–395 (1999)
 11.
R. Keys, Cubic convolution interpolation for digital image processing[J]. IEEE Trans Acoust Speech Signal Process 29(6), 1153–1160 (1981)
 12.
L.G. Brown, A survey of image registration techniques[J]. ACM Comput. Surv. 24(4), 325–376 (1992)
 13.
Y. Min, Method of straight line edge subpixel localization for mechanical part image[J]. Journal of South China University of Technology 31(12), 30–33 (2003)
 14.
J.H. Lee, Y.S. Kim, S.R. Kim, et al., Real-time application of critical dimension measurement of TFT-LCD pattern using a newly proposed 2D image-processing algorithm[J]. Opt Lasers Eng 46(7), 558–569 (2008)
 15.
E.N. Malamas, E.G.M. Petrakis, M. Zervakis, et al., A survey on industrial vision systems, applications and tools[J]. Image Vis Comput 21(2), 171–188 (2003)
 16.
M. Juneja, R. Mohan, An improved adaptive median filtering method for impulse noise detection[J]. Int J Recent Trends Eng 1(1), 274–278 (2013)
 17.
M.K. Lee, K.Y. Chan, A flexible inspection cell for machined parts[J]. Comput. Ind. 30(30), 219–224 (1996)
 18.
G. Chen, K. Yang, R. Chen, et al., A gray-natural logarithm ratio bilateral filtering method for image processing[J]. Chin. Opt. Lett. 6(9), 648–650 (2008)
 19.
J. Zheng, Y. Bai, B. Wang, et al., Highspeed dynamic spectrum data acquisition system based on linear CCD[J]. Chinese Optics Letters 9(s1), s10308 (2011)
 20.
S. Xiaojun, Z. Xiaohui, Z. Hu, et al., Inspection of surface finish of flat grinding based on machine vision[J]. Chin J Lasers 35(s2), 320–323 (2008)
Acknowledgements
The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
Availability of supporting data
The data can be provided by the author on request.
About the authors
Li Bin was born in Sanhe, Hebei, P.R. China, in 1980. He received the master's degree from Hebei University of Technology, P.R. China. He now works in the School of Mechanical Engineering, Tianjin University of Technology and Education. His research interests include computational intelligence, robotics, and reliability analysis.
Email: libinfly@163.com
Funding
This work was supported by research development foundation of Tianjin University of Technology and Education Grant (No. KJ1502).
Author information
Affiliations
Contributions
The author BL wrote the paper. The author read and approved the final manuscript.
Corresponding author
Correspondence to Bin Li.
Ethics declarations
Competing interests
The author declares that he has no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Li, B. Research on geometric dimension measurement system of shaft parts based on machine vision. J Image Video Proc. 2018, 101 (2018). doi:10.1186/s13640-018-0339-x
Keywords
 Machine vision
 Geometric dimension
 Dimension measurement