
Research on geometric dimension measurement system of shaft parts based on machine vision

Abstract

Computer vision measurement systems are increasingly widely used in industrial production, where traditional manual measurement methods cannot guarantee product quality. It is therefore of great significance for raising the technical level of the manufacturing industry to study automatic dimension measurement systems for shaft parts with low cost, high precision, and high efficiency. This paper presents a machine-vision-based geometric dimension measurement system for shaft parts. A CCD camera acquires the part image, which is first preprocessed; in view of the influence of noise and other factors, wavelet denoising is applied to the image. An improved single-pixel edge detection method based on the Canny operator is then proposed to extract the edge contour of the part image. Finally, geometric measurement algorithms are applied, and the measured data are obtained and analyzed. The experimental results show that the repeatability error of the system is less than 0.01 mm.

1 Introduction

Machine vision is one of the most promising new technologies in the field of precision testing. It uses computers to simulate or reproduce certain intelligent behaviors related to human vision, extracting information from images of objects so that they can be processed and understood, and ultimately applying this information to practical detection and control [1]. It has a very wide range of applications in fields such as industry and agriculture, and has been widely applied in the automobile, electronics, and electrical industries [2]. At present, applications of industrial vision systems can be roughly divided into three directions: automatic detection, intelligent assembly, and visual servo systems [3]. Applications of machine vision in automatic detection include geometric measurement and automatic visual recognition [4], and geometric measurement technology is an indispensable part of manufacturing technology. In general industrial production, the measurement of part dimensions is mostly carried out off-line or by spot inspection. However, as product quality requirements keep rising, dimension measurement tends toward 100% inspection, which places higher demands on inspection efficiency and methods [5]. Geometric measurement with general-purpose optical instruments involves a cumbersome reading process, long measurement times, large subjective operator errors, and a low level of automation. Introducing machine vision into geometric measurement enables rapid measurement of the dimensions or relative positions of objects (products or parts). It is non-contact, fast, flexible, and highly accurate, can operate online in real time, saves time and labor, avoids human errors in the measurement process, and supports continuous production and a higher level of production automation [6].

Shaft parts are among the most common parts in the machinery industry. Their geometric dimensions and precision not only directly affect mechanical performance and service life, but also have an important impact on reducing energy consumption and environmental pollution [7]. In most machining enterprises in our country, product dimensions are still measured with traditional measuring tools, such as a V-block with a dial indicator, or a micrometer. This traditional manual measurement method depends strongly on the operator: the operator's labor intensity is high, the efficiency is low, product quality cannot be effectively guaranteed, and many human errors may be introduced, so it is difficult to meet the requirements of large-batch, high-efficiency, and high-precision product inspection [8]. It is therefore of great significance for improving the technical level of China's machinery and equipment manufacturing industry to study automatic measurement systems for shaft parts with low cost, high precision, and high efficiency. At present, there is considerable research on dimension measurement systems based on machine vision. For example, Guo Chao et al. [9] built a simple machine vision system with a CMOS camera and used NI Vision Assistant software to measure the linear and arc dimensions of a test piece. Li Xuejiao et al. [10] developed a machine-vision-based dimension measurement system for large parts, aimed at the limitations of traditional contact measurement methods, such as the need for contact with the part, long measurement times, and human factors. The system preprocesses and stitches the images, uses the Canny algorithm to extract the edge contour of the part image, and finally measures the geometry of the extracted edge contour; experiments show that the system can quickly measure part dimensions within the allowable error range. To apply the latest non-contact measurement techniques to the problem of part dimension measurement, Tian Yuanyuan et al. [11] proposed a non-contact measurement scheme based on machine vision. First, super-resolution reconstruction is used to suppress the noise and blur caused by the limited sensor size and the optical elements, so that more detail and information can be recovered from the captured images. Second, a least-squares regression sub-pixel edge detection technique is used to locate edges and extract corner points. Experiments show that the accuracy of the method is high and the measurement accuracy is stable. Qi Xiaoling et al. [12] studied edge detection for shafts using image processing techniques, based on the geometric characteristics of the shaft and images captured by a CCD camera, and proposed an improved adaptive switching median filtering method that delivers better performance. A dimension measurement system for shaft dimensions and parameters was developed on this basis. The experimental results indicate that this filter better preserves 2D edge structures in the image, and that fast, accurate measurement of the basic parameters of the shaft can be realized within the permissible error range using the non-contact machine vision method. A large number of studies thus show that measurement systems based on machine vision can measure parts effectively and quickly with high measurement accuracy.

Building on existing machine-vision dimension measurement systems for other parts, and based on the geometric features of shaft parts, this paper presents a geometric dimension measurement system for shaft parts based on machine vision. A CCD camera is used to acquire the image. First, the collected images are preprocessed; in view of the influence of noise and other factors, wavelet denoising is used to denoise the image. Then, an improved single-pixel edge detection method based on the Canny operator is proposed to extract the edge contour of the part image. Finally, geometric measurement algorithms are applied, and the measured data are obtained and analyzed.

2 System design method

2.1 System overall design

The machine vision measurement system consists of two parts: a hardware system and a software system. The hardware system mainly performs image acquisition and result output, while the software system mainly performs image preprocessing and dimension measurement. The measurement system thus provides four functions: image acquisition, image analysis and processing, dimension measurement, and result output [13]. The hardware of the image acquisition part mainly includes the light source, lens, camera, and image acquisition card. The image acquisition part is responsible for optically imaging the characteristic information of the target object; the image sensor converts the optical signal into image data, which the acquisition card transmits to the computer. The image analysis and processing part is realized by image processing and analysis software running on a personal computer platform; dimension measurement is completed by the corresponding software, and the processing results are output to the computer display. The flow of the whole system is shown in Fig. 1.

Fig. 1 The whole processing flow of the machine vision measurement system

The system first collects image data through the image acquisition part, then converts the image data to grayscale and denoises it with the image analysis and preprocessing program. The column height and the bottom radius of the part are then calculated by the dimension measurement program. In order to obtain the specific geometric dimensions of the part, the system itself must also be calibrated. Finally, the dimensions of the part are displayed on the monitor.

2.2 Calibration of the system

The digital images collected by the machine vision system record the size relationships between target objects, but these relationships are expressed in pixels. In order to obtain the specific geometric dimensions of the target, the relationship between pixel positions in the image and the positions of the corresponding points in the real world must be established. This process is called calibration of the system. This measurement system uses a measurement-based calibration method: a gauge block of known size is taken as the standard part, an image of the gauge block is captured under the same conditions as the actual measurement, and its size in the image pixel coordinate system is calculated. The ratio of the actual dimension of the gauge block to its pixel dimension is the calibration value of the camera. During measurement, the captured image of the gauge block is first processed, and the number of pixels spanned by the block in the image coordinate system is counted and denoted by P. Comparing it with the actual length L of the gauge block gives the actual dimension S corresponding to each pixel, expressed by the following formula:

$$ S=\frac{L}{P} $$
(1)

Here, S represents the actual dimension corresponding to one pixel, namely the calibration value. Since the calibration process itself introduces errors, multiple calibrations can be averaged. Once the calibration parameter is obtained, pixel dimensions obtained from image measurements can be converted into actual dimensions. However, the calibration coefficient must be re-determined whenever the part to be measured or the measurement conditions (illumination, viewing distance, focal length, and so on) change.
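To make the calibration step concrete, the following minimal Python sketch (an illustrative assumption; the system itself was implemented with OpenCV and VC++) computes S = L/P for several repeated gauge-block images and averages the results, as suggested above. Only the 60 mm gauge-block length comes from the experiment in Section 3; the pixel counts are hypothetical.

```python
# Minimal calibration sketch. The pixel counts are hypothetical values;
# only the 60 mm gauge-block length comes from the experiment in Section 3.

def calibration_value(gauge_length_mm: float, pixel_count: int) -> float:
    """Actual dimension S (mm/pixel) corresponding to one pixel: S = L / P."""
    return gauge_length_mm / pixel_count

# Repeated calibration images of the same gauge block; averaging reduces
# the error introduced by the calibration process itself.
pixel_counts = [356, 355, 357, 356, 356]          # assumed pixel widths of the block
values = [calibration_value(60.0, p) for p in pixel_counts]
S = sum(values) / len(values)
print(f"calibration value S = {S:.4f} mm/pixel")
```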

2.3 Image acquisition

The measurement system captures images that reflect the surface characteristics of the shaft through a CCD camera. Since shaft parts are opaque, back lighting is adopted here in order to obtain high-contrast images and facilitate subsequent processing. The light source emits light through an annular tube onto the measured part, so that the part lies as far as possible within a controllable background of uniform illumination (to reduce the influence of ambient light, the part is surrounded by an opaque cylindrical mask, and a piece of ground glass is placed over the light source to diffuse the light). The optical system forms an image on the photosensitive surface of the area-array CCD. The image acquisition card receives the analog signal from the CCD camera, performs A/D conversion into a discrete digital signal, and communicates with the computer. By processing the collected image data, the computer can quickly and accurately calculate the dimensions of the part and send the result directly to the graphics card for display. The system schematic is shown in Fig. 2.

Fig. 2 Schematic diagram of the machine vision system

2.4 Image analysis and processing

The image analysis and processing module is developed with OpenCV and VC++ mixed programming. Its main functions include image preprocessing, edge detection, and the implementation of the dimension measurement algorithm.

1) Image preprocessing

Image preprocessing mainly includes two steps: grayscale conversion and denoising. The image captured by the camera is a 32-bit RGB image. The color of each pixel is represented by a combination of red (R), green (G), and blue (B), each encoded with 8 bits, so the image can in theory contain 2^24 different colors and reproduce the true color of the scene. However, image-based measurement of workpiece dimensions is concerned with the edge features of the workpiece image, and these features can be expressed by changes in gray level. Therefore, to reduce the amount of computation in subsequent processing, the captured image is first converted to grayscale. Grayscale conversion processes the collected color image into a grayscale image using a suitable algorithm and maps the gray values into a new range, which makes certain features of the image easier to identify and facilitates subsequent operations. The grayscale histogram of a natural image often has high frequencies in the low-gray-value interval, so the darker details of the image are often hard to see. To make the image clearer, the gray range of the image can be stretched so that gray levels with low frequencies occupy a larger range, making the histogram approximately uniform over a large dynamic range; this is histogram equalization. After histogram equalization, the details of the image are clearer and the gray levels are more evenly distributed. Denoising is performed by applying wavelet denoising to the selected part images. The denoising process includes three steps [14]: multi-scale decomposition, multi-scale denoising (thresholding of the wavelet coefficients), and inverse wavelet transform. Reconstructing the signal with the inverse wavelet transform removes Gaussian noise and salt-and-pepper noise from the image.
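As an illustration of these preprocessing steps, the sketch below performs grayscale conversion, histogram equalization, and wavelet denoising using the Python bindings of OpenCV and the PyWavelets library (the paper's implementation uses OpenCV with VC++; the wavelet family, decomposition level, threshold rule, and file path are assumptions, not the authors' settings).

```python
# Preprocessing sketch: grayscale, histogram equalization, wavelet denoising.
# Wavelet family, level, threshold rule, and file path are assumed.
import cv2
import numpy as np
import pywt

img = cv2.imread("shaft.png")                         # captured color image (path assumed)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # grayscale conversion
equalized = cv2.equalizeHist(gray)                    # histogram equalization

# Multi-scale decomposition, thresholding of detail coefficients, and
# inverse wavelet transform -- the three denoising steps described above.
coeffs = pywt.wavedec2(equalized.astype(np.float64), "db4", level=2)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745    # noise estimate from finest details
thr = sigma * np.sqrt(2 * np.log(equalized.size))     # universal threshold (assumed rule)
coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(coeffs, "db4")
denoised = np.clip(denoised, 0, 255).astype(np.uint8)[: gray.shape[0], : gray.shape[1]]
```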

2) Edge detection

For images with a high signal-to-noise ratio, least-squares linear regression and spatial-moment sub-pixel localization algorithms can achieve sub-pixel accuracy for straight-line edge detection with satisfactory results [15]. Least-squares linear regression achieves a sub-pixel positioning accuracy of approximately 0.1 pixel, while the spatial-moment method achieves approximately 0.01 pixel, but the least-squares linear regression algorithm is much faster than the spatial-moment algorithm. Therefore, this paper uses least-squares linear regression, reducing two-dimensional edge fitting to one-dimensional edge location, so that straight-edge location can reach sub-pixel accuracy. Among linear filtering edge detection methods, the Canny optimal operator is the most representative and is also one of the operators best suited to detecting step edges. First, the Canny operator is used as the pixel-level edge locator to extract edges at integer-pixel accuracy. Then, sub-pixel edge location is performed using least-squares linear regression.

The Canny edge detection algorithm computes the gradient magnitude and direction at each pixel of the filtered image. The following 2 × 2 templates can be used as first-order approximations to the partial derivatives in the x and y directions [15]:

$$ P=\frac{1}{2}\left[\begin{array}{cc}-1& 1\\ {}-1& 1\end{array}\right],\kern1em Q=\frac{1}{2}\left[\begin{array}{cc}1& 1\\ {}-1& -1\end{array}\right] $$
(2)

The resulting gradient magnitude and direction are [16]

$$ M\left(x,y\right)=\sqrt{P^2\left(x,y\right)+{Q}^2\left(x,y\right)} $$
(3)
$$ \theta =\arctan \left[Q\left(x,y\right)/P\left(x,y\right)\right] $$
(4)

Next, non-maximum suppression is applied to the gradient: the edges are thinned by suppressing all points whose gradient magnitude is not a ridge peak along the gradient direction. Finally, double-threshold segmentation is carried out with two gradient thresholds, the high threshold usually being 2 to 3 times the low threshold. Pixels whose gradient value does not reach the high threshold are first removed, giving the initial edge-point set F. Then the set of pixels whose gradient lies between the two thresholds is processed: if such a pixel has a neighboring point in F, it is added to F. The final edge-point set is thus obtained.
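As a brief illustration, the sketch below runs the OpenCV Canny detector, which internally performs the gradient computation, non-maximum suppression, and double-threshold hysteresis described above; the image path and the two thresholds (with the high threshold about three times the low one) are assumptions.

```python
# Canny edge detection sketch; thresholds are assumed, with the high
# threshold roughly 3x the low threshold as suggested above.
import cv2

gray = cv2.imread("shaft_denoised.png", cv2.IMREAD_GRAYSCALE)   # preprocessed image (path assumed)
edges = cv2.Canny(gray, 50, 150)          # gradient + non-maximum suppression + hysteresis
edge_points = cv2.findNonZero(edges)      # N x 1 x 2 array of (x, y) edge-pixel coordinates
```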

After edge extraction on the high signal-to-noise-ratio image, a set E = {E1, E2, …, En} of edge pixel points is obtained, on which regression is performed for the straight edges of the mechanical part. Since the detected parts are ground with high precision, their edges can be approximated as straight lines, which satisfies the accuracy requirements. Moreover, the imaging conditions are carefully designed so that the contrast between the target and the background luminance in the captured image is high, and the edge set obtained in the first step contains all the edge points at integer-pixel accuracy. Therefore, performing linear regression directly on E not only gives high positioning accuracy but is also fast. If E contains too much noise, however, a line regressed directly on E may deviate from the true edge position and produce a large error, in which case direct regression is not suitable.

Once the model is determined, the main task is to estimate its parameters. The linear coefficients α are estimated by least-squares linear regression [17]:

$$ Q\left(\alpha \right)=\sum \limits_{i=1}^n{\left({y}_i-{\alpha}_0-{\alpha}_1{x}_{i1}-\cdots -{\alpha}_p{x}_{ip}\right)}^2 $$
(5)

The minimum of formula (5) is sought, that is, the point \( \hat{\alpha} \) for which \( Q\left(\hat{\alpha}\right)=\underset{\alpha }{\min }Q\left(\alpha \right) \). Since Q(α) is a non-negative quadratic form in α, setting the first-order partial derivatives of Q(α) with respect to α equal to 0 yields the least-squares estimate of α [18]:

$$ \left\{\begin{array}{l}\frac{\partial Q\left(\alpha \right)}{\partial {\alpha}_0}=-2\sum \limits_{i=1}^n\left({y}_i-{\alpha}_0-{\alpha}_1{x}_{i1}-\cdots -{\alpha}_p{x}_{ip}\right)=0\\ {}\frac{\partial Q\left(\alpha \right)}{\partial {\alpha}_j}=-2\sum \limits_{i=1}^n\left({y}_i-{\alpha}_0-{\alpha}_1{x}_{i1}-\cdots -{\alpha}_p{x}_{ip}\right){x}_{ij}=0\end{array}\right.\kern1em \left(j=1,2,\cdots, p\right) $$
(6)

Writing the system (6) in matrix form gives the normal equations for the parameter vector α [19]. Define

$$ y={\left[{y}_1\kern0.5em {y}_2\kern0.5em \cdots \kern0.5em {y}_n\right]}^T,\kern1em X=\left[\begin{array}{cccc}1& {x}_{11}& \cdots & {x}_{1p}\\ {}1& {x}_{21}& \cdots & {x}_{2p}\\ {}\vdots & \vdots & \ddots & \vdots \\ {}1& {x}_{n1}& \cdots & {x}_{np}\end{array}\right],\kern1em \alpha ={\left[{\alpha}_0\kern0.5em {\alpha}_1\kern0.5em \cdots \kern0.5em {\alpha}_p\right]}^T $$

Then Eq. (6) can be written compactly as

$$ {X}^T X\alpha ={X}^Ty $$
(7)

From this, the least square estimate of the linear parameter can be calculated as

$$ \alpha ={\left({X}^TX\right)}^{-1}{X}^Ty $$
(8)

If the edge-point errors follow a normal distribution, the least-squares estimate of the line parameters is unbiased. From Eq. (8), the estimated parameters define the regression line, which passes through the edge pixels at sub-pixel position. The more points that participate in the estimation, the higher the regression accuracy and the smaller the influence of noise.
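For the common case of a single straight edge (one predictor), the normal-equation solution of Eqs. (7) and (8) reduces to fitting y = α0 + α1x to the edge pixels. A minimal NumPy sketch is given below; the edge coordinates are hypothetical.

```python
# Least-squares line fit to integer-pixel edge points, following
# alpha = (X^T X)^{-1} X^T y in Eq. (8). Coordinates are hypothetical.
import numpy as np

x = np.array([10.0, 11, 12, 13, 14, 15])            # column index of each edge pixel
y = np.array([40.0, 40.2, 39.9, 40.1, 40.0, 40.2])  # row position of the edge

X = np.column_stack([np.ones_like(x), x])            # design matrix [1, x_i]
alpha = np.linalg.solve(X.T @ X, X.T @ y)            # solve the normal equations
a0, a1 = alpha
print(f"sub-pixel edge line: y = {a0:.3f} + {a1:.4f} x")
```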

3) Geometric dimension measurement

After the preceding image processing, measurements can be computed directly from the resulting part image. In this paper, the contour information of the image is measured using the principles of geometric measurement. For cylindrical parts, the geometric measurements are the shaft radius and the column height.

  • Calculation of column height

Because of the particular geometry of shaft parts, the Hough transform can be used to detect straight lines. Its basic idea is as follows [20]: each data point on a straight line is mapped to a straight line or curve in the parameter plane, the parameter curves corresponding to collinear data points intersect at a single point in parameter space, and the line extraction problem is thereby transformed into a counting problem. The main advantage of the Hough transform for line extraction is that it is relatively insensitive to gaps and noise in the line. From the derived edge image, two straight line segments can be detected and the position parameters of each line obtained in turn; these are the edge lines of the cylinder, from which the column height is determined. A short sketch of this step follows.
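The sketch below uses the probabilistic Hough transform provided by OpenCV (cv2.HoughLinesP) as one possible realization; the accumulator threshold, minimum line length, maximum gap, and file path are assumptions. The pixel length of a detected edge line would then be converted to millimetres with the calibration value S.

```python
# Hough line detection sketch; parameters and file path are assumptions.
import cv2
import numpy as np

edges = cv2.imread("shaft_edges.png", cv2.IMREAD_GRAYSCALE)   # binary edge map
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        length_px = np.hypot(x2 - x1, y2 - y1)    # pixel length of the segment
        print(f"edge segment ({x1},{y1})-({x2},{y2}), length {length_px:.1f} px")
```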

  • Measurement of the diameter of the bottom surface

After sub-pixel edge detection, the circle or arc on the part image is also a collection of sub-pixel points. The circle is fitted by least squares, and the diameter of the circular part is then determined. The measuring principle for the part diameter is shown in Fig. 3. The calibration value of point B is calculated first, because the radius R of the part is the length of the line segment OB in the figure, that is, the actual distance corresponding to the pixels in segment AB. From the linear nature of the calibration formula and the geometric relationship from point A to point B, the calibration value of the pixels from point A to point B is monotonically increasing. Therefore, a suitable point must be found among these data points: the number of pixels in segment AB multiplied by the calibration value of this point equals R.

Fig. 3 Part diameter measurement

Assume a real number φ with 0 < φ < 1. The calibration value of the point sought above can then be expressed as

$$ {Y}_A\phi +{Y}_B\left(1-\phi \right) $$
(9)

This results in

$$ R=N\left[{Y}_A\phi +{Y}_B\left(1-\phi \right)\right] $$
(10)

Since R and φ are the only unknowns in formula (10), where R is the desired value, an initial R can be calculated given an initial value of φ. This R is not the final result, however, and must be revised repeatedly until the requirement is met. After an initial R is obtained, the calibration values of each point from A to B can be calculated and accumulated to obtain the next R. If the difference between two successive values of R is less than a sufficiently small ε, that R is taken as the final result. If it is not smaller than ε, a new φ is obtained from formula (10), and this φ is used to find the next R. The loop continues until the final value of R is found (ε is set to 0.0012 mm here). The calibration values Y_n and Y_{n+1} of the nth and (n+1)th points, and the distance L_{n+1} of the point from the camera, are given by

$$ {Y}_n={a}^{\hbox{'}}+{b}^{\hbox{'}}{L}_n+{c}^{\hbox{'}}N $$
(11)
$$ {Y}_{n+1}={a}^{\hbox{'}}+{b}^{\hbox{'}}{L}_{n+1}+{c}^{\hbox{'}}\left(N+1\right) $$
(12)
$$ {L}_{n+1}=244-\sqrt{R^2-{\left(\sum \limits_{i=1}^n{Y}_i\right)}^2} $$
(13)

where, for successive points n, Y_n and Y_{n+1} represent the calibration values of the nth and (n+1)th points on the semicircular arc AB, L_{n+1} represents the distance of that point from the camera, and a′, b′, and c′ are given parameters. The final measurement result is obtained through the above calculation.
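The iterative refinement above depends on setup-specific constants (a′, b′, c′ and the camera distance), so only the preceding least-squares circle fit is sketched here, using the algebraic (Kåsa) formulation as an assumed implementation choice; the edge coordinates are hypothetical, and the calibration value is the one from Section 3.

```python
# Least-squares circle fit to sub-pixel edge points (algebraic Kasa fit,
# an assumed realization of the least-squares fitting mentioned above).
import numpy as np

pts = np.array([[30.1, 10.0], [40.0, 20.2], [30.0, 30.1],
                [20.1, 20.0], [37.0, 27.1], [23.0, 13.2]])   # hypothetical edge points
x, y = pts[:, 0], pts[:, 1]

# (x-cx)^2 + (y-cy)^2 = r^2  rewritten as  2*cx*x + 2*cy*y + c = x^2 + y^2,
# with c = r^2 - cx^2 - cy^2, solved linearly by least squares.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x**2 + y**2
(cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
r_px = np.sqrt(c + cx**2 + cy**2)                  # radius in pixels

S = 60.0 / 356.0                                   # calibration value from Section 3 (mm/pixel)
print(f"center ({cx:.2f}, {cy:.2f}) px, radius {r_px:.2f} px = {r_px * S:.3f} mm")
```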

3 Measurement experiment

The hardware of the measurement system is shown in Fig. 4. The CCD camera optically images the characteristic information of the target object; the image sensor converts the optical signal into image data, which the acquisition card transmits to the computer. The image data then undergo preprocessing, edge detection, and dimension measurement, and the measurement results are finally displayed on the computer monitor.

Fig. 4 The hardware equipment of the measurement system

The system uses a computer, an area-array CCD camera with a total resolution of 800 × 600 pixels, a CCTV-LENS 8.0 mm lens, and a four-channel standard camera image acquisition card (model PCI-1104). The shaft part used as the test sample is shown in Fig. 5. In this experiment, the number of samples was 10; some were measured by the machine vision system and some manually, and the data were divided according to 10-fold cross-validation.

Fig. 5 Parts to be measured

Its two-dimensional drawing is shown in Fig. 6:

Fig. 6 The two-dimensional drawing of the part

H indicates the height of the shaft part and R the bottom radius. The image analysis and processing software consists of self-compiled functions based on the OpenCV platform and is mainly used for image preprocessing, edge detection, dimension measurement, and so on. In addition, the size relationships collected by the machine vision system are expressed in pixels, so system calibration is necessary to obtain the specific geometric dimensions of the target. In this experiment, a 60 mm gauge block is chosen as the calibration object, and the calibration value obtained by calculation is

$$ S=60/356\left(\mathrm{mm}/\mathrm{pixel}\right) $$

The measurement in this experiment consists of repeated measurements of equal precision. Because of the randomness of the measurement process, the results inevitably contain errors. Owing to these errors, it is impossible to judge which individual measurement is closest to the true value, and the average value x̄ of each group of data is more reliable than any single value x_i. Therefore, in this experiment the part is first inspected 10 times with traditional methods and the average is taken as the actual value: the radius of the circle is 3.5 mm and the height of the cylinder is 4.7 mm. The part is then placed in the measuring position of the vision measurement system and measured 10 times, and the two sets of results are compared.

4 Experimental results and discussions

The sample part is photographed 10 times with the CCD camera, and the images are imported into the computer, converted to grayscale, and denoised; a comparison of the images before and after processing is shown in Fig. 7. The images in (b) are clearly sharper than those in (a).

Fig. 7 The original image of the part and the image after grayscale conversion and denoising

A section of the outline is extracted from the collected source image, with the result shown in Fig. 8. Figure 8 shows the true condition of the two boundaries of the shaft. The original image is converted to grayscale and the corresponding gray data are displayed: large gray values correspond to bright areas and small values to dark areas. From the data distribution, it can be concluded that the interval between the maximum and minimum values along the abscissa represents the edge region.

Fig. 8 The edge of the image

In this way, the edge region is roughly located; the edge is then detected and located precisely by the least-squares linear regression sub-pixel edge detection method.

After the image is preprocessed, the dimension measurement program is used to measure the dimensions of the part, with the results shown in Table 1. Because of the presence of errors, it is impossible to determine which individual measurement is closest to the true value, and the average of each group of data is more reliable.

Table 1 Parts dimension data measured by system

The repeatability errors of the data in Table 1 are calculated according to Eq. (15):

$$ {\displaystyle \begin{array}{l}\overline{x}=\frac{1}{n}\sum \limits_{i=1}^n{x}_i\\ {}s\left(\overline{x}\right)=\sqrt{\frac{\sum \limits_{i=1}^n{\left({x}_i-\overline{x}\right)}^2}{n}}\\ {}u(x)=s\left(\overline{x}\right)\end{array}} $$
(15)

where x_i is the ith measurement and u(x) is the repeatability error. From these results, the repeatability error of the system is less than 0.01 mm.
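The repeatability calculation of Eq. (15) can be reproduced in a few lines of NumPy, as sketched below; the readings are placeholder values, not the data of Table 1.

```python
# Repeatability error per Eq. (15): mean and standard deviation over n repeats.
# The readings below are placeholders, not the measured data of Table 1.
import numpy as np

x = np.array([3.498, 3.502, 3.501, 3.499, 3.500,
              3.503, 3.497, 3.500, 3.501, 3.499])   # ten repeated radius readings (mm)

x_bar = x.mean()                                    # average value
s = np.sqrt(np.sum((x - x_bar) ** 2) / x.size)      # s(x_bar) with divisor n, as in Eq. (15)
u = s                                               # repeatability error u(x)
print(f"mean = {x_bar:.4f} mm, repeatability error u(x) = {u:.4f} mm")
```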

The errors in visual inspection are caused by many factors, mainly: system calibration errors, including the error in selecting the calibration reference point and the error caused by non-coplanarity between the calibration plane and the measured plane; the influence of the external environment, chiefly the effect of the lighting scheme on the quality of the original image; perspective error and distortion introduced by the lens; errors caused by the selection of detection points during measurement; and errors introduced by the software algorithm.

5 Conclusions

Computer vision measurement systems are increasingly widely used in industrial production. Traditional manual measurement methods depend strongly on the operator: the operator's labor intensity is high, the efficiency is low, product quality cannot be effectively guaranteed, and many human errors may be introduced, so they cannot meet the requirements of large-batch, high-efficiency, and high-precision product inspection. It is therefore of great significance for raising the technical level of the manufacturing industry to study automatic dimension measurement systems for shaft parts with low cost, high precision, and high efficiency. In this paper, a CCD camera is used to capture images of the part in real time, and a series of preprocessing operations such as grayscale conversion and wavelet denoising is applied. An improved single-pixel edge detection method based on the Canny operator is proposed to extract the edge contour of the part image. Using programs written with the cross-platform computer vision library OpenCV, geometric algorithms are applied to measure the contour, from which the bottom radius and the height of the cylinder are obtained, and the repeatability error is tested and analyzed. The experimental results show that the repeatability error of the measuring system is less than 0.01 mm.

Abbreviations

CCD:

Charge-coupled device

CMOS:

Complementary metal-oxide-semiconductor

CV:

Computer vision

PCI:

Peripheral component interconnect

References

1. Y. Fang, I. Masaki, B. Horn, Depth-based target segmentation for intelligent vehicles: fusion of radar and binocular stereo. IEEE Trans. Intell. Transp. Syst. 3(3), 196–202 (2002)

2. R.S. Lu, Y.F. Li, Q. Yu, On-line measurement of the straightness of seamless steel pipes using machine vision technique. Sens. Actuators A 94(1), 95–101 (2001)

3. M. Brown, D.G. Lowe, Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 74(1), 59–73 (2007)

4. B. Triggs, Empirical filter estimation for subpixel interpolation and matching, in Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001) (IEEE, 2001), p. 550

5. S.D. Ma, Z.Y. Zhang, Computer Vision: Theory and Algorithm (Science Press, Beijing, 1998)

6. P. Kierkegaard, A method for detection of circular arcs based on the Hough transform. Mach. Vis. Appl. 5(4), 249–263 (1992)

7. H.K. Yuen, J. Princen, J. Illingworth, et al., Comparative study of Hough transform methods for circle finding. Image Vis. Comput. 8(1), 71–77 (1989)

8. R. Klette, Robot Vision (Hunan Literature and Art Publishing House, Changsha, 2001)

9. Q. Meng, R. Zhou, Research on restoration of composite-frame images degraded by motion. Comput. Eng. 32(13), 187–189 (2006)

10. M. Elad, A. Feuer, Super-resolution restoration of an image sequence: adaptive filtering approach. IEEE Trans. Image Process. 8(3), 387–395 (1999)

11. R. Keys, Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 29(6), 1153–1160 (2003)

12. L.G. Brown, A survey of image registration techniques. ACM Comput. Surv. 24(4), 325–376 (1992)

13. Y. Min, Method of straight line edge subpixel localization for mechanical part image. J. South China Univ. Technol. 31(12), 30–33 (2003)

14. J.H. Lee, Y.S. Kim, S.R. Kim, et al., Real-time application of critical dimension measurement of TFT-LCD pattern using a newly proposed 2D image-processing algorithm. Opt. Lasers Eng. 46(7), 558–569 (2008)

15. E.N. Malamas, E.G.M. Petrakis, M. Zervakis, et al., A survey on industrial vision systems, applications and tools. Image Vis. Comput. 21(2), 171–188 (2003)

16. M. Juneja, R. Mohan, An improved adaptive median filtering method for impulse noise detection. Int. J. Recent Trends Eng. 1(1), 274–278 (2013)

17. M.K. Lee, K.Y. Chan, A flexible inspection cell for machined parts. Comput. Ind. 30(30), 219–224 (1996)

18. G. Chen, K. Yang, R. Chen, et al., A gray-natural logarithm ratio bilateral filtering method for image processing. Chin. Opt. Lett. 6(9), 648–650 (2008)

19. J. Zheng, Y. Bai, B. Wang, et al., High-speed dynamic spectrum data acquisition system based on linear CCD. Chin. Opt. Lett. 9(s1), s10308 (2011)

20. S. Xiaojun, Z. Xiaohui, Z. Hu, et al., Inspection of surface finish of flat grinding based on machine vision. Chin. J. Lasers 35(s2), 320–323 (2008)


Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Availability of supporting data

We can provide the data.

About the authors

Li Bin was born in Sanhe, Hebei, P.R. China, in 1980. He received his master's degree from Hebei University of Technology, P.R. China. He now works in the School of Mechanical Engineering, Tianjin University of Technology and Education. His research interests include computational intelligence, robotics, and reliability analysis.

E-mail: libinfly@163.com

Funding

This work was supported by research development foundation of Tianjin University of Technology and Education Grant (No. KJ15-02).

Author information

Authors and Affiliations

Authors

Contributions

The author BL wrote the paper. The author read and approved the final manuscript.

Corresponding author

Correspondence to Bin Li.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Li, B. Research on geometric dimension measurement system of shaft parts based on machine vision. J Image Video Proc. 2018, 101 (2018). https://doi.org/10.1186/s13640-018-0339-x


Keywords