
  • Research
  • Open Access

Integrated development environment model for visual image processing based on Moore nearest neighbor model

EURASIP Journal on Image and Video Processing 2018, 2018:123

https://doi.org/10.1186/s13640-018-0363-x

  • Received: 3 July 2018
  • Accepted: 23 October 2018
  • Published:

Abstract

With the development of information technology, and of visual image processing technology in particular, more and more automated production processes, such as injection molding, use vision to realize automatic inspection of injection molds. However, the traditional integrated environments for visual image processing suffer from poor real-time performance. To solve this problem, this paper proposes an integrated development environment model for visual image processing based on the Moore nearest neighbor model. The model runs on a visualization platform, and the residues to be detected are highlighted by the Moore nearest neighbor model. To reduce the error caused by image shift, the model introduces a support vector machine as the classification method, and the model is evaluated in simulation experiments. The average accuracy of cavity residue detection with the model is 85.71%, and the average residue detection time is 0.910 s. The results show that the model solves the real-time and accuracy problems of traditional visual image processing models.

Keywords

  • Mold protection
  • Image processing
  • Moore nearest neighbor model
  • Integrated development environment

1 Introduction

Traditional mold injection production generally adopts a manipulator for mold compression and mold unloading. Because unloading of plastic products may not be thorough, the molds must be monitored manually at the production site; once any abnormal situation arises, the injection molding machine is immediately locked to prevent mold damage and the generation of defective products [1, 2]. However, the harsh environment and high risk at injection molding production sites are not conducive to long-term work by operators [3]. In addition, manual monitoring lacks stability, and labor costs continue to increase. These factors have driven the development of automation technology for the inspection of injection molds [4, 5].

The traditional protection methods for injection molding machine molds mainly include low-voltage protection, setting limit parameters, current monitoring, setting an upper time limit, and so on [6, 7]. These methods require the addition of sophisticated equipment, which increases cost and complicates the injection molding process [8]. Machine vision inspection is non-contact, high-precision, and efficient, and its application to the protection of injection molds has become a current research hotspot [9, 10]. An integrated development environment for image processing based on machine vision generally consists of an industrial personal computer, which runs the integrated development environment model for image processing, and a set of image acquisition and control equipment, which bridges the industrial personal computer and the injection molding machine [11]. On the model side, the host computer's interactive interface is used to set the region of interest, and the difference image between the image under test and the template image is obtained by background subtraction. Threshold segmentation and binarization are then carried out, and the black points in the binary image are counted to determine whether residue is present. Alternatively, after the region of interest is set through the host computer's interactive interface, light compensation and image alignment are applied as preprocessing; a differential operation is carried out between the binary image under test and the binary background image, and residue is detected according to a threshold value.
With the gray feature matching algorithm, the image is first filter-preprocessed, the Canny operator extracts the geometric contour features of the mold, and template matching is then carried out. The image background subtraction detection algorithm performs a similarity calculation through template matching [12, 13]. The region of interest is identified by template matching, and a similarity judgment is made between the gray level co-occurrence matrix template and the detection image. These algorithms require an industrial personal computer with relatively large memory and a high clock frequency to meet the real-time and precision requirements of image detection; such equipment is expensive, which limits its popularization and application. Visualized mold protection based on image processing has therefore become a new direction of exploration. In the literature, TQ2440 has been used to build a visual mold protection platform whose protection algorithm uses background subtraction; however, it does not consider the error caused by image shift due to mechanical vibration in real applications. A mold monitoring system based on TMS320DM6473 adopts dual detection combining a basic anomaly detection algorithm and a template matching algorithm based on the local binary pattern anomaly detection algorithm; the algorithm is complicated and its assumed conditions are excessively ideal [14]. In recent years, digital image technology has attracted great attention, developed considerably, and been widely applied. Research on image technology in universities, research institutes, and companies is increasingly deep and systematic. Great attention is paid to the inheritance of knowledge and algorithm reuse [15–21], the pursuit of high efficiency, and low cost. It is therefore necessary to provide an environment with powerful support for the learning, research, and development of image algorithms.

Real-time performance and precision are important indicators for visual inspection. In view of the problems in the above literature, this paper puts forward an integrated development environment model for visual image processing based on the fusion of the Moore nearest neighbor model. The model runs on the visualization platform, and the Moore nearest neighbor model is adopted to highlight the residue to be measured without the need for template matching, image registration, or correction. The system features a high detection precision rate, fast response, and strong adaptability.

2 Overall design of the integrated development environment for image processing

The overall architecture of the integrated development environment for visualized image processing is shown in Fig. 1. It includes an image processing and protection control unit, a camera module, an image buffering module, a communication module, a VGA display module, and a power supply module. The main functions are as follows.
  1. Image processing and protection control unit. Image processing is carried out on the mold cavity image of the injection molding machine, acquired in real time, to determine whether there is residue in the mold cavity. The control unit adopts a 32-bit dual-core visualization processor with a base frequency of 456 MHz.

  2. Camera module. The image processing and protection control unit controls the camera to acquire an image each time the mold cavity is opened.

  3. Cache module. The original image data and the image data output after each processing step are cached in a 128 MB DDR2 SDRAM.

  4. Communication module. The residue determination result of the image processing and protection control unit is transmitted to the controller of the injection molding machine through a MAX3232CUE chip.

  5. VGA display module. The collected raw image data and the image data after each processing step are displayed. A three-channel 10-bit high-speed video DA converter, CS7123, converts the image data output from the media interface for the VGA display.

  6. Alarm module. If the image processing and protection control unit determines that there is residue in the mold cavity, the buzzer and LED lamp on the processor's GPIO interface are activated to sound an alarm and flicker.
Fig. 1

Overall architecture of the system

3 Method—construction of the integrated development environment model for visual image processing

In the traditional integrated development environment for image processing, whether the background subtraction method or the template matching algorithm is adopted, the pixels of the template image and of the image under test must be matched and computed one to one. The error caused by image offset is therefore a problem that cannot be eradicated; it can only be mitigated by adding image registration and correction algorithms, which harms the real-time performance of the algorithm. The residue detection algorithm based on the fusion of the Moore nearest neighbor model highlights the residue directly, which effectively avoids the image offset problem. At the same time, support vector machine classification is introduced, which reduces the misjudgment caused by image offset and improves the detection precision.
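The paper does not spell out the Moore nearest neighbor operation itself. One plausible sketch, under the assumption that each pixel is compared with its eight Moore (8-connected) neighbors and the largest absolute difference is kept, so that residue edges stand out against the uniform cavity background:

```python
import numpy as np

def moore_highlight(img):
    """Highlight local discontinuities using the 8-connected Moore
    neighborhood: each pixel is replaced by the maximum absolute
    difference to its eight neighbors (a hypothetical reading of the
    paper's Moore nearest neighbor step)."""
    h, w = img.shape
    padded = np.pad(img.astype(np.int32), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.int32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            out = np.maximum(out, np.abs(img.astype(np.int32) - shifted))
    return out.astype(np.uint8)

flat = np.full((5, 5), 100, dtype=np.uint8)
flat[2, 2] = 160  # a bright "residue" pixel on a uniform background
print(moore_highlight(flat)[2, 2])  # 60: the defect stands out
```

Because the operation only compares a pixel with its own neighborhood, no template image is involved, which is why a small rigid shift of the whole image does not produce spurious differences the way background subtraction does.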

In general, the development of a new image processing algorithm goes through three phases: design of the core algorithm model, testing and implementation of the design, and experimentation. This is a cyclical process of optimization and improvement, and a number of tasks must be completed, as shown in Fig. 2. Research on a new algorithm can become very complicated because of the need to implement basic image algorithms and program the graphical user interface (GUI); as a result, efficiency is reduced. The work involved in development can be divided into two categories, as shown in Fig. 2.
Fig. 2

Model for the research and development process of the image processing algorithm

Definition 1: Researchers developing core algorithms are most interested in the modeling of the image processing algorithm and its basic programming implementation. Such an algorithm, whether new or standard, constitutes the "effective work", denoted by Weffective.

Definition 2: The assisted development that supports the core algorithm, namely research and development of basic image algorithms and development of the GUI visualization for human-computer interaction, is the "extra work", denoted by Wextra.

Therefore, the development efficiency is \( Z = \frac{W_{\mathrm{effective}}}{W_{\mathrm{effective}} + W_{\mathrm{extra}}} \). Since Weffective is generally difficult to change, it is preferable to provide algorithm and interface resource reuse and inheritance mechanisms in the development environment to reduce or avoid Wextra and thereby improve efficiency.
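As a small numeric illustration of Z (the workload figures below are made up for the example):

```python
def development_efficiency(w_effective, w_extra):
    """Z = W_effective / (W_effective + W_extra); reuse drives W_extra toward 0."""
    return w_effective / (w_effective + w_extra)

# Illustrative (made-up) workloads in person-days:
print(development_efficiency(10, 30))  # 0.25 without reuse
print(development_efficiency(10, 2))   # 0.8333333333333334 with reuse
```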

The model put forward in this paper includes the processing mode and the operating mode.

The flow of the visual image processing mode is shown in Fig. 3, and the specific steps are as follows:
  • Step 1: Obtain the original image data.

  • Step 2: Carry out gray scale processing on the original image.

Fig. 3

Flow of the visualized image processing mode

The specific process of the gray scale algorithm is as follows: the R, G, and B components of each pixel are obtained; a gray value temp is calculated from them in accordance with Eq. (1); the original pixel is overwritten with temp; and the process is repeated in a loop from the first pixel to the last.
$$ \mathrm{temp} = \left(299r + 587g + 114b + 500\right)/1000. $$
(1)
Here, r, g, and b represent the RGB component values of the pixel.
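Eq. (1) is the usual integer luminance approximation; a minimal sketch:

```python
def rgb_to_gray(r, g, b):
    """Integer grayscale per Eq. (1): (299r + 587g + 114b + 500) // 1000.
    The +500 term rounds to the nearest integer under integer division."""
    return (r * 299 + g * 587 + b * 114 + 500) // 1000

print(rgb_to_gray(255, 255, 255))  # 255 (white stays white)
print(rgb_to_gray(255, 0, 0))      # 76  (red maps to a dark gray)
```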
  • Step 3: Carry out the median filtering on the gray scale images.

The specific flow of the median filtering algorithm is as follows: a 3 × 3 sliding window W is used to obtain the gray values of 9 pixels, which are processed in accordance with Eq. (2). The template W traverses the entire image, and the pixel value at the center of the region covered by the template is replaced by the median value of that region.
$$ g\left(x,y\right)=\operatorname{med}\left\{f\left(x-k,y-l\right)\mid \left(k,l\right)\in W\right\}, $$
(2)
in which med stands for the median operation: the pixels within the template W are sorted and their median value is taken. f(x, y) and g(x, y) stand for the original image and the processed image, respectively.
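A direct, unoptimized sketch of Eq. (2) with a 3 × 3 window (borders are edge-padded, an assumption the paper does not state):

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter per Eq. (2): each pixel is replaced by the
    median of the 9 pixels in its window; borders are edge-padded."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

noisy = np.full((5, 5), 50, dtype=np.uint8)
noisy[2, 2] = 255  # a salt-noise impulse
print(median_filter_3x3(noisy)[2, 2])  # 50: the impulse is removed
```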
  • Step 4: Carry out the three-layer wavelet decomposition on the filtered gray scale image.

The two-dimensional wavelet transform is the basis of image wavelet decomposition, and the wavelet basis function is as follows:
$$ {\Psi}_{a,\tau }(t)=\frac{1}{\sqrt{a}}\Psi \left(\frac{t-\tau }{a}\right),\kern1em a,\tau \in R;\ a>0, $$
(3)
in which a stands for the scaling factor and τ for the translation factor. Let Ψ(x, y) stand for the two-dimensional basic wavelet; then the two-dimensional wavelet transform can be defined as follows:
$$ {WT}_f\left(a;{b}_1,{b}_2\right)=\left\langle f\left(x,y\right),{\Psi}_{a;{b}_1,{b}_2}\left(x,y\right)\right\rangle =\frac{1}{a}\iint f\left(x,y\right){\Psi}^{\ast}\left(\frac{x-{b}_1}{a},\frac{y-{b}_2}{a}\right)\, dx\, dy, $$
(4)

in which \( {\Psi}_{a;{b}_1,{b}_2}\left(x,y\right)=\frac{1}{a}\Psi \left(\frac{x-{b}_1}{a},\frac{y-{b}_2}{a}\right) \) stands for the scale expansion and two-dimensional displacement of Ψ(x, y), and \( \frac{1}{a} \) is the normalization factor introduced to ensure that the energy is unchanged before and after the wavelet transform.

After the image is decomposed by the two-dimensional wavelet, the low-frequency coefficient, horizontal high-frequency coefficient, vertical high-frequency coefficient, and oblique high-frequency coefficient of the image can be obtained.
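The paper does not specify the wavelet basis; as an illustration, one level of the averaging form of the Haar transform already yields the four sub-bands named above (the function name and normalization are this sketch's own choices, not the paper's):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar wavelet decomposition (a simple stand-in
    for the paper's three-layer transform). Returns the low-frequency
    band LL and the horizontal (LH), vertical (HL), and diagonal (HH)
    high-frequency detail bands."""
    a = img.astype(np.float64)
    # Row transform: pairwise averages (low) and differences (high).
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    # Column transform on both row bands.
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

img = np.array([[10, 10, 20, 20],
                [10, 10, 20, 20],
                [10, 10, 20, 20],
                [10, 10, 20, 20]], dtype=np.float64)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.tolist())  # [[10.0, 20.0], [10.0, 20.0]]
```

Applying the same decomposition to the LL band twice more gives the three-layer structure the paper uses.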
  • Step 5: In accordance with Eq. (5), the low-frequency decomposition coefficients of the image after the three-layer wavelet transform are enhanced and the high-frequency decomposition coefficients are attenuated so as to achieve image enhancement, that is, wavelet passivation:

$$ \left\{\begin{array}{l}c(i)=4\ast c(i);c(i)>405\\ {}c(i)=0.1\ast c(i);c(i)\le 405\end{array}\right., $$
(5)
in which c(i) stands for the coefficients after the two-dimensional wavelet decomposition of the image.
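Eq. (5) as written is a simple magnitude gate on the coefficients; a minimal sketch (the threshold 405 is the paper's, the vectorized form is this sketch's):

```python
import numpy as np

def passivate(coeffs, threshold=405.0):
    """Eq. (5): amplify coefficients above the threshold by 4x and
    attenuate the rest by 0.1x, sharpening strong (residue) structure."""
    c = np.asarray(coeffs, dtype=np.float64)
    return np.where(c > threshold, 4.0 * c, 0.1 * c)

print(np.round(passivate([500.0, 100.0]), 6).tolist())  # [2000.0, 10.0]
```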
  • Step 6: Binarization is carried out on the image after the Moore nearest neighbor model processing, in accordance with Eq. (6):

$$ g\left(x,y\right)=\left\{\begin{array}{l}0,\kern1em f\left(x,y\right)<\mathrm{threshold}\\ {}1,\kern1em f\left(x,y\right)\ge \mathrm{threshold}\end{array}\right. $$
(6)
in which f(x, y) stands for the image after wavelet decomposition and reconstruction, g(x, y) for the binarized image, and threshold for the threshold value selected for binarization.
  • Step 7: The numbers of 0 and 1 pixels in the binarized image are counted, and the statistics are stored as an array, for example, image[i] = {pixel_0_nums, pixel_1_nums}.
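Steps 6 and 7 together can be sketched as one small function (the threshold value here is arbitrary, not one chosen by the paper):

```python
import numpy as np

def binarize_and_count(img, threshold=128):
    """Eq. (6) plus Step 7: threshold the image to {0, 1}, then store
    the pixel counts as the two-element feature array image[i]."""
    binary = (img >= threshold).astype(np.uint8)
    pixel_1 = int(binary.sum())
    pixel_0 = binary.size - pixel_1
    return [pixel_0, pixel_1]

img = np.array([[10, 200], [130, 40]], dtype=np.uint8)
print(binarize_and_count(img))  # [2, 2]
```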

  • Step 8: For multiple images of the same mold cavity, steps 1–7 are repeated to obtain the array of each image. The arrays are entered into the support vector machine: an array with the residue characteristic is labeled 1, and an array without it is labeled −1. The binary mathematical model that determines whether residue is present is obtained as follows: select the linear kernel function K(xi, xj) = ⟨xi, xj⟩ and the two classes of samples (xi, yi), i = 1, 2, …, n, in which n stands for the number of samples. When xi belongs to class ω1, which contains residue, yi = 1; when xi belongs to class ω2, which does not contain residue, yi = −1. The support vector machine is trained, that is, the parameters a* and b* in the classification model \( f(x)=\operatorname{sgn}\left\{\sum \limits_{i=1}^n{a}_i^{\ast }{y}_i\left({x}_i\cdot x\right)+{b}^{\ast}\right\} \) are obtained.
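A minimal sketch of this decision function with made-up support vectors, multipliers ai, and bias b (not trained values from the paper; the two support vectors reuse the pixel counts reported in Section 4 purely for scale):

```python
import numpy as np

def svm_decision(x, support_vectors, alphas, labels, b):
    """Step 8 decision rule with a linear kernel:
    f(x) = sgn(sum_i a_i * y_i * (x_i . x) + b).
    Returns +1 (residue present) or -1 (no residue)."""
    s = sum(a * y * float(np.dot(xi, x))
            for a, y, xi in zip(alphas, labels, support_vectors))
    return 1 if s + b > 0 else -1

# Hypothetical "trained" parameters over [pixel_0_nums, pixel_1_nums]:
sv = [np.array([4523.0, 315477.0]),   # residue sample,      y = +1
      np.array([2023.0, 317177.0])]   # residue-free sample, y = -1
alphas, labels, b = [1e-9, 1e-9], [1, -1], 0.53

print(svm_decision(np.array([4400.0, 315600.0]), sv, alphas, labels, b))  # 1
print(svm_decision(np.array([2100.0, 317100.0]), sv, alphas, labels, b))  # -1
```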

The operation flow of the integrated development environment model for image processing is shown in Fig. 4. And the specific steps are as follows:
  • Step 1: Obtain the judgment image data.

  • Step 2: Carry out gray scale processing on the judgment image.

  • Step 3: Carry out filtering on the gray scale image.

  • Step 4: Carry out the three-layer wavelet transform on the filtered gray scale image.

  • Step 5: Carry out the Moore nearest neighbor model processing on the image after the three-layer wavelet transform to highlight the profile of the residue.

  • Step 6: Carry out binarization on the image after performing the Moore nearest neighbor model processing.

  • Step 7: Perform the statistics on the number of 0 and 1 pixels for the binarized image and store the statistical data as an array.

  • Step 8: The array obtained in step 7 is evaluated with the binary mathematical model to determine whether it corresponds to an image containing residue.

  • Step 9: The determination result is transmitted to the controller of the injection molding machine through the communication module. If residue is present, an alarm is raised through the buzzer and LED light of the alarm module.

Fig. 4

Flow of algorithm operation mode

4 Experiment and discussion of results

In the verification experiment, the gray level co-occurrence matrix matching algorithm and the background subtraction method are adopted as comparison algorithms. Such image processing algorithms place stringent demands on the visual hardware platform and may not run on the visualization platform at all. Therefore, in the experiment, the algorithms are compared in MATLAB on the same PC platform. Seven sets of 800 × 400 pixel cavity images are used as image samples. Each set contains ten mold cavity images with residues and ten without. Of these, four images with residues and four without are used as templates for training the support vector machine, and the remaining 12 samples are used as test samples for the algorithm.

Sample mold 5 is taken as an example; the original images are shown in Fig. 5. Figure 5a shows the presence of residue, and Fig. 5b shows no residue. The model put forward in this paper is used to carry out gray scale processing and median filtering on the images of mold 5, with the effect shown in Fig. 6. The images after the Moore nearest neighbor model processing and binarization are shown in Fig. 7, in which the outline of the residue in panel (a) is effectively highlighted.
Fig. 5

Cavity original image

Fig. 6

Cavity image after the gray scale and median filtering processing

Fig. 7

Moore nearest neighbor model and cavity image after binarization

After the Moore nearest neighbor model is applied, the image is binarized and the numbers of 0 and 1 pixels are counted. For panel (a), with residue, the 0 and 1 pixel counts are 4523 and 315,477, respectively; for panel (b), without residue, they are 2023 and 317,177. The binarization results in Fig. 7 show that the proposed method highlights the residue in the image well.

To further emphasize the features of the part and facilitate feature extraction, the outer contour of the part is extracted, as shown in Fig. 8. As can be seen from Fig. 8, the contour makes the features of the part more visible: panels a and c are the original drawings of the part, and panels b and d are its contours.
Fig. 8

Part outline

After the pixel statistics of the multiple images of this mold cavity are obtained, the support vector machine based on the linear kernel function is used for processing, and the detection precision rate for mold cavity residues is 91.66%. The same experimental conditions are used to obtain results for the gray level co-occurrence matrix matching algorithm and the background subtraction method. The precision of the three algorithms in residue detection is shown in Table 1, and the operation time of each algorithm in Table 2.
Table 1 Detection precision of residues in the mold cavity

| Algorithm | Mold cavity 1 | 2 | 3 | 4 | 5 | 6 | 7 | Mean value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Model put forward in this paper | 91.67% | 66.6% | 50% | 100% | 91.66% | 100% | 100% | 85.71% |
| Gray level co-occurrence matrix matching algorithm | 88% | 50% | 20% | 50% | 50% | 54% | 72% | 54.86% |
| Background subtraction method | 70% | 60% | 50% | 54.5% | 63.6% | 50% | 73% | 60.16% |

Table 2 Time consumption for the detection of residues in the mold cavity

| Algorithm | Mold cavity 1 | 2 | 3 | 4 | 5 | 6 | 7 | Mean value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Model put forward in this paper | 0.869 s | 0.99 s | 0.89 s | 0.876 s | 1.013 s | 0.868 s | 0.869 s | 0.910 s |
| Gray level co-occurrence matrix matching algorithm | 24 s | 24 s | 25 s | 20 s | 24 s | 23 s | 27 s | 23.85 s |
| Background subtraction method | 1.34 s | 1.26 s | 0.96 s | 0.79 s | 1.00 s | 0.73 s | 1.30 s | 1.05 s |

5 Conclusions

The integrated development environment for visualized image processing designed in this paper adopts a visual hardware platform, which overcomes the limitation that image detection systems in injection molding production mainly run on industrial personal computers. The Moore nearest neighbor model is adopted to highlight the residues to be detected, which also suits the small memory of the visualization platform, and a support vector machine algorithm based on pixel value statistics is put forward. The experimental results show that both the real-time performance and the precision rate of the integrated development environment model based on the fusion of the Moore nearest neighbor model are superior to those of traditional image processing methods. The remaining weakness of the algorithm is its poor detection of images acquired under abnormal illumination conditions. The next optimization step is to add photometric compensation for such images, so that the system performance can further meet the practical demands of injection molding production.

The model is applied to mold sets 1, 4, 5, 6, and 7, where the residue detection precision rates are all above 91%. The detection precision for the second and third mold sets is relatively low. Analysis of those two sets of cavity images shows that the light intensity varied during image acquisition, so that after wavelet transform and binarization, the 0 and 1 pixel counts of cavity images with and without residue are relatively close. In the next optimization step, photometric compensation need only be applied to such situations. According to the experimental data in Tables 1 and 2, the model put forward in this paper achieves a higher precision rate and lower time consumption than the gray level co-occurrence matrix matching algorithm and the background subtraction method.

Declarations

Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

About the authors

Weiping Zhang was born in Shanghai, China, in 1982. He received the Ph.D. degree in computer science from Rostock University, Rostock, Germany. From 2014 to 2015, he was a Research Scientist with the Institute of BCRT in Berlin, Humboldt University, Berlin, Germany. He is currently an Associate Professor with the School of Information Engineering, Nanchang University, Nanchang, China. His research interests include machine learning, process information management systems, and real-time mobile measurements of physiological parameters.

Xiujuan Li was born in Shandong province, China, in 1988. She received her master degree in Geographic Information System from Wuhan University. During her study period, her research interest focused on indoor and outdoor 3D visualization and location. Now, she works in Industrial Technology Research Institute of Zhejiang University, mainly studying wisdom water supply model and big data system.

Yalin Li was born in Hebei province, China, in 1990. He received the master degree in Oil and Natural Gas Engineering from China University of Petroleum-Beijing. During his master degree study period, his research interests include quantitative analysis method of main controlling factors and productivity prediction in oil and gas wells based on machine learning. After graduation, he joined Research Center for Big Data multi-discipline Applied Scientific, Binhai Industrial Technology Research Institute of Zhejiang University, as a machine learning and algorithmic engineer.

Mohit Kumar is the Professor for Computational Intelligence in Automation in Faculty of Computer Science and Electrical Engineering, University of Rostock, Germany. He also holds the position of Research and Development Technical Director in Binhai Industrial Technology Research Institute of Zhejiang University, Tianjin, China. His research vision is to deliver a rigorous mathematical framework of intelligent computing algorithms with applications in machine learning, data analysis, signal modeling and analysis, image matching and mining, process identification, and optimization to fully utilize the potential of artificial intelligence paradigm for addressing challenging problems of real-world.

Yihua Mao, PhD, Professor. He was born in Zhejiang Province in 1963 and got his PhD degree in Zhejiang University. Now he serves in the College of Civil Engineering and Architectural of Zhejiang University. He also serves as the deputy dean of Industrial Technology Research Institute of Zhejiang University and the dean of Binhai Industrial Technology Research Institute of Zhejiang University. His main research fields: the engineering economy and project cost, construction project management, system design and large data analysis technology, enterprise strategy, and technology innovation management.

Funding

This work was supported by the National Natural Science Foundation of China (61662045) and the Special Program of talents Development for Excellent Youth Scholars in Tianjin.

Availability of data and materials

The data supporting the findings of this study can be provided by the authors upon request.

Authors’ contributions

All authors took part in the discussion of the work described in this paper. WZ and YM wrote the first version of the paper. JY and YL performed part of the experiments. MK revised successive versions of the paper. All authors read and approved the final manuscript.

Competing interests

There are no potential competing interests in our paper. All authors have seen the manuscript and approved its submission. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Electronic Information Engineering, Nanchang University, Nanchang, China
(2)
Binhai Industrial Technology Research Institute of Zhejiang University, Tianjin, China
(3)
Zhejiang University College of Civil Engineering and Architecture, Hangzhou, China

References

  1. S. Paris, S.W. Hasinoff, J. Kautz, Local Laplacian filters: edge-aware image processing with a Laplacian pyramid. Commun. ACM 58(3), 81–91 (2015)
  2. J. Ragan-Kelley, C. Barnes, A. Adams, S. Paris, S. Amarasinghe, Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. ACM SIGPLAN Not. 48(6), 519–530 (2013)
  3. A. Amara, S.P. Quanz, PynPoint: an image processing package for finding exoplanets. Mon. Not. R. Astron. Soc. 427(2), 948–955 (2018)
  4. R.C. Chen, D. Dreossi, L. Mancini, R. Menk, L. Rigon, T.Q. Xiao, PITRE: software for phase-sensitive X-ray image processing and tomography reconstruction. J. Synchrotron Radiat. 19(5), 836–845 (2012)
  5. J. Ragan-Kelley, Decoupling algorithms from schedules for easy optimization of image processing pipelines. ACM Trans. Graph. 31(4), 13–15 (2012)
  6. A.G. York, P. Chandris, D.D. Nogare, J. Head, P. Wawrzusin, R.S. Fischer, Instant super-resolution imaging in live cells and embryos via analog image processing. Nat. Methods 10(11), 1122–1126 (2013)
  7. C. Kim, B. Kim, H. Kim, 4D CAD model updating using image processing-based construction progress monitoring. Autom. Constr. 35(14), 44–52 (2013)
  8. R. Zhou, L. Damerow, Y. Sun, M.M. Blanke, Using colour features of cv. 'Gala' apple fruits in an orchard in image processing to predict yield. Precis. Agric. 13(5), 568–580 (2012)
  9. S. Ercisli, B. Sayinci, M. Kara, C. Yildiz, I. Ozturk, Determination of size and shape features of walnut (Juglans regia L.) cultivars using image processing. Sci. Hortic. 133(1), 47–55 (2012)
  10. D. Han, X. Yuan, W. Zhang, An augmented Lagrangian based parallel splitting method for separable convex minimization with applications to image processing. Math. Comput. 83(289), 2263–2291 (2014)
  11. A.K.H. Kwan, C.F. Mora, H.C. Chan, Particle shape analysis of coarse aggregate using digital image processing. Cem. Concr. Res. 29(9), 1403–1410 (1999)
  12. M. Olech, Ł. Komsta, R. Nowak, Ł. Cieśla, M. Waksmundzka-Hajnos, Investigation of antiradical activity of plant material by thin-layer chromatography with image processing. Food Chem. 132(1), 549–553 (2012)
  13. J. Kafashan, B. Tijskens, H. Ramon, Shape modelling of fruit by image processing. Commun. Agric. Appl. Biol. Sci. 70(2), 161–164 (2005)
  14. Y. Fujimori, T. Fujimori, J. Imura, T. Sugai, T. Yao, R. Wada, An assessment of the diagnostic criteria for sessile serrated adenoma/polyps: SSA/Ps using image processing software analysis for Ki67 immunohistochemistry. Diagn. Pathol. 7(1), 1–5 (2012)
  15. B. Xu, B. Pourdeyhimi, J. Sobus, Fiber cross-sectional shape analysis using image processing techniques. Text. Res. J. 63(12), 717–730 (2016)
  16. X. Jiang, J. Liu, Y. Chen, D. Liu, Y. Gu, Z. Chen, Feature adaptive online sequential extreme learning machine for lifelong indoor localization. Neural Comput. & Applic. 27(1), 215–225 (2016)
  17. M. Bested, A.H. Weisberg, F.A. Durão, A social interactive whiteboard system using finger-tracking for mobile devices. Multimedia Tools Appl. 76(4), 5367–5397 (2017)
  18. M.A.Z. Chahooki, N.M. Charkari, Bridging the semantic gap for automatic image annotation by learning the manifold space. Comput. Syst. Sci. Eng. 30(4), 303–316 (2015)
  19. M. Xu, Q.-S. Sun, C. Huang, J. Shi, Object motion detection and data processing in large-scale particle image velocimetry. Intell. Autom. Soft Comput. 23(4), 653–660 (2017)
  20. Z. Chen, C. Zhou, M. Li, A.-X. Zhu, Q. Huang, Y. Pian, Adaptable parallel strategy to extract polygons from massive classified images on multi-core clusters. Concurrency Comput. Pract. Exp. 29(4), e3861 (2017)
  21. B. Bhagavathsingh, K. Srinivasan, M. Natrajan, Real time speech based integrated development environment for C program. Circuits Syst. 7(3), 69–82 (2016)

Copyright

© The Author(s). 2018
