Recognition and localization of actinidia arguta based on image recognition


Abstract

During picking, Actinidia arguta poses image-recognition and occlusion difficulties, and studies on kiwifruit recognition remain scarce. Accordingly, this study performs basic image processing with a color model and enhances the images in the frequency domain. Specifically, the original kiwifruit-orchard images are filtered in the frequency domain, and homomorphic filtering is applied to enhance them, highlighting the characteristics of the Actinidia arguta trunk and reducing the influence of background noise on trunk recognition. In addition, a binocular stereo vision system is used for fruit localization to improve recognition accuracy. Finally, the effectiveness of the method is verified experimentally. The results show that the proposed algorithm performs well and can provide a theoretical reference for subsequent related research.

Introduction

Kiwifruit requires intensive picking during the maturity period, and picking is one of the most time-consuming steps in kiwifruit cultivation; at present it is done manually. With large numbers of rural workers migrating to cities and the population aging, rural labor resources are increasingly strained, and the demand for agricultural machinery is increasingly urgent. For the kiwifruit picking robot to reach a practical level, improving its picking efficiency is the key [1].

In the 1960s, the American scholars Schertz and Brown first proposed using robots to pick fruit. Since then, various fruit- and vegetable-picking robots have been studied extensively, with substantial results [2]. In 2004, Zhang Tiezhong used a back-propagation (BP) neural network algorithm to segment mature strawberry fruit from the background. The algorithm converts the acquired image into the hue, saturation, value (HSV) color space and uses the H-channel values of each pixel's 3 × 3 neighborhood as the input of a 3-layer BP neural network. Twenty images of 400 × 300 pixels were used for training; after 100 cycles, the experimental error was about 0.001, indicating that the trained network weights segment the image well [3]. In 2010, Lu Qiang et al. proposed a method for identifying citrus in natural scenes. The method segments citrus images with the maximum between-class variance method based on the G-B color-difference component. If fruits overlap, a watershed segmentation algorithm based on the distance transform is used; if leaf occlusion occurs, a minimum convex hull algorithm is used to reduce the influence of occlusion on recognition. Finally, least-squares fitting is performed on the segmented image to determine the center and radius of each citrus fruit. A series of experiments showed that the misrecognition rate of this algorithm is 2.87% [4]. In 2015, Zhao De'an et al. of Jiangsu University proposed an algorithm combining an improved R-G color-difference segmentation algorithm with a secondary segmentation method to identify kiwifruit images at night. The experimental results show that the algorithm's correct recognition rate for a single unoccluded kiwifruit is 83.7% [5].

Binocular vision is an important branch of computer vision. With binocular stereo vision, researchers can obtain stereo information about an object and compute its three-dimensional coordinates to locate it [6]. Binocular vision technology originated in 1977, when Professor Marr in the USA proposed the computational theory of binocular vision. Since then the theory has been refined continuously, and its application to fruit- and vegetable-picking robots has steadily matured [7]. In 2002, Professor Takahashi of Hirosaki University in Japan used binocular stereo vision to locate kiwifruit in the orchard. First, the three-dimensional space is divided into several cross-sections according to line-of-sight distance, and the three-dimensional information of the object is then reconstructed by integrating the two-dimensional images of these cross-sections. Experiments showed that the depth-distance measurement error of this method is within ± 5%; when there are 20~30 fruits in the image, the fruit discrimination rate is at least 90% [8]. In 2005, Kitamura of Kochi University in Japan used parallel binocular stereo vision to pick green peppers and built a prototype consisting of three parts: image processing, camera positioning, and a cutting device. The image-processing part includes two CCD cameras, an image-capture card, and an image-processing application; the two cameras are placed in parallel during acquisition. For recognition, the prototype integrates illumination and stereo vision technology, and the cutting part is controlled by a camera-positioning system and a visual feedback system [9]. In 2015, Ivorra et al. proposed using binocular stereo vision to assess the quality of mature grapes.
The method obtains a three-dimensional model of grape clusters by binocular stereo vision and then evaluates the quality of mature grapes using an SVM model based on a new three-dimensional descriptor. The algorithm evaluates well in the laboratory, but its performance in real environments is unknown [10]. Compared with related research abroad, domestic application of binocular vision to fruit- and vegetable-picking robots started late, but through the efforts of several scholars it has made progress [11]. In 2001, Zhang Ruihe et al. segmented tomato images according to a two-dimensional histogram curve, performed area-based stereo matching, and finally computed the spatial coordinates of the target. Experiments showed that when the distance between the target and the camera is 300~400 mm, the error between the theoretical and actual values is 3~4% [12]. In 2010, Si Yongsheng et al. used a normalized red-green difference to segment kiwifruit and obtain their outlines, extracted the center and radius of each kiwifruit with a random-ring method, and applied the epipolar constraint to perform area-feature-based stereo matching on the kiwifruit images; kiwifruit of similar size were matched using an ordering-consistency constraint. In experiments on 130 images, the recognition success rate of the algorithm is 92% within the range of 60~150 cm, and the depth-measurement error is less than 2 cm [13]. In 2015, Guo Aixia et al. combined Harris corner detection with binocular stereo matching based on cosine distance to locate litchi picking points. The experimental results show that the algorithm can meet the complex operating requirements of picking litchi in clusters, with a matching success rate of 89.55% [14].

The above analysis shows that image recognition has achieved notable results in fruit and vegetable picking, but studies on kiwifruit remain few, and kiwifruit presents particular challenges. It is therefore necessary to analyze the actual situation and design a fruit-recognition method suited to kiwifruit to fill this gap. Based on image recognition technology [15], this study analyzes the characteristics of kiwifruit and combines image recognition to achieve effective recognition and localization of kiwifruit, laying a foundation for subsequent kiwifruit picking.

Research methods

Image processing

The original photo captured by the navigation camera in this article is a color RGB image. A color model is a description of a color system developed in the field of image processing. The purposes of color modeling are to represent colors in terms of certain primaries and to establish the color model in a three-dimensional coordinate system; the RGB model is based on a Cartesian coordinate system, with the color subspace forming the cube shown in Fig. 1. All color values are normalized, so all RGB values are assumed to lie in the range [0, 1]. Figure 1 shows a common color matrix [16].

Fig. 1
figure1

RGB color matrix

In the RGB color space, kiwifruit leaves and ground weeds cannot be separated in the R, G, or B component images. Moreover, because kiwifruit trees are grown on trellises, the orchard environment is heavily shadowed, so grass, leaves, and trunks are mixed and cannot be separated in subsequent processing. In the HSV space, the boundary between the kiwifruit tree ridge and the ground grass and leaves is clear in the H component of the orchard image, so tree-row information is easy to extract and the influence of shadow is suppressed. In the S component, the middle and upper branches share the same features as the grassland, and both the trunk and the ridge are blurred, so extracting the kiwifruit tree row is difficult. Meanwhile, the tree-row information in the V component is too close in gray level to the ground to be separated. From this analysis, the H-channel grayscale image in HSV space can effectively separate the kiwifruit tree ridges from the background, so this study converts RGB to HSV. The specific process is as follows:

There is a conversion relationship between the HSV and RGB color spaces. The value component V is calculated by Eq. (1), and the saturation component S by Eq. (2) [17]:

$$ V=\max \left(R,G,B\right) $$
(1)
$$ S=\frac{V-\min \left(R,G,B\right)}{V} $$
(2)

The formula for calculating the hue component H can be expressed as:

$$ H=\left\{\begin{array}{ll}\dfrac{G-B}{\max \left(R,G,B\right)-\min \left(R,G,B\right)}\times {60}^{\circ}, & R=\max \left(R,G,B\right)\\ \left(2+\dfrac{B-R}{\max \left(R,G,B\right)-\min \left(R,G,B\right)}\right)\times {60}^{\circ}, & G=\max \left(R,G,B\right)\\ \left(4+\dfrac{R-G}{\max \left(R,G,B\right)-\min \left(R,G,B\right)}\right)\times {60}^{\circ}, & B=\max \left(R,G,B\right)\end{array}\right. $$
(3)
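As an illustration, the per-pixel RGB-to-HSV conversion of Eqs. (1)-(3) can be sketched as follows (the function name and scalar interface are ours, not from the paper; an image-processing library would normally vectorize this over the whole image):

```python
def rgb_to_hsv(r, g, b):
    """Convert normalized RGB values in [0, 1] to (H, S, V) per Eqs. (1)-(3).

    H is returned in degrees [0, 360); S and V lie in [0, 1].
    """
    v = max(r, g, b)                      # Eq. (1): value component
    mn = min(r, g, b)
    s = 0.0 if v == 0 else (v - mn) / v   # Eq. (2): saturation component
    if v == mn:                           # achromatic pixel: hue undefined, use 0
        h = 0.0
    elif v == r:                          # Eq. (3), R is the maximum
        h = 60.0 * ((g - b) / (v - mn))
    elif v == g:                          # Eq. (3), G is the maximum
        h = 60.0 * (2.0 + (b - r) / (v - mn))
    else:                                 # Eq. (3), B is the maximum
        h = 60.0 * (4.0 + (r - g) / (v - mn))
    return h % 360.0, s, v
```

Only the H channel is retained in this study for separating the tree ridges from the background.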

Image enhancement

The original image captured by the vision system contains considerable noise and distortion, so image quality must be improved before analysis. The main purposes of enhancing the kiwifruit-orchard images are to eliminate noise, improve contrast, and make the images more amenable to processing by the vision system. Spatial-domain enhancement and frequency-domain enhancement are the two major classes of image enhancement. This study mainly uses frequency-domain enhancement: the original kiwifruit-orchard image is filtered in the frequency domain. According to signal-analysis theory, frequency-domain filtering relies on the Fourier transform and the convolution theorem. Let the original image be f(i, j) and the filter kernel be h(i, j); the enhanced kiwifruit-orchard image g(i, j) is their convolution, as in Eqs. (4) and (5).

$$ g\left(i,j\right)=f\left(i,j\right)\bigotimes h\left(i,j\right) $$
(4)
$$ G\left(u,v\right)=F\left(u,v\right)H\left(u,v\right) $$
(5)

where G, F, and H are the Fourier transforms of g(i, j), f(i, j), and h(i, j), respectively, and H(u, v) is the filter function, that is, the transfer function. The image-filtering process comprises three steps: (1) the original orchard image f(i, j) is Fourier transformed to obtain F(u, v); (2) F(u, v) is multiplied point-wise by H(u, v) to obtain G(u, v); (3) the inverse Fourier transform of G(u, v) yields the enhanced image g(i, j).
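The three filtering steps can be sketched with NumPy's FFT routines (a minimal sketch; the transfer function H is assumed to be supplied as a centered array of the same shape as the image):

```python
import numpy as np

def frequency_domain_filter(f, H):
    """Apply a frequency-domain transfer function H(u, v) to an image f(i, j).

    Follows the three steps in the text: forward Fourier transform,
    point-wise multiplication by the transfer function, inverse transform.
    H must have the same shape as f and be centered (zero frequency in the
    middle of the array).
    """
    F = np.fft.fftshift(np.fft.fft2(f))    # step (1): Fourier transform, centered
    G = F * H                              # step (2): point-wise multiplication
    g = np.fft.ifft2(np.fft.ifftshift(G))  # step (3): inverse Fourier transform
    return np.real(g)                      # enhanced image (imaginary part ~ 0)
```

With an all-ones transfer function the image passes through unchanged, which is a convenient sanity check.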

Filtering

The environment of a trellis-cultivated kiwifruit orchard is complex, so the picking robot faces severe environmental shadows. There are many ways to process images containing shadows; homomorphic filtering suits kiwifruit-orchard images well because it handles highlights and reflective areas under strong illumination. This paper analyzes how to overcome the influence of changing light intensity on the processing of kiwifruit-orchard images so that the kiwifruit trunk can be segmented effectively as the recognition target. Homomorphic filtering is used to enhance the orchard images, highlighting the trunk characteristics and reducing the influence of background noise on trunk recognition. Homomorphic filtering is a nonlinear technique based on the generalized superposition principle and enhances image contrast in the frequency domain. The image f(x, y) is expressed as the product of an illumination component i(x, y) and a reflectance component r(x, y), as in Eq. (6).

$$ f\left(x,y\right)=i\left(x,y\right)\cdot r\left(x,y\right) $$
(6)

The basic principle of homomorphic filtering is that the pixel gray value is the product of illuminance and reflectance: the low-frequency component of the image corresponds to illuminance and the high-frequency component to reflectance. Details in shaded areas are recovered by processing the relationship among illuminance, reflectance, and pixel gray value. Enhancing the image with homomorphic filtering effectively improves the segmentation result and reduces the influence of shadow. The homomorphic filter used in this study is a modified high-pass filter, whose transfer function H(i, j) is given by Eq. (7):

$$ H\left(i,j\right)=\left( rh- rl\right)\times \left(-{e}^{-c{\left(\frac{D\left(i,j\right)}{d_0}\right)}^2}\right)+ rh $$
(7)
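A minimal sketch of the whole homomorphic pipeline follows, building the transfer function of Eq. (7) and applying it in the log domain, where the product of Eq. (6) becomes a sum. The values c = 4 and d0 = 4.8 follow the paper; rh and rl are illustrative defaults, since only the constraints rh > 1 and rl < 1 are stated:

```python
import numpy as np

def homomorphic_filter(f, rh=2.0, rl=0.5, c=4.0, d0=4.8):
    """Homomorphic filtering sketch following Eqs. (6) and (7).

    The image is mapped to the log domain, filtered with the modified
    Gaussian high-pass transfer function of Eq. (7), then mapped back
    with exp.  H equals rl at the spectrum center (D = 0) and approaches
    rh far from the center, so illumination is attenuated and detail kept.
    """
    rows, cols = f.shape
    n1, n2 = rows // 2, cols // 2
    i, j = np.ogrid[:rows, :cols]
    D = np.sqrt((i - n1) ** 2 + (j - n2) ** 2)          # distance from spectrum center
    H = (rh - rl) * (-np.exp(-c * (D / d0) ** 2)) + rh  # Eq. (7)
    logf = np.log1p(f.astype(float))                    # log(1 + f) avoids log(0)
    F = np.fft.fftshift(np.fft.fft2(logf))
    g = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return np.expm1(g)                                   # back from the log domain
```

Note that Eq. (7) is algebraically equal to the common form (rh − rl)(1 − e^{−c(D/d0)²}) + rl, so the code reproduces the standard behavior.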

Image segmentation

Image segmentation of the kiwifruit orchard and the kiwifruit trunk is central to this paper; the specific method is as follows. The Otsu algorithm is used to segment the dark ridges of the kiwifruit orchard. Because the gray histogram of the orchard image has two peaks, the H-channel image obtained above is selected for processing. For reference, the homomorphic filter transfer function of Eq. (7) is restated as Eq. (8):

$$ H\left(i,j\right)=\left( rh- rl\right)\times \left(-{e}^{-c{\left(\frac{D\left(i,j\right)}{d_0}\right)}^2}\right)+ rh $$
(8)

In the formula, rh > 1 and \( rl<1;D\left(i,j\right)=\sqrt{{\left(i-{n}_1\right)}^2+{\left(j-{n}_2\right)}^2} \), where n1 and n2 are half of the image's row and column counts (rounded to integers); c = 4 and d0 = 4.8. Homomorphic filtering is used to extract the kiwifruit trunk. These parameters were selected after repeated tests to enhance the orchard images, and orchard images under different light intensities were used to verify the filtering effect. Combined with the preprocessing results, the Otsu algorithm was used to segment the dark ridges of the kiwifruit orchard. The algorithm, proposed by Otsu in 1979, is considered one of the most widely used methods for automatically determining an image threshold and is effective in practice. Assume that T is the segmentation threshold of the orchard image, the kiwifruit trunk area occupies a proportion P0 of the whole image with average gray value u0, and the region outside the trunk has proportion P1 and average gray value u1. The total average gray value of the orchard image is then:

$$ {u}_r={P}_0\times {u}_0+{P}_1\times {u}_1 $$
(9)

When the threshold is T, the between-class variance is:

$$ {\sigma}^2={P}_0{\left({u}_0-{u}_r\right)}^2+{P}_1{\left({u}_1-{u}_r\right)}^2 $$
(10)

For an image whose grayscale histogram has a distinct trough, we first select an approximate threshold N (near the trough) to split the image into two parts, X1 and X2. The average gray values ε1 and ε2 of the two regions are then computed, the new threshold is set to N = (ε1 + ε2)/2, and the process is repeated until the threshold no longer changes.
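The Otsu threshold of Eqs. (9) and (10) can be sketched as an exhaustive search over gray levels (a minimal reference implementation; real code would use a library routine such as OpenCV's Otsu mode):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold T that maximizes the between-class variance, Eq. (10).

    gray: 2-D array of integer gray levels in [0, 255].
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        p0, p1 = prob[:t].sum(), prob[t:].sum()
        if p0 == 0 or p1 == 0:
            continue                                      # one class empty: skip
        u0 = (np.arange(t) * prob[:t]).sum() / p0         # class-0 mean gray
        u1 = (np.arange(t, 256) * prob[t:]).sum() / p1    # class-1 mean gray
        ur = p0 * u0 + p1 * u1                            # Eq. (9): total mean
        var = p0 * (u0 - ur) ** 2 + p1 * (u1 - ur) ** 2   # Eq. (10)
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a cleanly bimodal image the returned threshold falls between the two modes.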

In the kiwifruit orchard environment, tree growth varies and lighting conditions are complex, so threshold segmentation cannot effectively separate the kiwifruit trunk information from the rest of the orchard background. This study therefore proposes a region-growing method for segmenting orchard images with the kiwifruit trunk as the feature. The method transforms the orchard image into the RGB and HSV spaces and compares the differences between the path and the scene environment under the different color channels, taking a sample orchard image as an example and separating the original color image by channel. In RGB space, the gray histogram of the B component has peaks and valleys, but the gray values overlap too much, and the background and target regions cannot be separated. In HSV space, the H component has obvious peaks and valleys, and the H-channel image shows that the ridge can be separated from the background, but the trunk cannot be separated from the ridge. Hence none of the RGB or HSV channels alone can effectively separate the kiwifruit trunk. In this paper, the directly grayed orchard image and the orchard image enhanced by homomorphic filtering in the previous section are selected as the region-growing domain. Following the aim of this study, the orchard image is homomorphically filtered, and the average gray value Pi of the kiwifruit trunk area of each filtered image is calculated: n points in the trunk area are selected at random, with gray values gj.

$$ {P}_i=\frac{1}{n}\sum \limits_{j=1}^n{g}_j $$
(11)

To reduce the error of the average gray value of the kiwifruit trunk region, the deviation of the gray values in each image's trunk region from the average is calculated. The standard deviation σi is used to measure this deviation:

$$ {\sigma}_i=\sqrt{\frac{1}{n}\sum \limits_{j=1}^n{\left({g}_j-{P}_i\right)}^2} $$
(12)

Among them, σi is the standard deviation of the gray values of the trunk region in the ith orchard image, and Pi is the average gray value of that region from Eq. (11).

After homomorphic filtering of the orchard images, the gray value of the kiwifruit trunk area is almost 0, and because of lighting changes, the trunk-area gray values in the images are distributed over a low range. Combined with the actual situation, the average gray value of the trunk area in the eight sample images selected for this study is 3.81, and the threshold is set to T′ = 30. After the seed point is selected, determining the growth criterion is the next critical step.

Pixels of the kiwifruit orchard image are judged by similarity. The growth criterion has the following main steps: (1) the preprocessed orchard image is scanned to find pixels not yet assigned to any region; (2) taking a scanned unassigned pixel as the focus, its four neighbors (up, down, left, and right) are examined, and any neighbor whose grayscale difference is less than the predetermined threshold is merged; (3) growth starts from the selected seed point and continues until the whole image has been processed and no new seed point can be found, or the region can no longer expand; (4) the above steps are repeated until all pixels have been assigned and growth is complete.
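The four-neighborhood growth procedure above can be sketched as a breadth-first flood fill. This is a minimal sketch under one stated assumption: the gray difference is measured against the seed's gray value (the text does not say whether the comparison is to the seed or to the current pixel), and thresh=30 mirrors the T′ = 30 chosen above:

```python
from collections import deque
import numpy as np

def region_grow(gray, seed, thresh=30):
    """Grow a region from `seed` over 4-neighbors (up, down, left, right),
    merging pixels whose gray difference from the seed is below `thresh`.

    Returns a boolean mask of the grown region.
    """
    rows, cols = gray.shape
    mask = np.zeros((rows, cols), dtype=bool)
    seed_val = int(gray[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighborhood
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and not mask[ni, nj]:
                if abs(int(gray[ni, nj]) - seed_val) < thresh:
                    mask[ni, nj] = True
                    queue.append((ni, nj))
    return mask
```

Seeding inside the near-zero trunk region thus keeps the trunk and rejects the brighter background.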

Fruit positioning

Before the binocular stereo vision system can be used for positioning, the cameras must be calibrated; calibration largely determines the accuracy of three-dimensional information acquisition. To explain the calibration principle, the commonly used coordinate systems and the transformations between them are described first. The image coordinate system represents the projection of a spatial object point onto the image plane. The target fruit is undoubtedly the most relevant information shared between successive images acquired by the kiwifruit picking robot. In this section, the target-fruit information identified in the previous frame is used to reduce the recognition time of the current image, shrinking the search region frame by frame. A single-arm picking robot can only pick fruits one at a time, so when there are multiple fruits in the image, the target fruit must be determined. The center of the target fruit can be found as follows. When the fruit is not occluded, the processed fruit-segmentation image is labeled, and the two-dimensional centroid coordinates of each labeled fruit region are obtained by Eq. (13); the region's side length is also calculated. Finally, the target fruit is determined by Eq. (14) according to the principle of minimum distance to the image center.

$$ \left\{\begin{array}{c}x=\sum \limits_{i,j\in \varOmega}\frac{i}{n}\\ {}y=\sum \limits_{i,j\in \varOmega}\frac{j}{n}\end{array}\right. $$
(13)

In the formula, i and j represent the horizontal and vertical coordinates of the fruit image pixel, respectively; n represents the total number of pixels of the fruit image; and Ω represents the set of pixels belonging to the same fruit image.

$$ d=\sqrt{{\left({x}_0-{x}_c\right)}^2+{\left({y}_0-{y}_c\right)}^2} $$
(14)

In the formula, x0 and y0 are the fruit centroid coordinates, and xc and yc are the image center coordinates.
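Eqs. (13) and (14) translate directly into code. The sketch below (function names are ours) computes the centroid of each labeled fruit region and selects the region nearest the image center:

```python
import math

def fruit_centroid(pixels):
    """Two-dimensional centroid of a fruit region, Eq. (13).

    pixels: list of (i, j) coordinates belonging to one fruit region.
    """
    n = len(pixels)
    x = sum(i for i, _ in pixels) / n   # mean row coordinate
    y = sum(j for _, j in pixels) / n   # mean column coordinate
    return x, y

def pick_target(regions, image_center):
    """Choose the fruit whose centroid is nearest the image center, Eq. (14)."""
    xc, yc = image_center

    def dist(region):
        x0, y0 = fruit_centroid(region)
        return math.hypot(x0 - xc, y0 - yc)  # Euclidean distance d of Eq. (14)

    return min(regions, key=dist)
```

With several candidate fruits in view, the robot thus always picks the one closest to the optical center first.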

Results

To study the accuracy of the kiwifruit recognition and localization method of this study, it is analyzed through experiments. The test environment is as follows: hardware—Windows 7 operating system, AMD Athlon(tm) X2 Dual-Core QL-64 processor at 2.10 GHz, 2 GB memory; software—Matlab R2013a.

The overlapping-kiwifruit dynamic recognition algorithm proposed in this paper comprises eight steps. Step 1—Ten overlapping kiwifruit images are collected continuously. Step 2—The ten images are preprocessed to remove the noise contained in the acquired images. Step 3—The first preprocessed image is segmented with the improved R-G color-difference segmentation. Step 4—Mathematical morphology, hole filling, and the threshold-area retention method are combined to denoise the segmented image. Step 5—The center of the overlapping kiwifruit is determined as the interior point whose minimum distance to the contour edge is largest; the maximum distances from this center to the contour edge in different directions are then obtained, and their minimum is taken as the radius. The matching template is extracted according to this center and radius. Step 6—The maximum-center method finds the center coordinates of the overlapping kiwifruit in the ten consecutively collected images, and the robot's motion path is fitted and predicted from these center positions. Step 7—The processing area of the subsequent images is cropped according to the predicted path and the radius information. Step 8—Fast normalized cross-correlation matching is used to identify the kiwifruit. The effectiveness of the proposed algorithm is verified on a set of pictures collected in a natural scene. The original image acquired is shown in Fig. 2.
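Step 5's center estimate can be sketched as a brute-force distance transform over a binary fruit mask. This is an illustrative simplification: for a roughly circular contour, the maximum of the minimum distances to the contour coincides with the radius rule described in Step 5, so the sketch returns that single value as the radius:

```python
import numpy as np

def fruit_center_and_radius(mask):
    """Estimate the circle center and radius from a binary fruit mask (Step 5).

    Center: the interior point whose minimum distance to the contour is
    largest.  Radius: that same maximum of minimum distances.
    """
    inside = np.argwhere(mask)
    # Contour pixels: mask pixels with at least one background 4-neighbor.
    padded = np.pad(mask, 1, constant_values=False)
    all_nb_inside = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                     padded[1:-1, :-2] & padded[1:-1, 2:])
    contour = np.argwhere(mask & ~all_nb_inside)
    best, center = -1.0, None
    for p in inside:
        d = np.min(np.hypot(*(contour - p).T))  # min distance to the contour
        if d > best:
            best, center = d, tuple(p)
    return center, best
```

A production version would use a library distance transform instead of the O(n²) scan, but the geometry is the same.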

Fig. 2
figure2

Original kiwi pictures collected

The original image shown in Fig. 2 is segmented, and the segmentation method of this paper is compared with the traditional image segmentation method; the results are shown in Fig. 3. Figure 3a shows the conventional segmentation algorithm, and Fig. 3b the image segmentation method of the present study.

Fig. 3
figure3

Edge recognition results. a The conventional segmentation algorithm. b The image segmentation method of the present study

As shown in Fig. 3a, traditional edge recognition cannot distinguish the leaves, trunks, and other background elements, so it is difficult to separate the fruit from the image background. Figure 3b shows the recognition results of this study's algorithm. Because the kiwifruit trunk was taken as a recognition object during algorithm construction, it can be eliminated together with the background during actual recognition. However, some noise still remains in the result, so noise removal must be performed on the image to obtain a better recognition effect; the denoised result is shown in Fig. 4.

Fig. 4
figure4

Kiwi picture after noise elimination

Figure 4 shows that the fruit has been largely separated from the picture background, but regions remain where two or three kiwifruits overlap heavily and need to be separated in layers. The kiwifruit localization method of this study was therefore applied; the localization result is shown in Fig. 5.

Fig. 5
figure5

Image positioning recognition results

As shown in Fig. 5, the algorithm of this study locates each kiwifruit separately and, once the system has determined them, calibrates the picking sequence. Picking targets no. 1 and no. 2 have been marked; the overlapping kiwifruit is not among the current picking targets and is re-identified after targets no. 1 and no. 2 have been picked. The performance of the traditional recognition-and-localization method is then compared with that of this paper, mainly in terms of picking reaction speed, picking accuracy, missed-picking rate, and maturity-discrimination rate. The results are shown in Table 1.

Table 1 Performance comparison of kiwifruit recognition and localization algorithm

Discussion and analysis

Images collected in nature often contain noise caused by equipment and other factors, which interferes with subsequent image analysis and reduces its accuracy. Therefore, this paper uses the color-model filtering method commonly used in the spatial domain to remove noise from the acquired images, laying a foundation for subsequent processing. In addition, this article describes several common image color spaces and analyzes their relationships to each other.

Before the rapid dynamic recognition of overlapping kiwifruit, the image must first be segmented, and the segmentation quality directly affects recognition accuracy. After comparing several image segmentation methods, this paper finally adopts the color-difference segmentation algorithm of the improved color model. The algorithm performs a gamma transformation on the R component of the image to stretch or compress it, effectively alleviating over- or under-segmentation and improving the segmentation result. Because the segmented image still contains noise and holes, mathematical morphology, hole filling, and the threshold-area retention method are combined to further denoise it.

Considering that the kiwifruit picking robot operates while moving, this paper proposes an algorithm for rapid dynamic recognition of overlapping kiwifruit based on the correlation and differences among image sequences. First, a set of overlapping kiwifruit image sequences is acquired in a natural scene. After segmentation, a complete fruit contour is obtained, and the center position is determined by finding the maximum of the minimum distances from interior points to the contour edge. The maximum distances from this center to the contour edge in different directions are then found, and their minimum is used as the radius. The matching template for subsequent frames is extracted from this center and radius. The center position of each image in the continuously acquired sequence is then determined; the two center-coordinate tracks are each fitted with a polynomial to model the robot's motion path, the motion is predicted, and the center position in the next frame is estimated. Next, the processing area of the subsequent image is cropped according to the center and radius information. Finally, overlapping kiwifruit are identified by fast normalized cross-correlation matching.
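The per-frame path fitting and next-frame prediction can be sketched with NumPy polynomial fits. The quadratic degree is an assumption (the paper does not state the polynomial order); the x and y tracks are fitted separately against the frame index, as described above:

```python
import numpy as np

def predict_next_center(centers, degree=2):
    """Fit the per-frame circle-center track with polynomials and predict
    the center position in the next frame (Step 6 of the algorithm).

    centers: list of (x, y) center coordinates from consecutive frames.
    """
    t = np.arange(len(centers), dtype=float)
    xs, ys = np.array(centers, dtype=float).T
    deg = min(degree, len(centers) - 1)  # avoid an over-parameterized fit
    px = np.polyfit(t, xs, deg)          # polynomial for the x track
    py = np.polyfit(t, ys, deg)          # polynomial for the y track
    t_next = float(len(centers))
    return np.polyval(px, t_next), np.polyval(py, t_next)
```

The predicted center, together with the radius, defines the cropped processing area for the next frame (Step 7), which keeps the per-frame matching cost low.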

This study verifies the effectiveness of the proposed algorithm through a series of experiments. First, the rapid dynamic recognition experiment on overlapping kiwifruit was carried out. Because the kiwifruit trunk was taken as a recognition object during algorithm construction, it can be eliminated together with the background during actual recognition. Some noise remains in the recognition result, however, so noise removal is needed to obtain a better effect. After noise elimination, the fruit is basically separated from the picture background, but regions where kiwifruits overlap heavily remain and must be separated in layers. The localization method of this study was then applied: the algorithm locates each kiwifruit, and after system determination the kiwifruit is located and the picking sequence calibrated, which effectively determines the fruit-picking order. Finally, the performance comparison shows that the proposed method is effective for kiwifruit picking and can be applied in practice.

Conclusion

In the natural growth state, overlapping kiwifruit is very common. This phenomenon complicates the picking operation of the kiwifruit picking robot and reduces picking efficiency. This paper therefore proposes a method for rapid dynamic recognition and localization of overlapping kiwifruit. First, rapid dynamic recognition of overlapping kiwifruit is realized according to the correlation between image sequences. Second, the localization of overlapping kiwifruit is studied. The paper analyzes how to overcome the influence of light-intensity changes on the processing of kiwifruit-orchard images so that the kiwifruit trunk can be segmented effectively, and chooses homomorphic filtering to enhance the orchard images, highlighting the trunk characteristics and reducing the influence of background noise on trunk recognition. Image segmentation of the kiwifruit orchard and trunk is a core part of this paper, and the study found that the proposed method can effectively separate the kiwifruit. Finally, the effectiveness of the method is verified by experiments. According to the performance comparison, the method is effective for kiwifruit picking and can be applied in practice.

References

  1. C. Yongjie, Recognition and feature extraction of kiwifruit in natural environment based on machine vision. Trans. Chin. Soc. Agric. Mach. 44(5), 247–252 (2013)

  2. W. Zhan, D. He, S. Shi, Recognition of kiwifruit in field based on Adaboost algorithm. Trans. Chin. Soc. Agric. Eng. 29(23), 140–146 (2013)

  3. Q. Chi, Z. Wang, T. Yang, et al., Recognition of early hidden bruises on kiwifruits based on near-infrared hyperspectral imaging technology. Trans. Chin. Soc. Agric. Mach. 46(3), 235–241 (2015)

  4. L. Fu, Y. Feng, T. Elkamil, et al., Image recognition method of multi-cluster kiwifruit in field based on convolutional neural networks. Trans. Chin. Soc. Agric. Eng. 34(2), 205–211 (2018)

  5. L. Fu, W. Bin, C. Yongjie, et al., Kiwifruit recognition at nighttime using artificial lighting based on machine vision. Int. J. Agric. Biol. Eng. 2015(4), 52–59 (2015)

  6. G. Wenchuan, Early recognition of bruised kiwifruit based on near infrared diffuse reflectance spectroscopy. Trans. Chin. Soc. Agric. Mach. 44(2), 142–146 (2013)

  7. A. Moreno Álvarez, L. Sexto, L. Bardina, et al., Kiwifruit allergy in children: characterization of main allergens and patterns of recognition. Children 2(4), 424–438 (2015)

  8. T.M. Le, M. Bublin, H. Breiteneder, et al., Kiwifruit allergy across Europe: clinical manifestation and IgE recognition patterns to kiwifruit allergens. J. Allergy Clin. Immunol. 131(1), 164–171 (2013)

  9. R.M. Goodwin, N.M. Congdon, Recognition and attractiveness of staminate and pistillate kiwifruit flowers (Actinidia deliciosa var. deliciosa) by honey bees (Apis mellifera L.). N. Z. J. Crop Hortic. Sci. 46(1), 1–9 (2017)

  10. C. Nilsson, P. Brostedt, J. Hidman, et al., Recognition pattern of kiwi seed storage proteins in kiwifruit-allergic children. Pediatr. Allergy Immunol. 26(8), 817–820 (2015)

  11. A.M. Twidle, F. Mas, A.R. Harper, et al., Kiwifruit flower odor perception and recognition by honey bees, Apis mellifera. J. Agric. Food Chem. 63(23), 5597 (2015)

  12. D. Liu, W. Guo, D. Liu, et al., Identification of kiwifruits treated with exogenous plant growth regulator using near-infrared hyperspectral reflectance imaging. Food Anal. Methods 8(1), 164–172 (2015)

  13. L. Qiang, S. Lili, C. Lili, A new approach for the detection of kiwifruit maturity by electrical impedance. Appl. Mech. Mater. 401–403, 1287–1294 (2013)

  14. H. Zhu, B. Chu, Y. Fan, et al., Hyperspectral imaging for predicting the internal quality of kiwifruits based on variable selection algorithms and chemometric models. Sci. Rep. 7(1), 7845 (2017)

  15. X. Dong, F. Wu, X.Y. Jing, Semi-supervised multiple kernel intact discriminant space learning for image recognition. Neural Comput. & Applic., 1–18 (2018)

  16. S. Baolin, Molecular detection of pv. on bacterial canker of kiwifruit. Acta Phytopathol. Sin. 43(5), 458–466 (2013)

  17. L. Fu, S. Sun, R. Li, et al., Classification of kiwifruit grades based on fruit shape using a single camera. Sensors 16(7), 1012 (2016)

Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

Not applicable.

Availability of data and materials

Please contact author for data requests.

Author information

All authors took part in the discussion of the work described in this paper. All authors read and approved the final manuscript.

Correspondence to Qingxi Guo.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Liu, D., Shen, J., Yang, H. et al. Recognition and localization of actinidia arguta based on image recognition. J Image Video Proc. 2019, 21 (2019) doi:10.1186/s13640-019-0419-6

Keywords

  • Image recognition
  • Kiwifruit
  • Fruit
  • Positioning
  • Fruit recognition