
Salient map of hyperbolas in GPR images

Abstract

In ground-penetrating radar (GPR) applications, a salient map is useful for data analysis and for building a consensus for data interpretation. In this paper, we present a novel method to map the salient region of a hyperbola using an asymmetry measurement and seam carving. First, on the basis of the magnitude and phase information of the Zernike moment and pixel asymmetry, we construct a new hyperbola descriptor to represent the region feature. Second, in order to make the hyperbolic region stand out, we improve the calculation of seam carving to extract the saliency map containing the hyperbolic region. Compared with typical methods, our method provides significant and meaningful hyperbolic information, balancing the low- and high-contrast branches of a hyperbola.

1 Introduction

Ground-penetrating radar (GPR) is an effective nondestructive tool for investigating the shallow subsurface on the basis of reflected electromagnetic energy. A GPR image is an indirect representation of the subsurface media formed by the reflection, coherence, and superposition of radio waves. Electromagnetic propagation underground is complicated, and severe interference from noise and clutter leaves the echo data contaminated by a variety of clutter and coherent noise. Several types of signals affect GPR imaging, including the direct coupling wave, the surface reflection, reflections within the medium, the target reflection, and noise. Because a wide band is usually recorded to capture as many reflection properties as possible, the image contains not only the effective responses but also a variety of complex interference waves, which makes expert interpretation more difficult.

In GPR applications, the hyperbola is widely used as the characteristic reflection signature of an electromagnetic pulse for detecting the distribution of the underground medium. From the perspectives of both research and application, extracting hyperbolas quickly and effectively from the complex background of a GPR image is of considerable value. Several strategies have been proposed in the literature to aid the extraction of hyperbolic curves. State-of-the-art approaches include pattern recognition models [1,2,3], modified Hough transforms [5, 6], and other algorithms designed for complex environments [4, 7, 8].

Hyperbolas are the main reflection signatures of many objects in recorded GPR data and play a key role in locating pipelines, mines, and other buried objects. To extract the precise location and characteristics of these objects, all hyperbolas in the GPR image must be located and identified. Because of the heterogeneity of the underground medium and the interference due to clutter and noise, GPR hyperbolic arcs usually appear hybrid, overlapped, and obscured. A general solution is to treat the problem as pattern recognition in grayscale images. Different image analysis methods and learning algorithms are combined to increase the identification accuracy of the hyperbolas, but the data processing and the localization of the hyperbolas are very time-consuming. Precise identification of a hyperbola is therefore a complex task owing to the high uncertainty in feature extraction and the expense of fitting or matching the structural shape. Saliency analysis is a valuable tool in image processing and has been used for the analysis of natural scenes [11]. For the human visual system, saliency refers to the ability to detect visually distinctive objects among the perceived stimuli in a scene. This ability filters out information of no interest so that the visual system can focus on rich high-level information, reducing energy consumption and the waste of computing resources. For computer vision, saliency detection means designing algorithms that locate regions carrying a specific meaning and computing the corresponding salient features for subsequent processing. Itti et al.'s model [9] is a representative saliency model that exploits low-level features such as intensity, color, and local orientation and combines separate features at different scales, channels, and orientations into a single saliency map with the aid of center-surround difference operations. In a further study, Harel et al. [10] proposed a bottom-up visual saliency model, graph-based visual saliency (GBVS), which uses Markov chains to form activation maps on certain feature channels. Deep models have also been widely used for saliency detection [16,17,18]. Wang et al. [19] adopted a two-level training scheme. In the low-level stage, a deep neural network (DNN-L) learns local patch features to determine the saliency value of each pixel, and a local saliency map is constructed together with geometric information. In the global search stage, a high-level model builds the global saliency map: another deep neural network (DNN-G) predicts the saliency value of each candidate region from global contrast and geometric features, and the saliency scores of the object regions are used to compute the final saliency map. Compared with pixel-based saliency detection, such region-based methods perform better because region-based feature extraction yields more complex and more discriminative features. However, their effectiveness and efficiency depend on the number of divided regions and on the segmentation method. The size and number of salient objects vary greatly across images, so dividing every image into a fixed number of regions makes it difficult to detect all salient objects accurately.

The advantage of a saliency map is that it associates low-level features, e.g., intensity and orientation, with high-level information of interest. It can therefore be used to identify regions that contain hyperbolic arcs in GPR images and thereby improve the efficiency and reliability of hyperbola extraction. With the filtering ability of saliency extraction, hyperbola detection can discard unnecessary detail and the interference due to noise and scatter and quickly locate the regions containing hyperbolas. Moreover, on the basis of saliency estimation with a unified formulation, regions believed to contain low-contrast hyperbolas can be made to stand out.

2 Hyperbola description method

Zernike moments are the projections of an image onto a set of complex Zernike polynomials defined over the interior of the unit disc in polar coordinates. Because the Zernike polynomials are orthogonal to each other, the resulting moments carry little redundancy, and the Zernike moment is one of the most commonly used transforms for extracting and describing image shape features. Shape features described by Zernike moments are insensitive to noise, and their magnitudes are independent of the curvature of the hyperbola. The overall shape of an object is captured by the lower-order moments, and its details are described by the higher-order moments; a set of Zernike moments therefore represents the structural characteristics from coarse to fine. The Zernike moments of order n with repetition m of an image f(x, y) are defined over the unit circle as

$$ Z_{nm}=\frac{n+1}{\pi}\iint_{x^2+y^2\le 1} f(x,y)\, V_{nm}^{\ast}(\rho,\theta)\,\mathrm{d}x\,\mathrm{d}y. $$
(1)

where n and m are integers with n ≥ |m| ≥ 0 and n − |m| even, θ is the azimuthal angle, and ρ is the radial distance. The complex polynomial Vnm(ρ, θ) is the Zernike basis function, defined in polar coordinates as

$$ V_{nm}(\rho,\theta)=R_{nm}(\rho)\, e^{jm\theta} $$
(2)

where Rnm is a radial polynomial expressed by the following formula:

$$ R_{nm}(\rho)=\sum_{s=0}^{(n-|m|)/2}(-1)^s\,\frac{(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\,\rho^{\,n-2s} $$
(3)

The Zernike moment can be viewed as a filtering transform with respect to {Vnm(ρ, θ)}. Here, we use the Zernike moment to design a local region descriptor and to enhance the contrast of salient regions carrying curve features. For the neighborhood G of a pixel p in a discrete image, the discrete form of Eq. (1) is given as follows [12]:

$$ \upsilon_{nm}^{(p)}=\frac{n+1}{\lambda_N}\sum_{c=0}^{N}\sum_{r=0}^{N} g_p(c,r)\, R_{nm}(\rho_{rc})\, e^{-jm\theta_{rc}} $$
(4)

where gp(c, r) is the neighborhood function of the pixel p, λN is a normalization factor, and N is the size of the local region G. Note that c and r denote the column and row number of G:

$$ \begin{array}{l}\rho_{rc}=\dfrac{\sqrt{(2c-N+1)^2+(2r-N+1)^2}}{N}\\[6pt] \theta_{rc}=\tan^{-1}\left(\dfrac{N-1-2r}{2c-N+1}\right).\end{array} $$
(5)

From the above equations, it can be seen that this transformation combines the local function gp(c, r) with the radial and angular coordinates (ρrc, θrc) to describe a local feature. Moreover, on the basis of nonlinear combinations of the moments υnm, a high-dimensional feature transform space can be derived to represent the hyperbolic characteristics.
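As a concrete illustration, the following Python sketch computes the local moment υnm(p) of Eqs. (3)-(5) for a single N × N neighborhood. The function names, the normalization λN = N², and the restriction of the kernel to the unit disc are our own assumptions, not part of the original method.

```python
# Minimal sketch of the local Zernike moment of Eq. (4) over an N x N neighborhood.
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho) of Eq. (3)."""
    m = abs(m)
    rho = np.asarray(rho, dtype=float)
    r = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        coef = ((-1) ** s * factorial(n - s)
                / (factorial(s)
                   * factorial((n + m) // 2 - s)
                   * factorial((n - m) // 2 - s)))
        r += coef * rho ** (n - 2 * s)
    return r

def local_zernike_moment(patch, n, m):
    """Complex moment upsilon_nm^(p) of Eq. (4) for an N x N patch g_p(c, r)."""
    N = patch.shape[0]
    c = np.arange(N)[np.newaxis, :]   # column index
    r = np.arange(N)[:, np.newaxis]   # row index
    rho = np.sqrt((2 * c - N + 1) ** 2 + (2 * r - N + 1) ** 2) / N   # Eq. (5)
    theta = np.arctan2(N - 1 - 2 * r, 2 * c - N + 1)                 # Eq. (5)
    kernel = zernike_radial(n, m, rho) * np.exp(-1j * m * theta)
    kernel[rho > 1.0] = 0.0           # keep the support inside the unit disc
    lam = N * N                       # assumed normalization factor lambda_N
    return (n + 1) / lam * np.sum(patch * kernel)
```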

To further refine the regions containing hyperbolas, we rely on the phase component to optimize the Zernike moment. The phase information provides an accurate means of controlling the ratio between the hyperbolic and background regions. We define a binary, phase-based function δp that determines whether or not the pixel p belongs to the hyperbolic region:

$$ \delta_p(\theta_{rc})=\begin{cases}1, & \theta_{rc}\in\{\alpha_i\}\\ 0, & \text{otherwise}\end{cases} $$
(6)

where αi is a selected phase associated with the shape characteristics of the hyperbola. Let δp be a conditional factor embedded in the Zernike moment. Then, the transformed version \( I_{nm}^{z} \) of image f is given by

$$ I_{nm}^{z}=\bigcup_{p\in f}\delta_p(\theta_{rc})\left|\upsilon_{nm}^{(p)}(\rho_{rc},\theta_{rc})\right|. $$
(7)

Therefore, we can capture the hyperbolic region globally without taking into account the scale inconsistency of the moment magnitudes. In practice, image preprocessing such as filtering, intensity specification, and thresholding is essential before the feature transformation described above.
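A minimal sketch of the phase-gated map of Eqs. (6)-(7) is given below. It assumes a sliding N × N window (N odd), that the phase tested in Eq. (6) can be read as the phase of the local moment υnm(p), and that membership in the selected set {αi} is checked within a small tolerance; all names and parameters are illustrative rather than the authors' implementation.

```python
# Sketch of the phase-gated magnitude map of Eqs. (6)-(7).
import numpy as np

def phase_gated_map(image, n, m, alphas, N=7, tol=np.deg2rad(10)):
    """Keep |upsilon_nm^(p)| only where the local phase is close to some alpha_i."""
    H, W = image.shape
    out = np.zeros((H, W))
    half = N // 2
    padded = np.pad(image.astype(float), half, mode='reflect')
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + N, x:x + N]
            v = local_zernike_moment(patch, n, m)   # from the previous sketch
            phase = np.angle(v)
            # delta_p of Eq. (6): 1 if the phase lies in the selected set, else 0
            if any(abs(np.angle(np.exp(1j * (phase - a)))) < tol for a in alphas):
                out[y, x] = abs(v)                  # gated magnitude, Eq. (7)
    return out
```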

3 Saliency transformation

Despite the successful description, the salient characteristics of each hyperbola are still not intuitively perceivable, because the hyperbolas mostly appear at low contrast and are disturbed by the background. Discriminating hyperbolas directly is expensive in general; the effort is therefore weighted toward salient regions obtained through a saliency transformation. To make the salient regions stand out, we modified an asymmetry model to measure the hyperbolic characteristics against the uniform background. Alsam et al. [13] proposed a saliency model that uses asymmetry as a measure of significance. To calculate saliency, the input image is decomposed into square blocks, and the elements of the dihedral group D4 are applied to each block; the asymmetry measure of each block is obtained from the absolute differences between the block and its transformed versions. In this asymmetry model, each block matrix M is associated with the eight group elements, and the asymmetry H of M is expressed as follows:

$$ H=\sum_{i=1}^{8}\left\| M-\sigma_i M\right\|_1, $$
(8)

where σi is one of the eight group elements. The asymmetry H(M) for each block is used as a measure of saliency. Following the same strategy, we propose a new criterion for measuring the asymmetry value associated with each group element of M.

Let A and B be two subsets of M determined by its diagonals, with A ∪ Ā = M and B ∪ B̄ = M. In Fig. 1, these subsets are indicated with different colors. To compute the asymmetry of the matrix M, we first define the degree of similarity between two such sets as follows:

Fig. 1 Illustration of subsets A and B for asymmetry measurement

$$ S(A,B)=\sum_{p\in A,\ q\in B} d(p,q), $$
(9)

where p, q are pixels and d is the absolute difference between two elements of the matrix sets. In our work, the saliency for a hyperbola is determined as the local asymmetry of the matrix M with respect to its diagonal elements. Then, we define the asymmetry function for matrix M as follows:

$$ H=\frac{S(A,\overline{A})}{S(A,A)+S(\overline{A},\overline{A})}+\frac{S(B,\overline{B})}{S(B,B)+S(\overline{B},\overline{B})}+\frac{S(A+B,\overline{A+B})}{S(A+B,A+B)+S(\overline{A+B},\overline{A+B})}. $$
(10)

Unlike the asymmetry in (8), which obtains a distinctive feature by computing absolute differences between the block and its group-transformed versions, the definition in (10) is a relative measure: the saliency depends on the intensity of the pixels/sets and on the dynamic contrast among the different sets.
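The following sketch evaluates Eqs. (9)-(10) for one square block. Because the exact construction of the subsets A and B is only indicated by Fig. 1, the sketch assumes they are the triangular halves of the block about its main and anti-diagonals and that A + B denotes their union; these are reading assumptions, not the authors' definition.

```python
# Sketch of the relative asymmetry of Eqs. (9)-(10) for a single square block M.
import numpy as np

def pair_sum(a, b):
    """S(A, B) of Eq. (9): sum of absolute differences over all element pairs."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return np.abs(a[:, None] - b[None, :]).sum()

def block_asymmetry(M):
    """Relative asymmetry H of Eq. (10), assuming triangular subsets A and B."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    rows, cols = np.indices(M.shape)
    A = rows < cols               # above the main diagonal (assumed subset A)
    B = rows + cols < n - 1       # above the anti-diagonal (assumed subset B)
    H = 0.0
    for mask in (A, B, A | B):    # A, B, and their union for the A+B term
        s_cross = pair_sum(M[mask], M[~mask])
        s_within = pair_sum(M[mask], M[mask]) + pair_sum(M[~mask], M[~mask])
        H += s_cross / (s_within + 1e-12)   # epsilon guards a perfectly flat block
    return H
```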

The advantage of using asymmetry instead of a per-pixel feature is the possibility of capturing both local and global salient details and maximizing the information across different blocks. Furthermore, all maps can be combined linearly into a single saliency map. In addition, the absolute differences for each block can be calculated independently in the vertical or horizontal direction to build a linearly independent basis of usable features for salient mapping.

4 Background reduction

Seam carving (SC) [14, 15] is a content-aware image resizing method for both reduction and expansion. It extracts special regions of an image by distinguishing differences in pixel features. SC is an adaptive image processing method that scales images to different sizes while the important objects in the image remain essentially unchanged. It can also be used as a content analysis method to locate an object in an image. Because SC changes the objects in the original image as little as possible, it is also clearly applicable to region detection. The core of the SC algorithm lies in the definition of an energy function for the pixels and the resolution of the pixel features. Its operation has two aspects: first, pixels with low energy values are removed from the image; second, special pixels are inserted into the image to preserve the information related to the important regions. The properties of a pixel are defined by the energy function, and the energy value is generally proportional to the importance of the pixel. The energy function is defined so that pixels in the reserved regions are not removed, while pixels in unimportant areas are removed preferentially. We employ the asymmetry H as the feature representing pixels in the B-scan image for calculating the energy function. The energy function e(H) of a pixel is defined as follows:

$$ e(H)=\left|\frac{\partial}{\partial x}H\right|+\left|\frac{\partial}{\partial y}H\right| $$
(11)

A seam is an 8-connected path that contains as many low-energy pixels as possible. It crosses the image vertically (a vertical seam) or horizontally (a horizontal seam). By removing the seams with the lowest energy in both coordinate directions, the background regions represented by the seams can be reduced in the current image. When removing a seam, the algorithm must remove as many low-energy pixels as possible and keep as many high-energy pixels as possible. In contrast, when inserting a seam, the algorithm must decide between an optimal and a compromise choice. Because computing the seams is very time-consuming, we adopt a simplified calculation to increase the processing speed. Given an energy function e(H), we define the cost of a seam as follows:

$$ E(s)=\sum_{i=1}^{n} e\!\left(H(s_i)\right) $$
(12)

To get the optimal seam s, we can minimize this seam cost:

$$ s^{\ast}=\arg\min_{s} E(s)=\arg\min_{s}\sum_{i=1}^{n} e\!\left(H(s_i)\right) $$
(13)

Based on the above formula, Algorithm 1 gives the seam cost transformation from the energy matrix H to the cost matrix corresponding to the seams.

[Algorithm 1: seam cost transformation, given as a figure in the original]
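Since Algorithm 1 is reproduced only as a figure, the sketch below reconstructs the standard cumulative-cost dynamic program it refers to, using the gradient-magnitude energy of Eq. (11) computed from the asymmetry map H; the function names and the use of numpy.gradient are assumptions rather than the authors' exact implementation.

```python
# Sketch of Algorithm 1: cumulative cost matrix TM for vertical seams.
import numpy as np

def energy(H):
    """e(H) = |dH/dx| + |dH/dy| of Eq. (11), via finite differences."""
    gy, gx = np.gradient(H.astype(float))   # gradients along rows (y) and columns (x)
    return np.abs(gx) + np.abs(gy)

def seam_cost_matrix(H):
    """Cumulative seam cost of Eq. (12) accumulated row by row."""
    e = energy(H)
    TM = e.copy()
    rows, cols = TM.shape
    for i in range(1, rows):
        for j in range(cols):
            lo, hi = max(j - 1, 0), min(j + 2, cols)
            TM[i, j] += TM[i - 1, lo:hi].min()   # best 8-connected predecessor
    return TM
```

Consistent with the statement after Algorithm 1, the minimum of the last row of TM computed this way is the cost of the optimal seam.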

The minimum value in the last row of the cost matrix TM is the cost of the optimal seam. In order to improve computing efficiency, we adopt a simplified calculation algorithm.

[Algorithm 2: simplified optimal seam computation, given as a figure in the original]
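Algorithm 2 is likewise given only as a figure. A plausible reconstruction backtracks the minimum-cost vertical seam from the last row of TM and then removes it; the helper names below are illustrative.

```python
# Sketch of Algorithm 2: backtrack the optimal vertical seam and remove it.
import numpy as np

def find_vertical_seam(TM):
    """Column index of the optimal seam s* in each row, found by backtracking."""
    rows, cols = TM.shape
    seam = np.empty(rows, dtype=int)
    seam[-1] = int(np.argmin(TM[-1]))
    for i in range(rows - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, cols)
        seam[i] = lo + int(np.argmin(TM[i, lo:hi]))
    return seam

def remove_vertical_seam(img, seam):
    """Drop one pixel per row along the seam, shrinking the image by one column."""
    rows, cols = img.shape
    keep = np.ones((rows, cols), dtype=bool)
    keep[np.arange(rows), seam] = False
    return img[keep].reshape(rows, cols - 1)
```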

Algorithm 2 yields a single seam in an optimal way; repeated multiple times along the vertical and horizontal coordinates, it yields the background region made up of the seams, as sketched below.
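One possible way of combining the two sketches above is to extract seams repeatedly while tracking their original column positions and marking the visited pixels as background; the bookkeeping shown here and the default of 600 seams (as in Fig. 5c) are illustrative choices, not the authors' code.

```python
# Sketch: accumulate a background mask from repeated vertical seam extraction.
import numpy as np

def background_seams(H, n_seams=600):
    work = H.astype(float).copy()
    col_map = np.tile(np.arange(H.shape[1]), (H.shape[0], 1))   # original column of each pixel
    background = np.zeros(H.shape, dtype=bool)
    for _ in range(n_seams):
        TM = seam_cost_matrix(work)          # Algorithm 1 sketch
        seam = find_vertical_seam(TM)        # Algorithm 2 sketch
        rows = np.arange(work.shape[0])
        background[rows, col_map[rows, seam]] = True   # mark seam pixels as background
        work = remove_vertical_seam(work, seam)
        col_map = remove_vertical_seam(col_map, seam)
    return background
```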

5 Results and discussion

Owing to the small proportion of hyperbolas in the gray space of an entire GPR image, their gray levels are often confined to a narrow range and are masked by the dominant gray levels of the background during statistical analysis, making it difficult to capture the features of hyperbolas from a directly extracted histogram. The sample images are therefore standardized using histogram specification, which corrects the overly concentrated gray-level distribution, narrow dynamic range, vague features, and related problems and confines the feature vectors to a specified scale space so that features in the sample set can be better compared and classified.
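For this preprocessing step, a minimal sketch of histogram specification using scikit-image is given below; the choice of reference image and of this particular library routine are assumptions, not a prescription from the original workflow.

```python
# Sketch: histogram specification of a B-scan against a chosen reference image.
from skimage import exposure

def specify_histogram(bscan, reference):
    """Map the gray-level distribution of `bscan` onto that of `reference`."""
    return exposure.match_histograms(bscan.astype(float), reference.astype(float))
```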

To analyze the effects of the orders of the Zernike moment on the performance of \( I_{nm}^{z} \), we used 14 different order pairs {n, m} = {(3, 1), (3, 3), …, (9, 9)} and their means as an evaluation index. Three groups of test images were analyzed, and a self-comparison of their feature descriptions with the order (7, 1) is shown in Fig. 2. Figure 3 shows the evaluation results, from which we see that all of the peak means in the test images satisfy the condition n − m = 4k, k ∈ ℕ.

To evaluate the performance of our proposed method, we compare our algorithm with previous algorithms on a test image. Two typical models, Itti et al.'s [9] and GBVS [10], are selected as baselines. The test image is a real image containing hyperbolas of rather complex form; some are indistinct and overlapped, whereas others are weakly contrasted and asymmetric, with considerable confusion between foreground and background. Figure 4 shows the test image and the saliency maps obtained by the different algorithms. From Fig. 4, we observe that our method globally captures the regions that include most of the expected hyperbolas. Moreover, it makes both the low- and high-contrast hyperbolas stand out from the background: misleading intersections between different hyperbola peaks are separated, and the salient regions stand out with accurate boundaries. The Itti and GBVS panels of Fig. 4 show that those methods cannot provide accurate information for localizing the hyperbolic regions and depend heavily on the regional intensity, whereas our algorithm provides almost consistent results.

To verify the effectiveness and feasibility of the background reduction algorithm in practical applications, we adopt a large real image with a resolution of 400 × 1200 containing both high-gain and low-gain hyperbolas (see Fig. 5a). Figure 5b shows the result of the seam cost transformation. In the background reduction process, a set of seams with vertical indexing was extracted, as shown in Fig. 5c. Note that the number of seams is flexible; the value we assigned in Fig. 5c is 600.

When reducing the background by the SC algorithm, only the lowest-energy pixels in each row or column are removed, so the seams with the minimal cumulative energy are selected. Using the H matrix as the basic energy function, the energy values of the salient region are higher than those of the background region, which ensures that when the minimum energy value is determined, the seams cannot pass through these high-energy regions. In Fig. 5, we can see that the seams mainly pass through low-energy background regions while the high-energy salient region remains as it is.

Fig. 2 Visual examples of feature descriptions using the Zernike moment. a Test images, b images with n = 7 and m = 1, and c the transformed versions

Fig. 3 Evaluation for different orders (n, m)

Fig. 4 Test image and saliency maps obtained by different saliency methods

Fig. 5 a Test image. b Map image of the cost energy. c Map image with the vertical seams represented by white lines

6 Conclusions

In this paper, we present a salient map method. By improving the Zernike moments of a GPR image, we construct a feature transform model for the expression of the hyperbola, which provides descriptive information regarding the magnitude and phase attributes of the hyperbolas for a salient map. The salient map itself is built on an asymmetry measurement and background reduction, which is efficient and robust for real applications with high noise and low contrast. Experimental results indicate the effectiveness of the proposed method compared with other typical methods. Owing to the special energy function, whether the background is smooth or not, the proposed method keeps the seams away from the salient regions as much as possible and guides them through the background regions, thus better protecting the hyperbolic information. Salient maps play an important role in content-based image processing. Different applications require their own definitions of distinctive content, and hence a high-quality and effective saliency mapping method. A salient map can simulate human visual perception, supports rapid processing and intelligent localization, has special value for the identification of hyperbolas, and can be widely used in machine learning, signal processing, image processing, and other fields. When combined with machine learning and pattern recognition in applications such as landmine detection and pipeline inspection, the recognition efficiency of the target objects can be significantly improved.

Abbreviations

GBVS: Graph-based visual saliency

GPR: Ground-penetrating radar

References

  1. Q Dou, L Wei, DR Magee, AG Cohn, Real-time hyperbola recognition and fitting in GPR data. IEEE Trans. Geosci. Remote Sens. 55, 51–61 (2017)

  2. W Alnuaimy, Y Huang, M Nakhkash, M Fang, V Nguyen, A Eriksen, Automatic detection of buried utilities and solid objects with GPR using neural networks and pattern recognition. J. Appl. Geophys. 43, 157–165 (2000)

  3. E Pasolli, F Melgani, M Donelli, Automatic analysis of GPR images: a pattern-recognition approach. IEEE Trans. Geosci. Remote Sens. 47, 2206–2217 (2009)

  4. L Mertens, R Persico, L Matera, S Lambot, Automated detection of reflection hyperbolas in complex GPR images with no a priori knowledge on the medium. IEEE Trans. Geosci. Remote Sens. 54, 580–596 (2016)

  5. C Maas, J Schmalzl, Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar. Comput. Geosci. 58, 116–125 (2013)

  6. CG Windsor, L Capineri, P Falorni, A data pair-labeled generalized Hough transform for radar location of buried objects. IEEE Geosci. Remote Sens. Lett. 11, 124–127 (2014)

  7. EL Miller, M Kilmer, C Rappaport, A new shape-based method for object localization and characterization from scattered field data. IEEE Trans. Geosci. Remote Sens. 38, 1682–1696 (2000)

  8. PA Torrione, KD Morton, R Sakaguchi, LM Collins, Histograms of oriented gradients for landmine detection in ground penetrating radar data. IEEE Trans. Geosci. Remote Sens. 52, 1539–1550 (2014)

  9. L Itti, C Koch, E Niebur, A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998)

  10. J Harel, C Koch, P Perona, Graph-based visual saliency, in Proc. Neural Information Processing Systems (NIPS) (2006), pp. 545–552

  11. M-M Cheng, NJ Mitra, X Huang, PHS Torr, S-M Hu, Global contrast based salient region detection. IEEE Trans. Pattern Anal. Mach. Intell. 37, 569–582 (2015)

  12. A Tahmasbi, F Saki, SB Shokouhi, Classification of benign and malignant masses based on Zernike moments. Comput. Biol. Med. 41, 726–735 (2011)

  13. P Sharma, O Eiksund, Group based asymmetry—a fast saliency algorithm, in Advances in Visual Computing, Lecture Notes in Computer Science, vol. 9474, ed. by G Bebis, R Boyle, B Parvin, D Koracin, I Pavlidis, R Feris, T McGraw, M Elendt, R Kopper, E Ragan, Z Ye, G Weber (Springer, Switzerland, 2015)

  14. S Avidan, A Shamir, Seam carving for content-aware image resizing. ACM Trans. Graph. 26, 267–276 (2007)

  15. S Oliveira, A Neto, An improved genetic algorithms-based seam carving method. Int. J. Innov. Comput. Appl. 7, 236–242 (2016)

  16. R Zhao, W Ouyang, H Li, X Wang, Saliency detection by multi-context deep learning, in Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)

  17. Q Hou, M-M Cheng, X Hu, A Borji, Z Tu, PHS Torr, Deeply supervised salient object detection with short connections, in Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2017), pp. 3203–3212

  18. P Zhang, D Wang, H Lu, H Wang, X Ruan, Amulet: aggregating multi-level convolutional features for salient object detection, in Proc. IEEE Int. Conf. Comp. Vis. (2017)

  19. L Wang, H Lu, X Ruan, et al., Deep networks for saliency detection via local estimation and global search, in Proc. IEEE Int. Conf. Comp. Vis. (2015)


Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

About the authors

Da Yuan received his M.S. degree in automation from the China University of Mining and Technology, Beijing, China, in 1997. He received his Ph.D. degree from the School of Computer Science and Technology at Beijing Institute of Technology, China, in 2003. He was a visiting scholar of the College of Information Sciences and Technology at Pennsylvania State University, University Park, USA, in 2015–2016. He is a member of the IEEE and also a member of ACM.

Deming Fan is a lecturer at Qingdao University of Science and Technology (QUST). He received his Ph.D. in computer software and theory from Shandong University in 2015. His research interests are data visualization, distributed computing, and information systems.

Funding

This work was supported in part by a grant from the National Natural Science Foundation of China (No. 61373079).

Availability of data and materials

The data can be provided by the authors upon request.

Author information


Contributions

All authors take part in the discussion of the work described in this paper. DY wrote the first version of the paper. DF did a part of the experiments of the paper. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Da Yuan.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests. All authors have seen the manuscript and approved its submission. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Yuan, D., Fan, D. Salient map of hyperbolas in GPR images. J Image Video Proc. 2018, 65 (2018). https://doi.org/10.1186/s13640-018-0296-4
